Dataset schema: id (string, 3-9 characters); source (string, 1 distinct value); version (string, 1 distinct value); text (string, 1.54k-298k characters); added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25); created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00); metadata (dict).
258028654
pes2o/s2orc
v3-fos-license
Sauna bathing, renal function and chronic kidney disease: Cross‐sectional and longitudinal findings from the KIHD study It is uncertain if passive heat therapies are associated with adverse renal outcomes. We sought to evaluate the cross‐sectional and longitudinal associations of the frequency of sauna bathing with renal function measures and chronic kidney disease (CKD). | INTRODUCTION Passive heat therapy or 'thermal therapy' is characterized by exposure to a high environmental temperature for a brief period. A range of passive heat therapies exist, including repeated hot water immersion, water-perfused suits, microwave diathermy, infrared-ray sauna, Waon therapy and the Turkish bath; however, the most widely used and studied to date is the Finnish sauna. Sauna bathing, a tradition embedded in Finnish culture, has been used for thousands of years, mainly for leisure and relaxation. Sauna bathing is becoming a popular global lifestyle activity given its link with a myriad of health benefits. 1 In the seminal study by Laukkanen and colleagues, frequent sauna bathing (4-7 sessions/week) was demonstrated to be associated with a reduced risk of several adverse cardiovascular outcomes as well as all-cause mortality, compared with 1 sauna session/week. 2 Since then, a wealth of observational and interventional data suggests that frequent sauna bathing (i) reduces the risk of other adverse outcomes including stroke, 3,4 hypertension, 5 dementia, 6 and venous thromboembolism, 7 as well as lung diseases [8][9][10] and psychotic disorders; 11 (ii) reduces the severity of musculoskeletal disorders such as osteoarthritis, rheumatoid arthritis, and fibromyalgia, 12,13 COVID-19, 14 and lung conditions such as asthma, chronic bronchitis, and chronic obstructive pulmonary disease; [15][16][17] and (iii) extends the life span. 18,19 Anecdotal evidence has also suggested that passive heat therapies may be linked with adverse health effects. For instance, there have been reports linking infrared sauna use with an increased risk of cancer. 20 We have, however, recently shown that life-long Finnish sauna bathing is not associated with the risk of several types of cancer. 21 Given that sauna use causes sweating and changes in body fluid balance and has the potential to cause dehydration, there have been some isolated reports of a link to renal impairments including acute renal failure. 22 Poyhonen et al. 23 recently evaluated the effect of sauna bathing on lower urinary tract symptoms in a Finnish population-based cohort and found no clear evidence of an association. Whether frequent and regular sauna bathing causes long-term impairment in renal function or increases the risk of renal disease is uncertain. Using a population-based prospective cohort study comprising 2071 middle-aged and older men with normal kidney function at baseline, we sought to evaluate the cross-sectional and longitudinal associations of the frequency of sauna bathing with measures of renal function, including estimated glomerular filtration rate (GFR) and serum concentrations of creatinine, potassium (K) and sodium (Na), and the risk of chronic kidney disease. | Study population and assessment methods Reporting of the study conforms to broad EQUATOR guidelines 24 and follows the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines for reporting observational studies (Supplementary Material S1).
The study cohort for this analysis was part of the ongoing Finnish Kuopio Ischemic Heart Disease (KIHD) prospective cohort study. The KIHD study was initially based on a cohort of 2682 general population-based Caucasian men aged 42-61 years recruited in Eastern Finland between March 1984 and December 1989 (Cohort 1). 25 At the 11-year follow-up examination of Cohort 1, another cohort based on a sample of 1774 men and women aged 53 to 74 years was recruited (Cohort 2); their baseline examinations were carried out between March 1998 and December 2001. Cohort 1 has been used in most evaluations (including the current analysis) because of its larger sample size, the availability of measurements on a comprehensive list of lifestyle factors, blood-based biomarkers and outcomes, and its longer follow-up. 3 Self-administered lifestyle and health questionnaires were used to assess prevalent medical conditions and lifestyle characteristics such as smoking, alcohol consumption, physical activity, socioeconomic status (SES) and the weekly frequency of traditional Finnish sauna sessions. 3,5,25 Participants were categorized into three sauna bathing frequency groups (1, 2-3 and 4-7 sessions per week) as used in previous reports. 3,5,25 A history of coronary heart disease was defined as previous myocardial infarction, angina pectoris, the use of nitroglycerin for chest pain at least once a week, or chest pain. Alcohol consumption was assessed using the Nordic Alcohol Consumption Inventory. The assessment of SES involved the creation of a summary index comprising relevant indicators such as income, education, occupational prestige, material standard of living and housing conditions. The composite SES index ranged from 0 to 25, with higher values indicating lower SES. Resting blood pressure was measured on three occasions between 8:00 and 10:00 am following a supine rest of 5 min using a random-zero sphygmomanometer, and the mean of all available measurements was calculated. Estimated GFR and serum concentrations of creatinine, K and Na were measured at baseline, with repeated measurements of K and Na at 11 years. Repeat measurements were only made for some blood-based markers and for a random sample of study participants due to logistical reasons. Supplementary Material S2 provides further details on the number of participants who had complete data on the outcome measures evaluated in this analysis. For blood biomarker measurements, participants were required to fast overnight and abstain from drinking alcohol for at least 3 days and from smoking for at least 12 h before blood samples were taken between 8:00 and 10:00 am. Serum creatinine concentrations were measured by the colorimetric Jaffe method using a Konelab 20XT automatic analyser (Thermo Fisher Scientific). Estimated GFR was calculated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation 26 with the formula: eGFR = 141 × (creatinine in mg/dL / 0.9)^(−1.209) × 0.993^Age. Chronic kidney disease was defined as kidney damage (e.g. albuminuria) or estimated GFR lower than 60 mL/min per 1.73 m² (or both) for 3 months or longer, based on the National Kidney Foundation Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines. 27
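As an illustration of the simplified CKD-EPI formula quoted above, the following minimal Python sketch (not part of the original study code; names and example values are hypothetical) computes eGFR for a male participant from serum creatinine and age and applies the <60 mL/min per 1.73 m² threshold used in the CKD definition.

```python
def egfr_ckd_epi_male(creatinine_mg_dl: float, age_years: float) -> float:
    """Simplified CKD-EPI estimate for men, as quoted in the Methods:
    eGFR = 141 * (Scr/0.9)^-1.209 * 0.993^Age  (mL/min per 1.73 m^2)."""
    return 141.0 * (creatinine_mg_dl / 0.9) ** (-1.209) * 0.993 ** age_years

def below_ckd_threshold(egfr: float) -> bool:
    # KDOQI-based cut-off used in the paper: eGFR < 60 mL/min per 1.73 m^2
    return egfr < 60.0

# Hypothetical example: a 53-year-old man with serum creatinine of 1.0 mg/dL
egfr = egfr_ckd_epi_male(1.0, 53)
print(round(egfr, 1), below_ckd_threshold(egfr))
```

Note that the full CKD-EPI equation includes separate exponents for creatinine values below and above 0.9 mg/dL; the sketch follows only the simplified form stated in the Methods.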
All incident chronic kidney disease cases that occurred from study entry to 31 December 2014 were included. Chronic kidney disease outcomes were collected from the National Hospital Discharge Register data by computer linkage and a comprehensive review of hospital records. The diagnoses of chronic kidney disease were validated by two physicians. | Statistical analyses Baseline characteristics were presented as means (standard deviation, SD) or medians (interquartile range, IQR) for continuous variables and percentages for categorical variables. We also assessed univariable relationships between sauna bathing and baseline characteristics using ANOVA (for continuous variables) and chi-square tests (for categorical variables). The associations of frequency of sauna bathing with baseline values of estimated GFR, creatinine, K and Na and 11-year values of K and Na were examined using multiple regression analyses with robust standard errors. Cox proportional hazards regression models were used to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) for incident chronic kidney disease. For this analysis, we excluded men with (i) existing kidney disease at baseline (n = 56) and (ii) missing data on the exposure or potential confounders (n = 555), which left a total of 2071 men who had complete information on sauna bathing, relevant covariates, measures of renal function and chronic kidney disease events for the analysis (Figure 1: flow of participants in the study; CKD, chronic kidney disease). Given the high mortality rate in the KIHD cohort, we performed additional analyses to estimate the baseline cumulative subhazard of chronic kidney disease, considering mortality as a competing outcome. We used the competing-risks extension of the Cox proportional hazards model, as proposed by Fine and Gray. 28 In a subsidiary analysis, we also assessed the associations between sauna bathing and mortality risk. All analyses were conducted using Stata version 16 (StataCorp).
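For readers who want to reproduce the survival-modelling step, the sketch below shows how a comparable Cox proportional hazards model could be fitted in Python with the lifelines library; the study itself used Stata, and the file and column names here (follow-up time, CKD event indicator, sauna frequency indicators and covariates) are hypothetical placeholders rather than the actual KIHD variable names.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis dataset: one row per participant.
# 'followup_years' = time to CKD or censoring, 'ckd_event' = 1 if incident CKD.
df = pd.read_csv("kihd_analysis_sample.csv")  # placeholder file name

columns = [
    "followup_years", "ckd_event",
    "sauna_4_7_per_week",   # indicator vs. the reference of 1 session/week
    "sauna_2_3_per_week",
    "age", "bmi", "smoker", "systolic_bp", "total_cholesterol",
    "type2_diabetes", "hypertension", "chd", "alcohol", "ses", "physical_activity",
]

cph = CoxPHFitter()
cph.fit(df[columns], duration_col="followup_years", event_col="ckd_event")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```

A Fine-Gray competing-risks model is not part of lifelines; in practice that step would be run with a dedicated routine (e.g. Stata's stcrreg or R's cmprsk), so the sketch covers only the cause-specific Cox analysis.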
| RESULTS The mean (SD) age of study participants at baseline was 53 (5) years. Compared to men who had one sauna session per week, participants who had 4-7 sauna sessions per week were slightly younger, more likely to be physically active, and less likely to be smokers or to have comorbidities such as type 2 diabetes, coronary heart disease and hypertension (Table 1). In partial correlation analyses adjusted for age, frequency of sauna bathing (as a continuous variable) was weakly and inversely correlated with baseline estimated GFR (r = −.06, p = .006), weakly and positively correlated with creatinine (r = .06, p = .011) and potassium (r = .08, p = .001), with no evidence of a correlation with Na (r = .001, p = .95). The cross-sectional and longitudinal associations of frequency of sauna bathing with levels of estimated GFR, creatinine, K and Na are presented in Table 2. Compared to a single sauna session/week, cross-sectional analyses showed that 4-7 sauna sessions/week were not associated with significant changes in levels of estimated GFR, creatinine and Na, but there was a slight increase in K of .05 mmol/L (95% CI: .00 to .10; p = .033). In longitudinal analyses, however, there were no significant changes in levels of K (3.37 mmol/L; 95% CI: −1.34 to 8.08; p = .16) or Na (5.88 mmol/L; 95% CI: −4.74 to 16.51; p = .28). During a median (IQR) follow-up of 25.7 (18.4-27.9) years, 188 chronic kidney disease cases were recorded. In analysis adjusted for age, body mass index, smoking status, systolic blood pressure, total cholesterol, histories of type 2 diabetes, hypertension and coronary heart disease, alcohol consumption, SES, and physical activity, the HR (95% CI) for chronic kidney disease was .84 (.46-1.53; p = .56) comparing 4-7 sauna sessions/week with a single sauna session per week. Following further adjustment for estimated GFR, the HR (95% CI) was .84 (.46-1.53; p = .56; Figure 2). A total of 1302 mortality events occurred during follow-up. In analyses including mortality as a competing risk event, the HR (95% CI) for chronic kidney disease was .81 (.21-3.14; p = .76), comparing 4-7 sauna sessions/week with a single sauna session per week. In a multivariable model adjusted for age, body mass index, smoking status, systolic blood pressure, total cholesterol, histories of type 2 diabetes, hypertension and coronary heart disease, alcohol consumption, socioeconomic status, physical activity and estimated GFR, the HR (95% CI) for all-cause mortality was .78 (.62 to .98; p = .031), comparing 4-7 sauna sessions/week with a single sauna session per week (Table 3). | DISCUSSION Given the controversial and scant evidence on the link between passive heat therapies and adverse renal outcomes, we evaluated the cross-sectional and longitudinal associations of frequency of Finnish sauna bathing (the most commonly used and widely studied passive heat therapy 19) with measures of renal function and chronic kidney disease in a general population-based cohort of middle-aged and older Caucasian men. Baseline cross-sectional analysis showed that men who had 4-7 sauna sessions per week were younger, more likely to be physically active, and less likely to be smokers and have cardiometabolic comorbidities; these results likely reflect the fact that these individuals were young and had fewer comorbidities and were therefore more likely to be physically active and engage in leisure activities such as sauna bathing. Except for a .05 mmol/L increase in baseline levels of K with 4-7 sauna sessions/week, there were no significant changes in levels of the other renal markers at baseline or at the 11-year follow-up. Furthermore, we found no evidence of an association between frequency of sauna bathing and the future risk of chronic kidney disease. We confirmed previously reported associations between frequent sauna bathing and reduced risk of all-cause mortality. 2,29 We are unable to compare and contrast these findings with previous studies, as this is the first study to assess both the cross-sectional and longitudinal associations of Finnish sauna baths with measures of renal function and the future risk of chronic kidney disease. However, there are a few related studies that are worth discussing. In a before-and-after experimental study to determine the effects of a single relatively hot sauna session of 30 min duration on several blood-based parameters in 102 participants with at least one conventional cardiovascular risk factor, it was demonstrated that apart from an increase in plasma creatinine levels from 76 to 79 μmol/L, which remained within the normal range, levels of Na and K remained constant. 30 In a recent review of the effect of high temperature on kidney disease morbidity, the results suggested that high temperature was associated with an increased incidence of kidney stones, renal colic, acute kidney injury and hospital admissions for kidney disease. 31
The temperature exposures that were evaluated included atmospheric temperature, heat waves, and heat stress and strain, which are not controlled and have the potential to cause kidney injury due to excessive dehydration. A recent review reported on the rising death toll from chronic kidney disease of unknown origin in the Central American region, which has been attributed to heat exposure and dehydration. 32 Indeed, the sugarcane workers in that region who have been found to have high rates of chronic kidney disease of unknown origin typically work for several hours per day, heavily clothed, in temperatures that exceed 40°C. 32 Furthermore, exposure to heavy metals, agrochemicals, infectious agents, genetic factors and risk factors related to poverty, malnutrition, and other social determinants of health have been suggested to be contributory factors for chronic kidney disease of unknown origin. The current findings may seem unexpected given the emerging evidence of potential adverse effects of long-term exposure to high temperature on the kidney and the following plausible reasons: unlike infrared saunas, which operate at a lower temperature, heat the body gradually and provide a form of whole-body hyperthermia, the heat from a dry Finnish sauna rapidly increases skin temperature to about 40°C after about 10 min in the sauna, with core body temperature ranging between 37 and 38°C. 34 This has the potential to cause substantial dehydration and even rhabdomyolysis, leading to acute kidney injury, a major risk factor for chronic kidney disease. 35 In a case study of a patient referred to the intensive care unit with acute renal failure, the cause was attributed to rhabdomyolysis following severe dehydration due to frequent visits to the sauna. 22 However, further evaluation showed the patient had sickle cell trait; hence, the dehydration induced a vaso-occlusive crisis with resulting acute renal failure. 22 Recurrent exposure to heat stress and dehydration has been shown to induce chronic inflammation and tubular injury in mice. 36 Any renal impairments associated with sauna use are likely to be short-term and may be due to comorbidities, dehydration following excessive sauna use, and failure to rehydrate. It is well known that sauna users typically keep themselves hydrated throughout sauna bathing sessions. Overall, these new results are encouraging and provide an important public health message that regular sauna baths do not have adverse effects on renal function. Based on previous evidence and the direction of the effect estimate for the relationship between frequency of sauna bathing and chronic kidney disease risk, if there is any association at all, it would most likely be a protective one. The several benefits of sauna are mainly attributed to the body's responses and adaptations to heat stress. 19 There is a possibility that heat stress may not have any beneficial effect on renal function. However, this explanation is mainly speculative, and whether there are any beneficial effects of sauna exposure on renal function warrants further investigation.
In a comprehensive review of the role of hot baths for the treatment of chronic renal failure, the authors concluded that hot baths had the potential to clear uremic toxins by skin eccrine sweating and decrease the frequency of adverse events in patients with chronic renal failure. 37 In a rodent model of chronic kidney disease, mild systemic thermal therapy was effective at ameliorating renal dysfunction. 38 Both observational and interventional evidence suggests that sauna reduces blood pressure and the risk of hypertension, which is a major risk factor for chronic kidney disease. We have shown that life-long sauna use of 4-7 sessions/week reduces the risk of hypertension compared to a single sauna session/ week. 5 In a recent systematic review and meta-analysis, regular heat therapy compared with controls was shown to decrease both systolic and diastolic blood pressure by an average of 4 mmHg, with larger reductions in those with higher blood pressure at baseline. 39 Furthermore, we have recently demonstrated in a randomized controlled trial that 8 weeks of regular sauna bathing sessions combined with exercise produced a mean reduction in systolic blood pressure of 8 mmHg. 40 Sauna bathing has a good safety profile and few adverse effects when used prudently and it is generally tolerated by most people, including patients with stable cardiovascular disease and heart failure. 41,42 Typical hot and dry sauna sessions consist of short stays ranging from 5 to 20 min, although longer sauna bathing sessions may be used depending on the experience and comfort of the individual. 43 It is always essential to keep rehydrated during and immediately after a sauna session and avoid the use of alcohol, which increases the risk of dehydration and hypotension. Known contra-indications to sauna bathing include patients with unstable angina pectoris, recent myocardial infarction, uncontrolled hypertension, ischemic or decompensated heart failure, or severe aortic stenosis. 41,42,44 Obviously, individuals with kidney disease and those prone to dehydration need to exercise caution or avoid its use. Nevertheless, despite the lack of evidence of any associations, this is the first ever evaluation of the topic using a large-scale prospective cohort and needs to be confirmed in women, other populations and age groups. Apart from the novelty, other strengths of this study include the use of a prospective cohort design, large sample size, evaluation of several renal outcomes including their repeat measures after several years of follow-up as well as incident chronic kidney disease events, and the ability to account for several potential confounders. Important limitations which deserve consideration are the (i) lack of repeat measurements on variables such as estimated GFR and creatinine during follow-up, which was purely due to logistical reasons; (ii) absence of data on the precise cause of chronic kidney disease and classification of chronic kidney disease; (iii) potential for misclassification of sauna assessment because it was based on self-reports; (iv) sauna bathing habits may have changed during follow-up due to changes in habits or development of diseases over the long period of time. 
However, any changes may be minimal as sauna bathing habits are relatively consistent within the Finnish population based on previous studies and our reproducibility studies of sauna bathing habits (regression dilution ratio of .69); 21 (v) inability to generalize findings to women and those with comorbidities such as existing kidney disease; and (vi) lack of other relevant renal disease-related measures such as urea, albuminuria, and cystatin C as well as data on the progression of renal disease. In conclusion, frequent sauna bathing is not associated with impaired renal function or the future risk of chronic kidney disease. AUTHOR CONTRIBUTIONS S.K.K.: Study design, data analysis and interpretation, drafting manuscript, and revising manuscript content and approving final version of manuscript; J.K.: Study design and conduct, responsibility for the patients and data collection, and revising manuscript content and approving final version of manuscript; J.A.L.: Study design and conduct, responsibility for the patients and data collection, and revising manuscript content and approving final version of manuscript.
2023-04-09T06:16:41.893Z
2023-04-08T00:00:00.000
{ "year": 2023, "sha1": "fcfc48785e8e6fb3e46739911d300a765b703484", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/eci.14001", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "909c54025c63b5e9bdd124863d49995bbbc896c3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235247943
pes2o/s2orc
v3-fos-license
Integrating and validating urban simulation models Urban systems are intrinsically complex, involving different dimensions and scales, and consequently various approaches and scientific disciplines. In that context, urban simulation models have been coined as essential for the construction of evidence-based and integrated urban sciences. This review and position paper synthesises previous work focused on coupling and integrating urban models on the one hand, and exploring and validating such simulation models on the other hand. These research directions are complementary bases for a research program towards the development of integrated urban theories, with some application perspectives to sustainable territorial planning. Taking other dimensions into account is an important aspect, but the coupling between dimensions generally remains weak or retains a certain level of exogeneity. For example, [4] use an urban growth model to evaluate the local impact of climate change, but do not make associated aspects endogenous, such as energy price or the energy efficiency of the urban structure. Negative externalities such as congestion and pollution, directly linked to emissions, have a feedback, for example, on the location of activities. Similarly, numerous studies in ecology that establish the impact of anthropic habitat disturbances would particularly benefit from a coupling with urban growth models, for example for a better management of the interface between city and nature within the new urban regimes that are urban mega-regions [5]. At the macroscopic scale, several couplings between models of systems of cities and ecological models or models from environmental science can also be considered. For example, issues related to the production, storage and distribution of energy are endogenous to territorial systems, and the dynamics of infrastructures and associated entities can be integrated within models for systems of cities [6]. In that context, we propose a research program rooted in complexity science to foster the construction of bridges and interdisciplinary dialogues in urban science, in some sense the construction of integrated urban theories. The concept of integration can be understood through the complex systems roadmap [7]. It combines horizontal integration (fundamental transversal questions at the intersection of different types of complex systems) with vertical integration (multiple levels coupled within multi-scale models). The integration between knowledge domains in the sense of [8] (theoretical, empirical, modeling, data, methods and tools knowledge domains) is also an aspect of the expected integration, especially through the strong link between models, empirical data and methods, with their exploration and validation. We indeed directly inherit from the evolutionary theory of cities developed by Denise Pumain [9], in particular regarding the role of simulation models and model validation methods; the OpenMOLE platform and associated methods [10] were mainly developed in this frame. We thus assume that the construction of integrated models cannot be achieved independently from their systematic exploration and validation. The project is thus built on three complementary axes: (i) the coupling of heterogeneous urban models to integrate dimensions; (ii) the coupling of models at different scales for the vertical integration; and (iii) the development of new spatial simulation model validation methods, simultaneously applied to previous models.
The integration of urban models has a long history linked to the development of quantitative approaches to urban systems and operational urban models [11]. Land-use Transport Interaction models for example, which aim at simulating the impact of transportation infrastructure on different components of territories, generally include several sub-models such as transport models, cellular automatons for the evolution of land-use, or economic microsimulation models [12]. The framework proposed by [13] implements spatial interaction modeling between population and employment on large-scale urban system, providing a basis for coupling with other dimensions and other types of mod-els. More recently, the development of integrated models for smart cities has also put forward the relevance of model coupling in urban simulation. [14] couple for example models for land-use and energy consumption. [15] introduce a framework to integrate information from different components of the smart city. The question of integrating models and dimensions has also been the focus of other disciplines not directly related to urban issues. Ecology has for example studied the interaction between society, ecosystems and resources [16]. The link between economics and ecology is a crucial issue regarding sustainability [17]. Resource economics is a field in which model coupling can act as a medium for smoother interdisciplinary collaborations [18]. Within urban science, such research directions remain mostly to be explored, from diverse viewpoints including methodology and theory. We describe each axis of our research program in the rest of this paper, mostly by developing a synthesis of previous research at the basis of this project. We finally discuss how the construction of such integrated models could be applied to the design of sustainable territorial planning policies. 2 Horizontal integration: coupling urban models and dimensions Model coupling An horizontal integration between complementary dimensions of urban systems can be achieved through the coupling of different urban models, taking into account different dimensions. The example of urban dynamics and environmental issues developed above can be investigated regarding several issues related to climate change, such as flooding [19] or heat waves [20]. The integration of economy and geography is also a question that remains to be mostly explored [21]. More generally, the Sustainable Development Goals are declined into 17 goals and 169 targets which must be tackled in an integrative way [22]. Some previous work, building on the evolutionary theory of cities and at the inception of this project, focuses on the co-evolution of transportation networks and territories. It illustrates how such model coupling can be achieved. [23] couples a reaction-diffusion urban morphogenesis model with a multimodel of transportation network growth to yield a co-evolution model at the mesoscopic scale. At the scale of the system of cities, [24] integrates an abstract urban network evolution model into an urban dynamics model for the population of cities. These models effectively achieve a strong coupling, as they capture circular causality between the transportation network and cities [25]. A refinement of the network growth heuristic for the macro model, described by [26], introduces an intermediate scale of geographical description, and is a first step towards multi-scale models. 
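To make the notion of strong (feedback) coupling discussed in this section concrete, here is a deliberately stylised Python sketch, which is not any of the models cited above: a toy city-population model and a toy network-speed model update each other at every time step, so that accessibility drives growth (upward feedback) and flows reinforce links (downward feedback). All parameter values and update rules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities = 5
population = rng.uniform(50, 500, n_cities)   # thousands of inhabitants (toy values)
speed = np.ones((n_cities, n_cities))         # relative speed of links between cities

def interaction(pop, spd, decay=0.05):
    # Toy gravity-like interaction: larger, better-connected cities exchange more.
    return np.outer(pop, pop) * np.exp(-decay / spd)

for t in range(100):
    flows = interaction(population, speed)
    # Upward feedback: accessibility drives city growth (relative to the mean flow).
    growth = 1e-6 * (flows.sum(axis=1) - flows.mean())
    population = np.maximum(population + growth, 1.0)
    # Downward feedback: heavily used links are reinforced (the network co-evolves).
    speed = speed + 1e-7 * flows
    np.fill_diagonal(speed, 1.0)

print(population.round(1))
```

Removing either feedback turns the sketch into a weak (serial) coupling in the sense discussed below, which is the main design distinction this section develops.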
Current work recently presented as [27] and based on the methodology of [28] focuses on coupling models to build open transport models from the bottom-up using open source components: the MATSim transport modeling framework [29] is for example integrated with the Quant spatial interaction model [13] and a population microsimulation model [30]. The implementation of model coupling using the OpenMOLE workflow system [31] illustrates the joint development and exploration of models. Expected policy results of this work integrate an additional dimension, through the implementation of density indicators in public transport which can be used as a proxy for potential contamination in the epidemiological context of COVID-19. Literature mapping Model coupling between disciplines is made simpler when a reflexive positioning is available, for example using scientometrics and literature mapping techniques. Therefore, [32] provides a framework to jointly explore interdisciplinary citation and semantic networks, while [33] develop open tools to explore scientific corpuses. Literature mapping and systematic reviews can also be used to identify and compare existing models with great detail, as for the case of land-use transport interactions: [34] proposes a map of the interactions between disciplines involved in such modeling (from geography to urban economics, planning and physics) while [35] proceeds to a meta-analysis of model characteristics based on their scientific context. Such efforts can also be a basis towards more advanced model benchmarking experiments, as [36] in the case of several urban morphogenesis models in terms of produced urban morphologies; these benchmark being in turn a step towards model integration or multi-modeling approaches. Open issues Several research questions remain rather open in the case of model coupling for urban systems, although some methodological and technical contributions could be imported from other disciplines to tackle them. For instance, the definition of model coupling itself is not clear. A serial coupling of models (outputs of one becoming inputs of the other) can be qualified as weak or loose coupling [37], while the integration of feedback loops between model states at each time step, or the construction of a more general model including the two, can be seen as a strong coupling. Measures to quantify a degree of coupling also remain to be investigated. Beyond the nature of the coupling, technical aspects must be investigated. When models have different time steps or even different time scales, implementations can not directly communicate. The DEVS framework implies that coupled components follow a same formalism and can be seamlessly integrated [38]. Methodological frameworks have been introduced in physics to couple asynchronous models across different time scales [39]. An other dimension is wether underlying epistemologies and ontologies are intrinsically compatible. In other words, the semantic contents of models may in some cases be in contradiction. Some disciplines (or even different schools of thought in the same discipline) may introduce conflicting assumptions. Making ontologies explicit, either using established ontology systems [40], or a less strict approach to ontology in the case of social systems [41], is a way to mitigate such issues. A last difficulty worth mentioning is the computational complexity which may arise from model coupling. 
In the case of a strong integration in the sense of many feedback loop, this complexity may severely increase in comparison to the computational burden of models alone; notwithstanding the fact that coupling mechanically increases the number of parameters, making the integrating model difficult to explore due to the curse of dimensionality. Simplifying expectations on model outputs, working on patterns only as does the approach of Pattern Oriented Modeling [42], may reduce uncertainties due to a higher complexity. Specific methods, such as inverse problems methods [43], allow targeted exploration of the parameter space and a reduction of complexity. One must still notice that different types of complexity are related [44], and thus there is generally no straightforward way in terms of computation to take into account emergence in a model. Vertical integration: constructing multi-scale models The construction of multi-scale models is in itself a crucial aspect towards integrated models and theories. Following [45], systems of cities have reached multiple levels of articulation and interdependencies. This implies that their management and planning must necessarily be multi-scalar in order to take into account geographical particularities, while still ensuring a global consistence which is a condition for limited inequalities between territories. Moreover, considerable methodological work is required to elaborate coupling methods between scales, as for example the hybrid modeling coupling agent-based models with differential equations for an epidemiological model [46]. This allows determining the relevance of levels to be included for tackling a particular problem and avoid so-called ontological dead-ends (i.e. include appropriate levels of representation) [47]. Such methodological investigation are also necessary to understand the nature of retroactions between scales to be included, and when these are necessary or not. A crucial aspect mostly neglected in the literature is to effectively achieve a strong coupling between scales, in the sense of including both upward and downward feedbacks. Recent work paved the way towards such multi-scale strongly coupled models. [48] couples the urban dynamics model of [49] at the scale of the system of cities with the urban morphogenesis model of [50] at the scale of the urban area. Downward feedback is captured through policy parameters, as local development decisions generally react to the global integration of a given city, while upward feedback is taken into account by combining positive and negative externalities within the urban area to update its global performance and interaction parameters with other cities. To understand the interplay between urban form at the microscopic scale, transportation network development and developer agents decisions at the mesoscopic scale, [51] proposes a stylised agent-based model achieving a strong coupling between the microscopic and mesoscopic scale. These efforts remain at the stage of stylised and rather simple model, and the construction of data-driven and validated multi-scale models remains to be explored. Coupling between scales is also a way to couple between different urban dimensions, and both integration are not necessarily independent. Model validation methods Working to integrate models and theories necessarily implies a better understanding of the modeling process itself, but also of stylised dynamics produced by simulation models, in the sense of patterns [42]. 
In the case of geographical models for urban systems, such a knowledge has for example been developed within the Geodivercity European Research Council project lead by Denise Pumain [9], which included to the conception of new model validation methods. These methods were elaborated specifically to tackle thematic geographical questions, but were applied to many contexts thereafter, witnessing a beneficial relationship both from the geography and computer science viewpoints [52]. More generally in social sciences, modeling and simulation driven by new practices including model coupling, the use of high-performance computing for model exploration, and open science practices, are tightly linked to the production of a new type of integrated knowledge [53]. This is in some sense an aspect of the computational shift in contemporary science coined by [54]. Such model validation methods can be applied for example to assess the necessity and sufficiency of processes in a multi-modeling context, in the case of the Calibration Profile algorithm [55]. Genetic algorithms distributed on a computation grid using OpenMOLE are a highly efficient tool for model calibration [56]. The search for diversity in model outputs, obtained with the Pattern Space Exploration algorithm, can be used to explore the space of feasible configurations a model can produce, and possibly highlight unexpected behavior [57]. Therefore, the development of specific methods and tools to improve the extraction of knowledge from simulation models, and an epistemological investigation on modeling practices, are crucial within this project. Methods for the exploration, sensitivity analysis, and validation, are essential for robust application of models, but also yield a better complementarity with other types of approaches since they can establish when modeling is not relevant anymore. Spatial sensitivity analysis Recent work has focused on the development of validation methods in the specific case of spatial simulation models. More particularly, a methodology coined as Spatial Sensitivity Analysis has been introduced by [58] in order to understand the effect of spatial initial conditions on model outcomes. This provides a generic way to disentangle effects which are intrinsic to model dynamics from contingent effects due to the geography. The generation of synthetic spatial configuration for territorial systems is required, and [59] has investigated the generation of such data while controlling correlation patterns. [60] introduce and compare multiple generators for building configurations at the district scale. These methods are implemented into a single scala library, allowing an easier integration into the OpenMOLE software [61]. Further developments Different directions are worth exploring regarding the development of validation methods specific to spatial urban simulation models. Null models are rather rare in the field, in comparison to ecology for example which has used the Neutral Landscape Model for quite some time [62]. Bayesian calibration methods such as particle filters [63] are not widely used for urban models, although notable exceptions exists in domains close to engineering such as the study of traffic [64]. Finally, some work on the nature and definition of validation itself would be required. Different standards are expected depending on disciplines and types of models, since for example a computational model will always be more difficult to trust than a tractable analytical model. 
Methods developed will depend on such definition and expected standards of robustness. 5 Discussion: towards evidence-based multi-scalar sustainable territorial planning The strong complementarity of the three axis of the research program, elaborated on previous and current work described above, stems naturally from the intrinsic link between vertical and horizontal integrations, and from the integration of knowledge domains as model validation methods are elaborated jointly as models are constructed and explored. A fully open issue is the transfer of integrated models towards policy applications. Models have multiple functions, for which a precise typology has been introduced by [65]. Regarding the theoretical and quantitative geography [66] side of our proposal, in terms of theoretical contribution, models are instruments of knowledge production and contribute to the construction of theories. Transferring models towards policy applications requires to diversify their functions, and the end users. In a perspectivist view of knowledge production [67], models can not be dissociated from the question they are designed to answer and from the entity formulating it. Model application implies thus an upstream design, or redesign, of models such that planners, policy makers, stakeholders, and the public -any relevant user indeed -are blent in the modelling process from the beginning. Concrete research directions have also to be investigated for a possible application of models. A first research direction lies in the modeling of policy and governance itself. The management of sustainability implies according to [68] new governance structures. Their modeling is still not well integrated within spatially explicit models. [69] has proposed to integrate transportation network governance into a land-use transport model to yield a co-evolution model. [70] uses game-theory to simulate transportation network investments at the scale of the system of cities. One other crucial research axis towards the application of integrated models to policies is the construction of harmonised multi-dimensional databases. Such databases must (i) consider consistent geographical entities in time, in the sense of the dynamical ontology proposed by [71]; (ii) integrate multiple dimensions to allow coupling urban dynamics with socio-economic, environmental, sustainability and global change issues; (iii) be open, accessible, documented and reproducible. Technical aspects of data integration will also have to be carefully taken into account. In that regard, contributions from digital twins modeling (a model simulating a system in real time and having potential feedback on its functioning) applied to urban systems [72], or tools and methods to handle information elaborated in the context of smart cities [73], are important contributions. Finally, models must be constructed jointly with the precision of their application context to ensure their potential applicability. Obstacles to applicability can raise on diverse dimensions. Regarding the aforementioned model end users, many different paths can be taken to ensure a co-construction, such as companion modeling [74] in which local stakeholders participate to the construction and exploration of the model. Issues related to real-time model visualisation are also an important technical problem to take into account for model applications [75]. Scientific mediation initiatives are an other way to diffuse the results of modeling experiments into policy applications. 
Conclusion The diversity and complexity of urban systems require a plurality of viewpoints [76], which calls for integration between dimensions and scales. We have synthesised in this paper a current stream of research focused on three axes, namely (i) horizontal integration through model coupling; (ii) vertical integration through the construction of multi-scale urban models; and (iii) the development of model validation and exploration methods. These strongly complementary axes form the basis of a long-term research program, with a goal of application to sustainable territorial policies.
2021-05-31T01:16:02.655Z
2021-05-27T00:00:00.000
{ "year": 2021, "sha1": "c7943336c5428a3b45c705c103afbb084826018c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c7943336c5428a3b45c705c103afbb084826018c", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
13664990
pes2o/s2orc
v3-fos-license
Pain in Charcot-Marie-Tooth disease: an update ABSTRACT Charcot-Marie-Tooth (CMT) disease, the most common inherited peripheral neuropathy, has pain as one of its clinical features, yet it remains underdiagnosed and undertreated. This literature review assessed data related to pain from CMT to determine its prevalence, type and importance as a symptom, which, unlike other symptoms, is amenable to treatment. The research encompassed 2007 to 2017 and included five articles that addressed pain from CMT. All of the papers concurred that pain is frequently present in CMT patients, yet its classification remains undefined, as there has been no consensus in the literature about the mechanisms that cause it. Despite being considered a rare disease, Charcot-Marie-Tooth (CMT) is the most common inherited peripheral neuropathy, with an estimated prevalence between 1 in 2,500 1 and 1 in 1,214 2 , depending on ethnic background and the method used to diagnose it. It is a slowly-progressive motor and sensory disorder characterized by distal weakness of the lower limbs and atrophy, but it can also affect the upper limbs distally. Disability, sensory impairment, deformities and pain are clinical features of CMT, the severity of which varies among individuals. Classification of CMT can be based on clinical, neurophysiological and genetic assessments 1,2,3 . In more than 90% of the cases of CMT in which a molecular diagnosis was performed, mutations were found in four genes: PMP22, GJB1, MPZ, and MFN2. Molecular changes in these genes can produce different phenotypes. For example, duplication of the PMP22 gene is responsible for CMT1A, the predominant type of CMT, while its deletion is responsible for hereditary neuropathy with liability to pressure palsy (HNPP), which has been considered by some authors as a type of CMT based on neurophysiology 1,4,5 . Thus far, more than 70 gene mutations have been recognized as responsible for this inherited neuropathy in all of its forms. This disorder can cause demyelination, axonal loss, or both, depending on the type of mutation. It may be autosomal dominant, autosomal recessive, or exhibit an X-linked inheritance pattern 1,5 . There have been few studies that have analyzed pain in CMT patients, since it has not been recognized as a relevant symptom. The lack of an assessment of this specific manifestation directly affects the treatment of pain because it is not known if it is nociceptive or neuropathic 5,6 . Medication and therapies for treating CMT to reverse or slow its progression are not yet available. However, properly treating the symptom of pain is possible, as it is a concern for patients 1,4 . The objective of this study was to compile the results of the primary literature on CMT pain to assess the prevalence of this clinical expression and its classification according to the type of CMT and the type of pain. METHODS The Pubmed database was searched using the key words "pain" and "Charcot-Marie-Tooth disease". Studies that focused on the prevalence and type of CMT pain published between 2007 and 2017 were included (Figure). Only human studies were included, and those with 10 or fewer participants were not considered. Articles evaluating pain at only one site, such as painful feet or trigeminal neuralgia, were also excluded, as were studies that did not evaluate pain and its clinical characteristics exclusively.
As an exception, a study that evaluated the clinical features of HNPP, including pain, was included as an article of interest, as it was unusual to find pain as a clinical feature of this type of CMT 1,4,5 . DISCUSSION Five articles 5,7,8,9,10 were found that assessed pain from CMT, all of which used specific pain questionnaires and scales to measure pain and its features, such as gender, type, duration, intensity and frequency. Two of the studies focused on CMT1A 8,9 . The number of participants with assessed pain, among other symptoms, in each study was 50, 16, 49, 176 and 39. The most common scale used to diagnose pain was the DN4 (Douleur Neuropathique en 4 Questions), a pain questionnaire that uses specific questions to evaluate pain. This was used in three of the five articles 7,8,9 . The questionnaire includes three items about pain quality (burning, painful cold, and electric shock); four about associated symptoms (tingling, pins and needles, numbness, and itching); and physical tests for negative (hypoesthesia to touch, hypoesthesia to pinprick) and positive (brush-evoked pain) signs in areas that the patient identified as painful. Each positive response is given a score of 1, and each negative response is given a score of 0. The total score is calculated as the sum of the 10 items, with scores of > 4 out of 10 suggesting neuropathic pain 11 . The Visual Analog Scale (VAS) was used in two of the studies 7,9 . The VAS is a 100 mm-long line anchored by verbal descriptors, with 0 mm being no pain and 100 mm being the worst pain imaginable.
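As a small illustration of how the DN4 score described above is computed, the following Python sketch (illustrative only; item labels are paraphrased, not the validated wording of the instrument) sums the 10 binary items and applies the cut-off suggesting neuropathic pain. The preceding sentence states "> 4 out of 10", while the studies reviewed below report DN4 ≥ 4; the sketch uses the ≥ 4 convention and notes the discrepancy.

```python
from typing import Mapping

DN4_ITEMS = (
    "burning", "painful_cold", "electric_shocks",            # pain quality
    "tingling", "pins_and_needles", "numbness", "itching",   # associated symptoms
    "hypoesthesia_touch", "hypoesthesia_pinprick",           # examination: negative signs
    "brush_evoked_pain",                                      # examination: positive sign
)

def dn4_score(responses: Mapping[str, bool]) -> tuple[int, bool]:
    """Return (total score out of 10, flag suggesting neuropathic pain)."""
    total = sum(1 for item in DN4_ITEMS if responses.get(item, False))
    # Cut-off used in the reviewed studies: DN4 >= 4 (the abstract text says "> 4").
    return total, total >= 4

# Hypothetical patient with burning pain, tingling, numbness and reduced touch sensation.
example = {"burning": True, "tingling": True, "numbness": True, "hypoesthesia_touch": True}
print(dn4_score(example))  # (4, True) under the >= 4 convention
```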
A study carried out by Ribiere et al. 7 , evaluating the prevalence of chronic pain from CMT, assessed 50 patients with confirmed CMT diagnoses. The 27 women and 21 men (one woman and one man were excluded due to missing data) included in the study had a mean age of 47 years and a mean duration of 20 years of pain symptoms. The group comprised 76.9% CMT1A; 13.5% CMTX; 5.8% CMT2; and 3.8% CMT4. Pain evaluation included the VAS, medication need, the DN4 questionnaire, the Questionnaire Concis sur les Douleurs, the Neuropathic Pain Symptom Inventory, the Pain Questionnaire of Saint Antoine and clinical examination. Thirty-two of the 50 patients had had pain for at least 20 years, while 18 were pain free. Of all the patients evaluated, 66% had chronic pain. The pain scale analysis determined that 62.5% of those patients with pain had neuropathic pain, with a positive DN4 in 50% of the cases. The oldest patients with the longest disease duration had mechanical pain. The most common spontaneous pain descriptor was cramps or tearing. Patients with CMT1A were found to be less affected by pain. Almost two thirds (65.4%) of the patients reported some pain, with an average duration of 140 months. The mean score for the VAS was 5.5, and was > 4 in 79.4% of the cases. Analgesics were needed by 38.4% of the patients. Nearly two thirds (64.7%) of the patients presented with distal, peripheral and symmetric pain, and the feet were affected in 80% of cases. In conclusion, this study found pain to be a frequent occurrence for CMT patients, with characteristics of neuropathic pain. It should be noted that the Questionnaire Concis sur les Douleurs determined that the pain had a low impact on the quality of life of the patients. A study by Pazzaglia et al. 8 attempted to answer an unsolved question presented by Padua et al. 11 in a brief communication in 2008, and proposed to investigate the origin of pain. They investigated 16 patients affected by CMT1A of moderate severity (according to the Charcot-Marie-Tooth Neuropathy Score, CMTNS) and 14 control participants in order to characterize pain in its neurophysiological mechanisms and correlate it with psychophysical mechanisms. The CMT patients were selected from a larger group, based on their pain complaint. Assessment of the participants with the DN4, which evaluates neuropathic pain, revealed a mean score of 4.6, with 10 patients (62.5%) having DN4 ≥ 4 and six (37.5%) with DN4 < 4. This result indicated that pain was neuropathic in the study sample. This study also tested laser evoked potentials, which showed Aδ fiber impairment in this neuropathy involving the lower limbs. When the DN4 scores were compared with the laser evoked potential outcomes in CMT1A patients, higher pain scores on the questionnaire were consistent with a higher probability of neuropathic pain. The study found that patients with DN4 ≥ 4 had reduced laser evoked potential amplitudes (abnormal N2/P2 amplitude). Among the 62.5% of patients of the sample who had neuropathic pain, some also had pain in the same areas where non-neuropathic pain patients had pain (lower back, muscles, knee), suggesting the coexistence of both neuropathic and biomechanical pain 6,8,11 . A study performed by Laurà et al. 9 sought to determine the characteristics of pain, whether neuropathic or related to musculoskeletal deformities, and of sensory symptoms in 49 CMT patients. The study also determined whether pain and small fiber involvement changed over a period of two years. Pain was assessed using the specific pain scales of the DN4 and the McGill Pain Questionnaire and two pain rating scales: the 11-point Likert Scale and the VAS. Clinical impairment was evaluated using the CMTNS, while small fiber function was assessed using thermal thresholds. Pain was a complaint for 43 of the 49 patients (88%), with it being in the feet in 30 patients (61%). Other pain locations included the knees (20%), lower limbs distally (27%), lower limbs proximally (4%), hip (12%), back (20%) and hands (22%). Nineteen patients (39%) reported pain in only one location, while 11 patients (22%) had pain in two or three locations and two (4%) had pain in four different areas. The mean VAS score was 3.5. Nine patients (18%) had DN4 ≥ 4, suggesting neuropathic characteristics, eight of whom (89%) had the pain in their feet. Women had significantly higher pain scores than men on the Likert Scale and in some domains of the McGill Pain Questionnaire. The Fatigue Severity Scale score was significantly correlated with the VAS. At the 24-month evaluation, the VAS was 4.0 and the DN4 was 1.5, which was considered an indistinguishable change. A small drop in the Likert Score was considered important for indicating mild congruent reductions in some domains of the McGill Pain Questionnaire. One or more of the thermal thresholds were abnormal in 29 patients (59%). In patients with a longer duration of the disease, the Warm Detection Threshold and Cold Detection Threshold were elevated. During the period of the study, there were no relevant differences between patients with treated or untreated arms, and there was no correlation between thermal thresholds and DN4 ≥ 4. These findings suggest that there was no association between pain and disease severity or duration, and that only a small proportion of patients with CMT1A had neuropathic characteristics.
In this respect, it is more likely that pain had a multifactorial origin. Either neuropathic or musculoskeletal pain was present in 29 patients (56%) and, for 15 patients, pain was the main symptom. Biomechanical pain was found to be especially frequent in CMT1A. A paper published by Ramchandren et al. in 2014 10 reported on data collected on 176 children with CMT, evaluating whether the origin of their pain was neuropathic or biomechanical. The authors hypothesized that children, who experience fewer biomechanical changes than adults, experience less pain despite the severity of the neuropathy. The Faces Pain Scale, Child Health Questionnaire, CMTNS, Six-Minute Walk Test and the Validated Foot Posture Index were used to relate ankle/foot structural deformity and child-reported pain in pediatric CMT. The population of the study was split into two groups, one with children aged 2-7 years (parent's report) and another with children aged 8-18 years (self-report). The resulting average for the Faces Pain Scale was 2.0 ("hurts a little more"). The prevalence of pain was 80% according to the children's reports, and 85% according to the parents' reports. They found that children with CMT had mild to moderate pain, which compromised their quality of life. Scores reported by children and parents, respectively, were: physical quality of life -0.433 and -0.488; mental quality of life -0.293 and -0.110; CMTNS -0.102 and -0.051; and standardized Six-Minute Walk Test 0.11 and 0.019. Pain was not related to neuropathy severity as assessed by the CMTNS, which suggests that pain is not due to nerve damage alone. This paper hypothesized that the pain etiology would be due to structural changes in the feet, which was confirmed by univariate regression models. Mechanical pain in pediatric CMT cases could worsen into adulthood with the progression of joint damage; however, multivariate regression models found this not to be significant. Although pain is considered an uncommon symptom of HNPP, a study 5 performed in 2015 reviewed the clinical and neurophysiological features of 39 patients with HNPP, and found pain to be a complaint at disease onset for six patients (15%), with three others reporting pain at some point of the disease (approximately 8%). Out of the six patients with pain as an initial symptom, three presented with chronic painful sensorimotor polyneuropathy affecting the lower limbs, which was phenotypically indistinguishable from CMT1 1,5 . The data are detailed in the Table. Pain prevalence could not be obtained from the reviewed studies because the methods used to select the study samples varied. Two out of the five studies included only CMT1A patients. One study included only patients with referred pain, while another study only evaluated children. The specific questionnaires adopted for pain assessment also varied among the studies. Furthermore, Ribiere et al. 7 demonstrated that the DN4 has a specificity of only 81.2%, which could explain the discordance in the results of these studies. Only two studies correlated small fiber involvement and pain to explain pathophysiology. Pazzaglia et al. 8 correlated clinical scales of pain with small fiber neurophysiological data, while Laurà et al. 9 correlated pain scales with thermal thresholds.
9 correlated pain scales with thermal thresholds. Whereas laser evoked potentials were found to be significantly related to DN4 scores, with lower amplitudes for DN4 scores ≥ 4, the thermal thresholds showed no correlation between small fiber function and the pain scales. These two studies also reached contrasting conclusions about the origin of pain.

Laurà et al. 9 and Ramchandren et al. 10 agreed that pain was not correlated with the severity of CMT. An important bias of the Ramchandren et al. 10 study, however, was the difference in cognitive development between the reporting children and their parents. Since CMT is a hereditary disease, parents affected by CMT could report higher scores for their children.

In conclusion, there are few studies in the literature about pain in CMT disease. In the last 10 years, only five studies assessed pain using specific pain questionnaires. All five studies were in agreement that pain occurs frequently and has a strong impact on CMT patients. Among the studies there were more assessments of CMT1A because it is the most common type of CMT. Only three papers mentioned pain classification, and there was no consensus among them on whether it was caused by biomechanical or neuropathic mechanisms. Two papers concluded that pain was more likely to be of neuropathic origin, while one of them found multifactorial pathways. There was no consensus on whether the frequency of pain varied among the specific types of CMT. More research is needed to elucidate how to deal with CMT pain, to improve patient pain management and quality of life, and to direct the treatment of pain from CMT, which is currently general and common to other neuropathies 5 .

Figure. Flowchart of research articles.
Comparison of two commercial carbapenemase gene confirmatory assays in multiresistant Enterobacteriaceae and Acinetobacter baumannii-complex Multidrug-resistant Gram-negative bacilli (MDR-GNB) producing carbapenemases are increasing at an alarming speed. Rapid confirmation of the carbapenemase type will be an important diagnostic step in clinical microbiology laboratories, not only to reduce the risk of transmission but also for optimising antibiotic therapy in the future. We compared the diagnostic reliability of two commercially available molecular assays (Check-Direct CPE vs. the AID line probe assay) for detection and typing of carbapenemase genes in 80 well-characterized isolates of MDR-GNB. The respective strains were isolated from various clinical specimens at our clinical microbiology laboratory. The reference standard included confirmation of carbapenemase production at the molecular level at the German National Reference Laboratory for Multidrug-resistant Gram-negative bacteria (Ruhr-University Bochum, Germany). 53 Enterobacteriaceae and 27 members of the A. baumannii-complex were used in this study. The tested assays appeared highly reliable for confirming carbapenemase-producing Enterobacteriaceae (CPE), with respective sensitivities of 97.7%, but are currently unsuitable for analysis of members of the A. baumannii-complex. Both assays are easy to perform and are rapid tools for confirmation and typing of the most common carbapenemase genes in Enterobacteriaceae. Implementation should be possible for any clinical microbiology laboratory, with Check-Direct CPE being easier to handle and having lower technological requirements. Treatment of patients suffering from infections with carbapenemase-producing MDR-GNB is difficult, as only a few alternative treatment options remain [7]. As polymyxins are among the last-line options, their consumption, particularly of colistin, almost doubled in Europe between 2009 and 2013 [2]. Carbapenemases are capable of hydrolyzing most β-lactams, including carbapenems, and most of these enzymes are not inhibited by clinically available β-lactamase inhibitors [1,3]. Avibactam, a non-β-lactam β-lactamase inhibitor, recently became commercially available and inhibits class A and, partially, class D carbapenemases. It is not active against class B carbapenemases [8]. Infections due to carbapenemase producers show high mortality [9]. Thus, rapid confirmation of the carbapenemase type will be an important diagnostic step in clinical microbiology laboratories, not only to reduce the risk of transmission but also for optimising antibiotic therapy for these difficult-to-treat bacteria in the future. A wide array of newer techniques for the detection of carbapenemases has become available recently [4]. The goal of this study was to compare the diagnostic reliability of two commercially available molecular assays for detection and typing of carbapenemase genes in cultured MDR-GNB isolates. AID line probe assay The AID carbapenemase line probe assay RDB2290 (GenID, Straßberg, Germany) allows the genetic detection of various carbapenemases by reverse hybridization (Table 2). The assay is able to discriminate between 13 carbapenemases. Check-Direct CPE Check-Direct CPE (Check-Points Health, Wageningen, The Netherlands) detects the presence of the carbapenemases KPC, NDM, VIM and OXA-48 by multiplex real-time PCR (Table 2). The assay discriminates between KPC, OXA-48 and NDM/VIM; it does not report the specific variant of the carbapenemase gene families.
As NDM and VIM are detected by the same fluorochrome, it is not possible to differentiate between these two types of carbapenemases. According to the user manual, crude DNA was extracted from bacterial cell suspensions by heating at 98 °C for 10 minutes followed by centrifugation at 21,255 × g for 2 minutes. PCR reactions were performed using a Rotor-Gene Q/Corbett Rotor-Gene 6000 (Qiagen, Hilden, Germany) according to the proposed Rotor-Gene Q program. Results were interpreted per the Check-Points instructions with the Rotor-Gene Q Software (Qiagen, Hilden, Germany). The total time to perform Check-Direct CPE from cultured isolate to result is approximately 150 minutes. Diagnostic reliability Among Enterobacteriaceae, OXA-48 was the most frequently detected carbapenemase. Among the A. baumannii-complex, OXA-23 was the predominant carbapenemase. Both assays missed IMP-14, which is in the proposed spectrum of the AID line probe assay but not of Check-Direct CPE (Table 3). The ID and phenotype of the respective isolate (Enterobacter cloacae, MIC imipenem 1 μg/ml, MIC meropenem 4 μg/ml) were confirmed after analysing the organism from fresh stock cultures. Both assays correctly confirmed 42/43 (97.7%) of the carbapenemase-producing Enterobacteriaceae strains. One of twenty-five (4%) carbapenemase-positive members of the A. baumannii-complex was detected by the AID line probe assay, while Check-Direct CPE detected none (Table 3). Both tests showed no false-positive reactions, neither in carbapenemase-negative isolates nor in MDR-GNB producing carbapenemases different from those detected by the reference analysis. Discussion The main goal of the current study was to find a reliable method for confirmation and typing of carbapenemases in phenotypically suspicious cultured isolates, in order to provide guidance for antibiotic therapy and to rapidly institute infection control measures for these difficult-to-treat bacteria. Thus, we compared the commercially available AID line probe assay with the Check-Direct CPE multiplex PCR in well-characterized cultured MDR-GNB isolates. Our results show that the tested assays are highly reliable for confirming CPE, as the respective sensitivities were 97.7%. Both assays failed to detect IMP-14, which is in the proposed spectrum of the AID line probe assay but not of Check-Direct CPE. In line with current epidemiology, both tests appeared unsuitable for analysis of members of the A. baumannii-complex. It appears highly desirable to augment these assays with the capability of detecting OXA-23, as this enzyme may also emerge in Escherichia coli [10,11]. However, no false-positive reactions occurred, neither in carbapenemase-negative isolates nor in carbapenemase-producing MDR-GNB. As the number of carbapenemase-negative A. baumannii-complex strains was very low, the A. baumannii strains with carbapenemases not targeted by the assays serve as negative specificity controls for this group of bacteria. Table 3. Tested carbapenemase genes and results for the gold standard, the AID line probe assay and Check-Direct CPE. Assuming that both assays are able to detect their predicted carbapenemase targets, we calculated expected sensitivities using the nationwide German data of the NRL Bochum for the year 2015 [6]. Expected sensitivities were high for both assays (data not shown), being higher for Check-Direct CPE as it covers a broader spectrum of the most prevalent carbapenemases (OXA-48-like, KPC, NDM and VIM, respectively; Table 2), but this awaits further study.
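To make the diagnostic-reliability arithmetic explicit, the short sketch below (not part of the original analysis) computes the observed sensitivity from the confirmed counts reported above, and an expected sensitivity as the prevalence-weighted coverage of an assay's target spectrum. Only the 42/43 figure is taken from the text; the prevalence counts and function names are illustrative placeholders, not the NRL Bochum 2015 data.

```python
# Minimal sketch (not from the original study): observed sensitivity from counts,
# and an "expected" sensitivity as the prevalence-weighted fraction of
# carbapenemase types covered by an assay's target spectrum.

def sensitivity(true_pos: int, all_pos: int) -> float:
    """Fraction of confirmed carbapenemase producers detected by the assay."""
    return true_pos / all_pos

def expected_sensitivity(prevalence: dict, targets: set) -> float:
    """Prevalence-weighted coverage of the assay's target spectrum."""
    total = sum(prevalence.values())
    covered = sum(n for gene, n in prevalence.items() if gene in targets)
    return covered / total

# Observed: 42 of 43 carbapenemase-producing Enterobacteriaceae detected (97.7%).
print(f"observed sensitivity: {sensitivity(42, 43):.1%}")

# Hypothetical prevalence counts (placeholders) and a Check-Direct-CPE-like spectrum.
prevalence = {"OXA-48": 200, "KPC": 80, "NDM": 60, "VIM": 90, "IMP": 10}
check_direct_targets = {"OXA-48", "KPC", "NDM", "VIM"}
print(f"expected sensitivity: {expected_sensitivity(prevalence, check_direct_targets):.1%}")
```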
A recently published study also showed excellent diagnostic reliability of the AID line probe assay for cultured isolates. In contrast, direct molecular testing of urine samples revealed problems with specificity and positive predictive value, as positive AID line probe assay results could not be confirmed by culture methods [12]. The same holds for the use of Check-Direct CPE in primary specimens (e.g. perirectal swabs). In a published clinical study, a corresponding carbapenemase-producing organism could be identified by culture methods in only 16% of the Check-Direct CPE-positive perirectal swabs. Thus, the positive predictive value (PPV) was only 21% [13]. A significant false-positive rate and low PPV are also described in a similar study evaluating Check-Direct CPE for direct analysis of rectal swabs [14]. In general, false-positive results seem to appear in many studies performing molecular carbapenemase testing on stool samples or rectal swabs [13,15,16]. Interpretation of such molecular results can be difficult as their epidemiological and clinical relevance is not known [15]. The major strength of our study was the utilization of well-characterized MDR-GNB isolates that were analysed by the German NRL for Multidrug-resistant Gram-negative bacteria as the reference standard. There, confirmation of carbapenemase production was performed at the molecular level. By intentionally using cultured isolates, our aim was to focus not primarily on the hygiene and contact-isolation issue of using the assays for screening of primary specimens, but on confirmation of carbapenemase production in phenotypically suspicious isolates from relevant clinical samples (e.g. blood culture, urine culture), i.e. isolates with reduced carbapenem susceptibility and/or a positive modified Hodge test. We therefore suggest that a combination of culture plus susceptibility testing and molecular methods is the ideal workflow. A limitation of this study is that the number of tested carbapenemase-negative isolates of the A. baumannii-complex was low, as the prevalence of carbapenemase production (92.6%) was high in these MDR-GNB (Table 1). In summary, both assays are easy to perform and are rapid tools ideally allowing same-day confirmation and typing of the most common carbapenemase genes in Enterobacteriaceae. Implementation should be possible for any clinical microbiology laboratory. Check-Direct CPE seems to be easier to handle with regard to extraction method, technician time, turnaround time and technological needs. Among CPE, Check-Direct CPE showed comparable sensitivity. Check-Direct CPE is not capable of distinguishing between the metallo-β-lactamases (class B) NDM and VIM because both are detected by the same fluorochrome. With regard to current clinical needs, differentiation between classes A, B and D will be sufficient, as the newly introduced β-lactamase inhibitor avibactam shows no activity against class B and only partial activity against class D [8]. Currently, we would prefer implementing Check-Direct CPE for confirmation of carbapenemase-producing Enterobacteriaceae in our laboratory. Continuous surveillance of carbapenemase epidemiology is required, as the sensitivity of both assays may change unfavourably if some of the rarer carbapenemase types become more prevalent.
Synthesis of Modified Graphite with High Crystalline Na-LTO by Simple Doping Method. Graphite doped with Na-Li4Ti5O12 (Na-LTO) for lithium-ion batteries with stable rate capability was prepared in a short time by simple grinding followed by a short calcination. Highly crystalline Na-LTO improved the Li-ion diffusion kinetics and stability. Graphite/Na-LTO (G-NaLTO) was therefore compared with pure graphite in terms of XRD pattern and electrochemical performance. XRD analysis showed no significant difference between the patterns of G-NaLTO and commercial graphite. In terms of electrochemical performance, G-NaLTO doped with 1 wt% Na-LTO presented a high initial charge capacity of 356.69 mAh g−1 at 0.1C, and stable discharge capacities at 0.2C and 1C of 264.64 mAh g−1 and 220.65 mAh g−1, respectively, after 15 cycles. G-NaLTO with 3 wt% Na-LTO showed decreased electrochemical performance, so it can be concluded that the addition of 1 wt% LTO by the doping method is optimal. Introduction Due to its high thermal stability, high chemical stability, low cost and high theoretical capacity, graphite is the most commonly used of the different anode materials [1]. Graphite has the limitation that it cannot work at high currents, which is caused by dendrites growing on the graphite surface during high-current operation. Dendrite formation lowers the charge/discharge output and raises the operating temperature, which creates further problems such as the safety of battery use. In addition, dendrite formation causes a thick solid electrolyte interphase (SEI) layer to form, which decreases battery efficiency [2]. Many efforts have been made to solve this issue, including surface modification to avoid cracking or expansion [3], decreasing the electrode thickness to reduce the limitation of lithium diffusion [4], mesoporous structures leading to high-rate capability of Li insertion/extraction [5], and carbon nanotubes (CNTs) as ion transport channels [6]. Among various metal oxides, Li4Ti5O12 (LTO) is a promising candidate anode material to ensure battery protection by avoiding lithium dendrite problems [7]. LTO has a high operating voltage (1.55 V vs. Li/Li+) at which dendrites are less readily formed, owing to the property of LTO that, unlike graphite, no structural change occurs during the charge/discharge cycle [8]. The addition of LTO to graphite in our previous research provided a highest capacity of 130.66 mAh g−1 [9]. Further research was carried out to enhance the electrochemical performance by modifying the LTO. Enhancing the crystallinity can improve the electrochemical performance of LTO material [10]. An alternative way to enhance a material's crystallinity is to add a small amount of salt during synthesis, a so-called salt-assisted method. The addition of salt (NaCl, KCl) has successfully assisted the crystal formation of materials such as ZnO, ZnFe2O4 and strontium ferrite [11]. Thus, highly crystalline LTO obtained by NaCl addition was expected to be sufficient for modifying the graphite anode material to overcome the SEI and rate-capability problems. In this study, we used a simple solid-state reaction to dope highly crystalline Na-LTO onto graphite particles for the negative electrode of lithium-ion batteries. X-ray diffraction (XRD) was used to study the crystalline structures of the Na-LTO and G-NaLTO composite materials.
Furthermore, the electrochemical performance was studied to understand the effect of adding highly crystalline Na-LTO to graphitic carbon. Preparation of Graphite/Na-LTO The Li4Ti5O12 (LTO) powders were prepared via a solid-state reaction with the addition of 4 wt% NaCl [12]. In order to investigate the effect of Na-LTO addition, two different amounts of Na-LTO (1 and 3 wt%) were added to commercial graphite (MTI, USA) by grinding the Na-LTO with the graphite. The compositions were named G-NaLTO1 for graphite with 1 wt% Na-LTO and G-NaLTO3 for graphite with 3 wt% Na-LTO. The mixtures were then calcined at 600 °C under N2 for 1 hour to obtain graphite/Na-LTO. Material Characterization The pure graphite was analyzed by N2 adsorption and desorption to obtain the Brunauer-Emmett-Teller (BET) surface area, shown in Table 1. The crystal structures of the LTO and G-NaLTO powders were examined by X-ray diffraction (XRD; D2 Phaser, Bruker, Germany) using Cu-Kα radiation (λ = 1.5418 Å) over a 2θ range of 10°-80°. Electrochemical Measurements The electrochemical characteristics were studied using a full cell (18650-type cylindrical battery) with LiFePO4 (LFP) as the cathode (GELON LIB CO., China). G-NaLTO powder was combined with conductive carbon (acetylene black/AB, MTI, Richmond, CA, USA) and a water-based binder (CMC and SBR) at a weight ratio of 80:10:2:8 in water to form a slurry that was coated on both sides of copper foil. The cylindrical cells were assembled in a Li-ion battery manufacturing plant located in Surakarta. 1 M LiPF6 dissolved in EC:DMC (3:7) was used as the electrolyte. The cells were tested on a NEWARE battery analyzer; battery performance analysis was performed over the voltage range of 2.5-3.6 V with an initial charge/discharge cycle current of 0.1C (37.5 mA/g).

Figure 2. Initial discharge-charge curves of (A) G-NaLTO1 and (B) G-NaLTO3 at 0.1C; (C) specific charge-discharge rate performance (0.1C, 0.2C, 1C) of G-NaLTO1; (D) cycle performance of G-NaLTO1.

Figures 2A and 2B show the initial charge/discharge curves of samples G-NaLTO1 and G-NaLTO3, respectively, at a 0.1C rate. The initial specific charge capacities at the 0.1C rate of graphite doped with 1 and 3 wt% were 356.69 mAh g−1 and 339.89 mAh g−1, respectively. G-NaLTO1 shows the highest initial charge capacity, which is close to the theoretical value (372 mAh g−1). The G-NaLTO3 sample exhibits the lowest initial charge capacity, which means that the addition of 3 wt% Na-LTO alters the electrochemical properties of the graphite. The result indicates that an LTO content higher than 1 wt% decreases the reversible capacity of the composite. Figure 2C shows an analysis of sample G-NaLTO1 when the charge/discharge rate is increased to 0.2C and 1C, which produces specific discharge capacities of 258.75 mAh g−1 and 220.65 mAh g−1, respectively. These results can be compared with previous studies listed in Table 2, showing that graphite/Na-LTO made by the doping method gives results similar to other methods. Figure 2D shows the charge (lithium-ion insertion) behaviour as a function of cycle life. With increasing charge/discharge rate, the capacity decreased only slowly; even at the 1C rate, the specific charge capacity was still 220.65 mAh g−1, which can occur because the LTO properties resist the lithium dendrite problems that arise during high-rate operation [15]. Thus, the Na-LTO doping gave an initial capacity close to that reported in other research.
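As a quick illustration of the rate arithmetic used in the testing described above (a sketch under stated assumptions, not code from the paper), the snippet below converts a C-rate into a specific current using the theoretical capacity of graphite and computes the capacity retained at a higher rate; the function names are hypothetical.

```python
# Minimal sketch (illustrative, not from the paper): C-rate to specific current,
# and capacity retention between two rates.

GRAPHITE_THEORETICAL_CAPACITY = 372.0  # mAh/g

def specific_current(c_rate: float, capacity_mAh_g: float = GRAPHITE_THEORETICAL_CAPACITY) -> float:
    """Specific current (mA/g) corresponding to a given C-rate."""
    return c_rate * capacity_mAh_g

def retention(capacity_at_rate: float, reference_capacity: float) -> float:
    """Fraction of the reference capacity retained at a higher rate."""
    return capacity_at_rate / reference_capacity

# 0.1C for graphite corresponds to ~37 mA/g, consistent with the 37.5 mA/g quoted in the text.
print(f"0.1C = {specific_current(0.1):.1f} mA/g")

# Retention of G-NaLTO1 at 1C (220.65 mAh/g) relative to its 0.1C charge capacity (356.69 mAh/g).
print(f"1C retention vs 0.1C: {retention(220.65, 356.69):.1%}")
```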
These results show that the doping method can be applied as a potential route to enhance electrochemical performance. Conclusion Graphite/Na-LTO was successfully prepared using a doping process followed by calcination. The properties of the graphite were strengthened by the Na-LTO. Unlike in previous research, the graphite's rate capability was enhanced by the LTO doping. The improvement in cycling performance is limited because the properties of the graphite material were affected by the calcination process used to obtain the graphite/Na-LTO. Nevertheless, graphite/Na-LTO could be a potential high-rate anode material with stable rate capability for lithium-ion batteries.
SlGT11 controls floral organ patterning and floral determinacy in tomato Background Flower development directly affects fruit production in tomato. Although the framework mediated by the ABC genes has been established in Arabidopsis, the spatiotemporal precision of floral development in tomato has not been well examined. Results Here, we analyzed a novel tomato stamenless like flower (slf) mutant in which the development of stamens and carpels is disturbed, with carpelloid structures formed in the third whorl and ectopic formation of floral and shoot apical meristems in the fourth whorl. Using bulked segregant analysis (BSA), we assigned the causal mutation to the gene Solanum lycopersicum GT11 (SlGT11), which encodes a transcription factor belonging to the Trihelix gene family. SlGT11 is expressed in the early stages of the flower, and the expression becomes more specific to the primordium positions corresponding to stamens and carpels in later stages of floral development. Further RNAi silencing of SlGT11 verifies the defective phenotypes of the slf mutant. The carpelloid stamens in the slf mutant indicate that SlGT11 is required for B-function activity in the third whorl. The failed termination of the floral meristem and the occurrence of floral reversion in slf indicate that part of the C-function requires SlGT11 activity in the fourth whorl. Furthermore, we find that at higher temperature the defects of the slf mutant are substantially enhanced, with petals transformed into sepals, all stamens absent, and an increased frequency of ectopic shoot/floral meristems in the fourth whorl, indicating that SlGT11 functions in the development of the three inner floral whorls. Consistent with the observed phenotypes, B-, C- and an E-type MADS-box gene were found to be partly down-regulated in slf mutants. Conclusions Together with the spatiotemporal expression pattern, we suggest that SlGT11 functions in floral organ patterning and the maintenance of floral determinacy in tomato. Supplementary Information The online version contains supplementary material available at 10.1186/s12870-020-02760-2. Background Flowers of angiosperms are the reproductive organs and play an important role in reproduction. A typical eudicot flower, such as that of Arabidopsis or tomato, consists of four different organs arranged in four whorls at the tip of the floral shoot. Based on genetic studies in model plants including Arabidopsis [1-3], Antirrhinum majus [1] and Petunia hybrida [4], an elegant model involving the ABCDE-class genes has been proposed to explain organ patterning in the flower. In Arabidopsis, the A-class genes APETALA1 (AP1) and APETALA2 (AP2) are involved in the development of sepals and petals. The B-class genes APETALA3 (AP3) and PISTILLATA (PI) can form protein complexes with the C-class gene AGAMOUS (AG) and the E-class genes SEPALLATAs (SEPs) to promote stamen development. Carpel formation is regulated by both the C-class AG genes and the E-class SEP genes. Interference with the ABCDE genes leads to confusion in the identity of the floral organs [3,5]. Compared with Arabidopsis, the tomato genome has more homologous ABCDE genes. Tomato possesses four B-class homologous genes: two DEF-lineage genes, Tomato APETALA3 (TAP3) and Tomato MADS-box 6 (TM6), and two GLO genes, Solanum lycopersicum GLOBOSA (TPIB, SlGLO1) and Tomato PISTILLATA (TPI, SlGLO2) [6,7].
In tomato, there are two C-class homologous genes (TOMATO AGAMOUS 1 (TAG1) and TOMATO AGAMOUS-LIKE 1 (TAGL1)) [8,9] and six E-class homologous genes (Tomato MADS-box 5 (TM5), TM29, JOINTLESS-2 (J2), ENHANCER-of-JOINTLESS2 (EJ2), RIPENING INHIBITOR (RIN) and Solyc04g005320) [10]. Although the ABC genes clearly have similar functions in Arabidopsis and tomato, they may also have separate functions independent of each other. The development of stamens and carpels has drawn particular attention, as the regulation of these two floral parts is important for crop breeding. In Arabidopsis, mutations in the B-class genes APETALA3 (AP3) or PISTILLATA (PI) promote the conversion of petals into sepals and stamens into carpels [1]. Similarly, the tomato stamenless mutant was identified to carry mutations in the B-class gene TAP3 [11,12]. In this mutant, stamens are completely transformed into carpels, which are fused with the carpels in the fourth whorl to form a unique gynoecium, and petals are partially transformed into sepals [11,12]. Silencing of another B-class gene, TM6, also produced similar phenotypes [6,13]. Mutants of the B-class genes DEFICIENS (def) and GLOBOSA (glo) triggered the same homeotic transformations in Antirrhinum [14]. The rice stamenless 1 (sl1) mutant exhibits homeotic conversions of lodicules and stamens to palea/lemma-like organs and carpels, which resembles the mutant of the B-class gene SPW1 [15]. Another type of homeotic transformation has also been reported in these species. In Arabidopsis, the mutant of the C-class gene AG showed that the stamens and carpels were transformed into petals and sepals [1]. These phenotypes are very similar to those of the tag1 mutant in tomato [8] and the agamous-like flower (aglf) mutant in Medicago truncatula [16]. The initiation and termination of the floral meristem are precisely controlled to ensure the successful development of flowers, with a set of transcription factors coordinated spatiotemporally [17]. In Arabidopsis, the activity of the floral meristem (FM) is maintained through the WUSCHEL-CLAVATA (WUS-CLV) signaling pathway, which plays a key role in maintaining undifferentiated cell populations in the meristem [18,19]. In addition, the ag-2 mutant in Arabidopsis produces flowers without stamens and carpels and forms indeterminate flowers with reiterating sepals and petals, suggesting that AG is very important for floral meristem determinacy [20]. The LEAFY (LFY) gene together with WUSCHEL (WUS) activates AGAMOUS (AG) at floral stage 3 [21,22], while in later stages of floral development (starting from stage 6) the induction of KNUCKLES (KNU) by AG is crucial for the timely termination of the FM [23]. In addition, a high AG level indirectly regulates WUS activity to ensure the proper termination of meristematic activity in the FM, a process involving a set of regulators including the trithorax-group protein ULTRAPETALA1 (ULT1), the bZIP transcription factor PERIANTHIA (PAN), and other factors such as REBELOTE (RBL) and SQUINT (SQN) [24-28]. The tetramerization of SEPALLATA3 (SEP3) and AG is essential for the AG function that activates CRABS CLAW (CRC) and KNU during floral determinacy. These regulatory networks also interplay with plant hormones during floral development: AUXIN RESPONSE FACTOR 3 (ARF3) is transcriptionally regulated by AG and APETALA2 (AP2) in developing flowers and represses cytokinin activity to inhibit WUS expression [29].
The mechanisms regulating floral development seem to be conserved among species. It has been found that KNU interacts with MINI ZINC FINGER (MIF) to regulate WUS expression and this mechanism is conservative between Arabidopsis and tomato [30]. Floral reversion is an unusual process in which the committed floral development is reverted back to vegetative growth, resulting in outgrowth of leaves or inflorescence structures from the first flower [31]. This phenomenon is usually related to varied environmental conditions, such as temperature and photoperiod [31]. For example, the floral reversion was observed in lfy-6 and ag-1 mutants of Arabidopsis grown in short-day conditions [32]. Floral reversion was also observed in natural allopolyploid Arabidopsis suecica, in which abnormal expression of floral genes, including AGL24, APELATA1 (AP1), SHORT VEGETATIVE STAGE (SVP) and SUPPRESSOR OF CONSTANS1 (SOC1) were detected [33,34]. Unlike in Arabidopsis, LEAFY (LFY), TERMINAL FLOWER1 (TFL1) and AG in Impatiens balsamina seemed not to be involved in terminal flowering and floral determinacy [35]. In Petunia hybrida, cosuppression of FLORAL BINDING PROTEIN1 (FBP2), a homolog of Arabidopsis SEPALLATA-like gene, led to new inflorescences growing from axils of carpels [36]. In tomato, the down-regulated of TM29, a homolog of Arabidopsis SEPALLATA gene, also resulted in ectopic leafy stems and flowers formed in fruits [37]. In this study, we identified a tomato recessive mutant with the mutation in the gene Solyc03g006900 which is named Solanum lycopersicum GT1 (SlGT11) based on the previous nomenclature and encodes a transcription factor belonging to Trihelix gene family [38]. Recently, a mutant of SlGT11 ortholog in Medicago truncatula has been reported to control the C-function gene expression and it was named AGAMOUS-LIKE FLOWER (AGLF) [16]. In aglf mutant, the stamens and carpels in the inner whorl are replaced by petals and sepals respectively, resembling the floral phenotype of ag-1 mutant in Arabidopsis [16,20,39]. We found that the loss-function of SlGT11 resulted in sepaloid petal at high temperature in the second whorl, carplloid stamen in the third whorl, and ectopic formation of stem-, leaf-and flower-like structures in the fourth whorl. Together with the result that B, C and an E-type MADS-box genes were downregulated in slf mutants, we concluded that SlGT11 has important functions in the development of the three inner floral whorls. Furthermore, spatiotemporal expression analysis showed that SlGT11 was expressed throughout the flower in the early stages and its expression became more specific to the primordium position of stamens and carpels in later stages of the floral development. Together our results suggest that SlGT11 functions in floral organ patterning and maintenance of floral determinacy in tomato. Identification of the slf mutant In order to study the mechanism regulating floral organ identity in tomato, we screened tomato EMS-mutant library [40] and identified a mutant (TOMJPG2637-1) with identity defects in stamens and pistils, while the identity and number of sepals and petals were unchanged compared with wild type (WT) (Fig. 1a-c, e, g). In the mutant, stamens showed severely carpelloid in the third whorl (Fig. 1a). From the longitudinal sections, we verified that the pistil-like structures were formed in the third whorl (Fig. 1a, b). 
Only a few flowers (~22.95%) have stamen-like structure remaining in the third whorl of the mutant based on the transverse sections of flowers (Fig. 1b, d). The carpelloid stamens of the mutant developed into irregular fruits with more locules and the vestigial stamen structures were later formed radial cracks on the fruit surface (Fig. 1c). As this phenotype is similar to previously reported stamenless mutant [11,12], we thus named the mutant stamenless like flower (slf). Besides, our histological analyses showed that new shoot/floral meristems instead of carpel primordium formed in the fourth whorl of the mutant ( Fig. 1. g). The ectopic shoot/floral meristem in the slf mutant produced ectopic aberrant foliage and flowers in the fourth whorl, indicating that the normal floral determinacy was lost ( Fig. 1. g). As a result, the carpelloid stamen in slf mutant developed into the parthenocarpic fruit without seeds (Fig. 1c, d, Fig. S1c, d). Interestingly, in the carpallike structure, the ovule development seemed normal in slf mutant. Therefore, we attempted to use WT pollen grains for the cross-pollination in slf mutant, and only small amount of seeds were obtained. This result may be due to the abnormal pistil-like structures that hindered the pollen-ovule process (Fig. S1e, g). All these results indicated that the slf mutant was almost sterile. SlGT11 gene encodes a regulator involved in floral organ identity To identify the causal gene in slf mutant, we first conducted a genetic analysis by crossing the mutant to the WT. In the F2 segregated population, we found 92 progenies resembling the WT and 28 progenies with slf phenotypes, which were close to the 3:1 Mendelian segregation rule, indicating that the phenotypes in slf were caused by a recessive mutation at a single locus. Through bulked segregant analysis sequencing (BSA-Seq), we identified a signal peak on chromosome 3 (Fig. 2a). Further SNP analysis assigned the causal mutation to the gene Solyc03g006900 which encodes a nucleuslocalized Trihelix transcription factor named SlGT11 previously [38], containing a putative GT1 DNAbinding domain and a PKc kinase domain (Fig. S3). The A to T substitution at the 2195 bp position identified forms the termination codon TAG and mRNA level of the SlGT11 gene in the mutant was significantly decreased (Fig. 2b, f). Further sequencing analyses verified that the base substitution occurred in all 28 F2 progenies with slf phenotypes (Fig. 2c). Subcellular localization in tobacco leaves showed that SlGT11-GFP was located in the nucleus, consistent with the presence of DNA-binding domain (Fig. 2d). To further verify the SlGT11 function, we transformed WT tomato with an RNA interference (RNAi) plasmid targeting the C-terminus of the SlGT11. The phenotypes of 5 independent transgenic RNAi lines were consistent with the slf mutant. qRT-PCR verified the significant reduced expression of SlGT11 in RNAi lines (Fig. 2f). The observed phenotypes including carpelloid stamens in the third whorl and new meristem formation from the fourth whorl in RNAi lines (#1 and #6) indicated that SlGT11 was the gene causing the developmental defects of stamens and carpels in slf (Fig. 2e). In addition, abnormal fruits were also found in transgenic lines #1 and #6, indicating that SlGT11 plays an important role in regulating the floral identity and floral meristem termination (Fig. 2g). 
Phylogenetic analysis showed that all the SlGT11 homologous genes in the Solanaceae were grouped into the same cluster, while the Arabidopsis homologous gene At5g51800 belonged to a less related cluster (Fig. S2). Consistent with this phylogenetic distance, mutation of At5g51800 does not cause a similar floral phenotype, indicating that the function of this gene is not completely conserved among different species. Comparative analysis of the amino acid sequences of SlGT11 in the Solanaceae showed that the N-terminal GT1 domain and the C-terminal PKc kinase domain are highly conserved (Fig. S3).

(Fig. 1 legend, continued: e Quantification of sepals, petals, stamens and carpels in WT and slf; the vestigial stamen-like structure in slf is counted as a stamen and the carpelloid stamen as a carpel; data represent means ± SD (n = 296). f Schematic diagram of the floral organs in WT and slf; the WT flower consists of four whorls: sepal (green), petal (orange), stamen (yellow and blue solid line) and carpel (purple), while the slf flower consists of sepals (green) in the 1st whorl, petals (orange) in the 2nd whorl, carpelloid stamens (purple) with or without stunted stamens (yellow and blue dotted line) in the 3rd whorl, and ectopic shoot/floral meristem (red circle) in the 4th whorl. g Ectopic shoot/floral meristems emerge in the flower and ectopic shoots produce flowers and leaf-like structures in the fruit; longitudinal sections of slf flowers show the ectopic meristem stained with toluidine blue at floral developmental stage 14. es/fm: ectopic shoot/floral meristem; es: ectopic stem; ef: ectopic flower; el: ectopic leaf; ec: ectopic carpel. Scale bars: (a, b, g) 1 mm.)

Spatial and temporal expression pattern of SlGT11 in tomato To examine the expression pattern of SlGT11, we performed qRT-PCR on different tomato tissues including roots, hypocotyls, cotyledons, stems, leaves, flowers and fruits. The expression of SlGT11 was highly enriched in the flowers (Fig. 3a). RNA was then extracted from different parts of flowers at anthesis for qRT-PCR, and we found that SlGT11 was predominantly expressed in stamens, indicating that SlGT11 could be important for stamen development (Fig. 3b). Furthermore, we analyzed the temporal expression trend of SlGT11 during floral development. qRT-PCR showed that SlGT11 expression was time-specific, with high expression levels from 6 days to 2 days before flowering (stages 12-18) (Fig. 3c). We also constructed a GUS reporter driven by the SlGT11 promoter and transformed it into WT tomato (Fig. 3d). GUS staining showed that SlGT11 was expressed throughout the flower at early stages and that the expression became more specific to the stamen and carpel at later stages (Fig. 3e). The expression pattern of SlGT11 in the inner two whorls of the flower implies that it is probably involved in the regulation of tomato stamen and carpel development. Stamen defects occur at the early stage To investigate how SlGT11 affects stamens and carpels at different floral developmental stages [41], we used scanning electron microscopy (SEM) to visualize floral development in WT, slf and SlGT11 RNAi line 6 (Fig. 4a-o). The early stages (before stage 3, when sepal primordia and petal primordia were initiated) of floral development in slf and SlGT11 RNAi line 6 appeared to be similar to those of the WT (Fig. 4a-f).

(Fig. 2 legend, continued: f qRT-PCR analysis of SlGT11 expression in WT, slf, and SlGT11 RNAi lines #1, #2 and #6; SlACTIN was used as the internal control; error bars represent the SD from three biological replicates. g Fruits of WT and SlGT11 RNAi lines #1 and #6 showing radial cracks and ectopic stems. #1: SlGT11-RNAi-1; #2: SlGT11-RNAi-2; #6: SlGT11-RNAi-6. Scale bars: e 1 mm; g 1 cm.)
At stage 5, the differences between the WT and slf or SlGT11 RNAi line 6 became more prominent. In the WT, six stamen primordia and one carpel primordium with four locules were initiated in the third and fourth whorls, respectively (Fig. 4g). In contrast, the third- and fourth-whorl floral organ primordia in slf and SlGT11 RNAi line 6 were initiated in a disordered manner (Fig. 4h, j). The defects in floral organ identity became more severe in slf and SlGT11 RNAi line 6 at stages 6 and 9 (Fig. 4j, m). In the mutant, most stamens were transformed into carpel-like structures, and some ectopic meristems were produced in the central area of the flower (Fig. 4k, l, n and o). Combined with the spatiotemporal expression, we concluded that SlGT11 plays an essential role in the early development of the floral organs. Expression of floral development genes in the slf mutant Since the defects of stamens and carpels occurred at the early stages, we compared the expression of the BCE genes previously reported to affect stamen and carpel identity in floral buds at stages 1-6 between WT and slf [41]. Consistent with the phenotypes, the BCE genes showed distinct expression patterns between the WT and the slf mutant. The class B genes TAP3, TPI and TPIB, the class C genes TAG1 and TAGL1 and the class E gene TM29 were all significantly down-regulated in slf. However, the expression levels of the B-class gene TM6 and the E-class gene TM5 were not significantly affected in slf during floral development (Fig. 5a). We next analyzed the expression levels of some regulators involved in floral meristem identity and floral meristem termination. Since ectopic floral meristems repeatedly emerged in the later stages of floral development (Fig. 1g), we chose a set of genes essential for floral development, including SlWUS, SlKNU, SlCLV3, SlCLV1, SlCLV2, FALSIFLORA (FA), SlULT1-like and SlRBL-like, for transcriptional analysis at a later floral stage (stage 20). FA and SlWUS were up-regulated in slf, while SlKNU, SlCLV3, SlCLV1 and SlULT1-like appeared to be down-regulated in slf flowers (Fig. 5b, c). These results were consistent with the floral meristem termination defects in the slf mutant. High temperature inhibits the expression of SlGT11 and TM29 During cultivation in the greenhouse, where the temperature in summer was higher than standard, we found that the phenotypes of slf became more severe, with stamens hardly visible and a dramatic increase in defective flowers with ectopic floral meristems. As temperature was previously reported to play a role in floral development [42], we tested whether SlGT11 function is also affected by temperature. To that end, we germinated WT and slf mutant seeds and grew the plants at 25 °C for 4 weeks, and then grew them in a heated incubator (37 °C daytime/28 °C at night) for 20 days. The flowers produced by the slf mutant grown at the higher temperature had more carpelloid structures, and no stamen-like structures were visible in the third whorl (Fig. 6d). In addition, the petals seemed to partially acquire sepal identity, forming greenish petals with sepal structure (Fig. 6f). Furthermore, we found that shoot/floral meristems were produced at the center of the mature flowers (Fig. 6d).
Despite the carpelloid stamens and ectopic shoot/ floral meristems were also produced in slf flowers at lower temperature (25°C daytime/22°C night), their occurrence frequency became significantly higher at higher temperature. To further dissect the influence of the higher temperature on SlGT11, we performed qRT-PCR to analyze potential transcriptional change. The floral buds at early stages of WT and slf mutant grown at higher temperature were collected for RNA extraction and qRT-PCR. Our results showed that SlGT11 expression was inhibited by the higher temperature (Fig. 6g). We then examined the expression levels of BCE genes at 3 h, 7 h and 24 h after the high temperature treatment. Our results showed that E class gene TM29 was further significantly down-regulated by high temperature in slf mutant (Fig. 6h, i). Yeast-one-hybrid assay failed to detect the direct binding of SlGT11 to the TM29 promoter region. All results (Fig. 5, Fig. 6 g-i) indicate that SlGT11 indirectly activates TM29 transcription, and the high temperature further represses the transcription of SlGT11 and TM29 both in WT and slf. Discussion The classic ABC model was previously established in the model plants Arabidopsis and Antirrhinum majus [1]. In A-class mutants, flowers have carpels-stamens-stamenscarpels (from the outermost to the innermost whorl), while B-class mutants have sepals-sepals-carpels-carpels flowers and C-class mutants have sepals-petals-petals-sepals flowers. E-class mutants have the flower with all organs resembling sepals [1,43]. In tomato, the functions of B/C/E class genes seem to be more complicated than those in Arabidopsis and Antirrhinum. There are four homologous class B genes in tomato: TAP3, TM6, TPI and TPIB. Despite similar phenotypes were observed when TAP3 and TPIB were mutated, mutations in TM6 or TPI only resulted in the transformation of stamens into carpels without affecting petals and carpels [6,7,13,44,45]. Two tomato C class genes TAG1 and TAGL1 have redundant and divergent functions in the floral development [9]. The transgenic plants expressing TAG1 antisense RNA showed homeotic conversion of third whorl stamens into petaloid organs and the emergence of indeterminate floral meristems in the fourth whorl [8]. However, TAGL1 mainly specifies stamens and carpels development in flowers and controls fruit development and ripening [46]. E class gene TM29 expression was down-regulated by the co-suppression produced aberrant flowers with morphogenetic alterations in the organs of the inner three whorls. In these three whorls, petals and stamens were partially conversed to a sepalloid structure, and ectopic shoots with leaves and secondary flowers emerged from the fruit [37]. In this study, we identified the recessive mutant of SlGT11 gene whose phenotypes resemble some previously characterized mutants with dysfunctional B/C/E class genes. The carpelloid stamen in slf mutants indicates that SlGT11 is required for the function of B type genes in the third whorl. The failed termination of floral meristem and the occurrence of floral reversion in slf indicate that the function of C type genes partially requires SlGT11 activity in the fourth whorl. Furthermore, we found that the defects in slf were substantially enhanced at higher temperature, with petals transformed into sepals, and SlGT11 is expressed extensively in the early stage of floral development, but its expression gradually became concentrated in the stamens and the vascular bundles of the ovary. 
We speculate that SlGT11 plays the roles in the initiation of each whorl of floral organs, especially the initiation of stamens. It has been reported that the expression of BCE genes which affects stamens development overlaps with SlGT11 expression domain. Class B genes including TAP3, TM6 and TPI were all previously shown to have expression in the stamen position [6,7]. The C gene TAG1 is also mainly expressed in the stamens and carpels during the floral development in tomato [8]. Compared with the class B and C genes, the expression of TM29 at early stage was more extensive, including vascular bundles. But during the later stage of the floral development, TM29 expression is mainly concentrated in stamens and carpels, which overlaps with the expression domain of SlGT11 [37]. In addition, the SlGT11 gene is also expresses in vascular bundles, which could be the origin of the abnormal stem. Compared with the WT, the expression of TAP3, TPI, TPIB and TM29 in slf was all down-regulated, suggesting SlGT11 could regulate the BCE gene expression to promote the stamens development. Therefore, SlGT11 could be one of regulators in addition to the ABC model genes that regulate floral organ development. Floral development is strictly controlled by complex regulatory networks to ensure the successful reproduction of plants. Under natural conditions, the transition from vegetative to reproductive growth is irreversible so the correct tissue patterning can be achieved during the floral development [31]. slf mutant has a reversion of floral development to vegetative organs, indicating that meristem termination in flowers becomes defective. As evidenced by a number of previously characterized mutants including TAP3, TPIB and TM6, this reversion phenotype is not necessarily associated with the defects of fused stamens and carpels though [6,7]. In Arabidopsis, the carpels of a weak allele ag-4 are partially transformed into sepals while the stamens and carpels of a strong allele ag-6 are completely transformed into petals and sepals [5,20,47]. Despite new flowers are formed in the whorl four of ag-2 flowers, no leaves can be seen, indicating that this defect only represents the aberrant termination of flower meristem [1]. But grown in short day, ag-1 mutants displayed the reversion of floral meristem back to vegetative development in Arabidopsis. Similar phenotype was also reported in Arabidopsis mutant lfy-6 [32]. The direct homologous gene of AG in tomato is TAG1. In line with the conserved function of AG, tag1 showed homeotic conversion of the third whorl stamens into petaloid organs and the replacement of fourth whorl carpels with indeterminate floral meristems, which are similar to ag-2 [8]. Transgenic plants expressing TM29 antisense RNA produced ectopic shoots with partially developed leaves and secondary flowers in the fruit [37]. Here we identified that the inhibition of flower meristem was terminated, and the floral development was reversed into vegetative organs in slf mutants, indicating that SlGT11 activity is required for the function of these previously reported genes. In Arabidopsis, stem cell maintenance is lost at the stage 6 of floral development, which makes the flower determinate [48]. In WT flowers, WUS mRNA is undetectable at this stage but in ag mutants, WUS is continuously expressed in the FM, resulting in the disrupted FM termination [48]. In slf, SlWUS was not repressed in the later stages of tomato floral development. 
The direct or indirect repressors of WUS, such as SlKNU, TAG1, SlCLVs, SlULT1, were all down-regulated in slf. However, the floral meristem identity gene FA was upregulated in slf, which was consistent with the defect of floral meristem termination in slf. Interestingly, AGLF, the homologous gene of SlGT11 in Medicago truncatula, seemed to function only as the C type gene [16]. Despite the similar expression pattern of AGLF and SlGT11 in the inner two whorls, the different developmental defects of stamens and carpels in respective mutants indicate that this gene likely has the different functions in Medicago and tomato. The knockout mutant of the SlGT11 homologous gene in Arabidopsis (At5g51800) showed no defectives in floral organ identity [16,39], indicating that SlGT11 function may have evolutionarily diverged in different species. In summary, we found that the loss-function of SlGT11 resulted in sepaloid petal at high temperatures in the second whorl, carplloid stamen in the third whorl, and ectopic formation of stem-, leaf-and flower-like structures in the fourth whorl. These phenotypes indicate that SlGT11 has complex functions that are similar to B/C/E-class genes in floral organ specification. Spatiotemporal expression analysis showed that SlGT11 was expressed throughout the early stages of the floral development, and SlGT11 expression became more specific to the primordium of stamens and carpels in later stages. Together, our results suggest that SlGT11 functions in floral organ patterning and maintenance of floral determinacy in tomato. Conclusions The results obtained through this study indicate that the disruption of a novel tomato Trihelix gene SlGT11 results in the loss of floral organ identity and the reversion of the flower to vegetative development during the floral development. Together with the spatiotemporal expression pattern of SlGT11, our results suggest that SlGT11 is essential for the reproductive organ development, but the function of SlGT11 homologous genes is evolutionarily diverged in Arabidopsis and Medicago. The presented study provides new insight into the function of Trihelix gene SlGT11 in the floral development. Plant material and growth conditions All plants used in this study were in tomato (Solanum lycopersicum L.) accession Micro-Tom background. Seeds of stamenless like flower (slf) mutant (TOMJPG2637-1) were obtained from the Tomato Mutants Archive (http:// tomatoma.nbrp.jp/). Since the slf mutant is partial sterility, seeds from heterozygous plants were used for generating homozygous individuals. Seeds were pre-germinated on moistened filter paper at 28°C in complete darkness. Plants were grown under long-day conditions (16-h light/8-h dark) in a greenhouse with a relative humidity of 60%. Daytime and nighttime temperatures were 26°C and 22°C, respectively. All plants received regular watering and fertilizer treatments. Phenotype characterization For analyzing the defects of floral organs, we counted the floral organ number of at least 20 flowers on each examined tomato plant. For analyzing the number of stamen in each flower, we collected flowers at anthesis for the quantification. For analyzing the ectopic floral meristem, we removed sepals and petals of examined flowers before anthesis. Immediately after the dissection, morphology of ectopic floral meristem was imaged using Nikon SMZ18 stereomicroscope. 
Histological analysis To determine morphological and developmental characteristics, fresh floral organs were dissected and examined by Nikon SMZ18 stereomicroscope. The toluidine blue staining was performed as previously described [49]. Briefly the flower buds from the six-week-old WT and slf plants were harvested and treated in FAA (3.7% formaldehyde, 5% acetic acid, 50% ethanol) under vacuum conditions for 30 min. These samples were dehydrated in a graded ethanol and tertbutanol series, and then embedded in a paraffin solution containing 50% tertbutanol for 4 h. The infiltrated samples were placed in pure paraffin (Sigma-Aldrich) for over-night. Sections (10 μm thick) were cut with a Leica RM2255 microtome, and the paraffin was further removed by the dewaxing agent. The tissue parts were washed in pure water carefully, and then stained for 1 min in 0.25% toluidine blue-O (Sigma-Aldrich, U.S.A). All micrographs were photographed with a Nikon SMZ18 stereomicroscope. Scanning electron microscopy Scanning electron microscopy (SEM) analysis of the flowers at the early stage were conducted as following: the sepals and petals were carefully separated from fresh floral organs under stereomicroscope; these samples were observed using a TM3030 PLUS scanning electronic microscopy under a quanta 250 FEG scanning electron microscope at an accelerating voltage of 5 kV. Subcellular localization The 35Spro:SlGT11-GFP and the corresponding empty vector pHellsgate 8 (35Spro:GFP) were transformed into agrobacterium GV3101 and injected into Nicotiana benthamiana leaves. The plants with infiltrated leaves were incubated at 25°C in dark for 24 h and then exposed to light for 12 h before GFP signals were observed by confocal microscopy (LSM 880, Germany Carl Zeiss). The primers were listed in the S1 Table. Bulked segregant analysis (BSA) Bulked segregant analysis was performed according to Chang et al. [50]. The slf homozygous plants were used as female parent and crossed to the WT. The F1 plants were then selfed to generate F2 mapping population. For BSA-seq, we extracted genomic DNA from 28 slf mutant individuals and 30 WT individuals in the F 2 mapping population using CTAB method [51]. All DNA quality and concentration were checked before being mixed to construct two bulks (slf bulk and WT bulk). The slf bulk and WT bulk were sequenced to a depth of 28× and 30× coverage of the tomato genome by HiseqXten-PE150 (Novogene). Trimmed sequences are mapped onto the tomato reference genome (Heinz 1706 cultivar) and mutation variants are filtered. Analysis of the allelic variant frequencies in the pools led to the identification of the causal mutation with 100% frequency in the slf bulk. The genes with the expected allelic frequency of 1 were further examined and we conducted transgenic verification for the identified candidate gene. The candidate genes were cloned and sequenced to verify the mutations. The primers were listed in the S1 Table. SlGT11 RNAi gene constructs To generate the SlGT11 RNA interference transgenic plants, we selected a 253 bp-sequence near the 3′ end of the PKc kinase-like domain by Sol Genomics Network vigs tool (https://vigs.solgenomics.net/). The amplified cDNA was cloned into entry vector PDONR221, then further recombined into the binary vector pK7GWIWG2 (II). The binary plasmids were transformed into agrobacterium C58 strain for generating SlGT11 RNAi transgenic lines. Plant genetic transformation Agrobacterium-mediated transformations of tomato were performed according to Brooks et al. 
[52]. In brief, cotyledon segments from 6- to 8-d-old seedlings were precultured for 1 d, followed by inoculation with agrobacterium strain C58 containing the RNAi construct. After 2 d of cocultivation, the cotyledon segments were transferred to a selective regeneration medium containing kanamycin. Subcultures were performed every 15 days until the seedlings produced three true leaves. These seedlings were then transferred to a selective rooting medium containing kanamycin. Only well-rooted plants were transferred to the greenhouse. Phylogenetic and sequence analyses Sequences of SlGT11 family members in tomato and other species were obtained from the NCBI database (https://blast.ncbi.nlm.nih.gov/Blast.cgi) and aligned using the ClustalW function in MEGA5. Phylogenetic trees for the proteins were constructed with 1000 bootstrap replicates using the maximum likelihood method in MEGA5. Imaging, microscopy and GUS staining To produce the pSlGT11::GUS construct, 3 kb of genomic sequence comprising the SlGT11 upstream region was cloned into the PGWB432 binary vector by the infusion cloning method. Whole floral primordia and flowers at different stages were stained with GUS solution at 37 °C for 10 h after fixation in cold 90% acetone for 20 min. These samples were dehydrated in a solution of ethanol:glacial acetic acid (4:1 by volume) for about 6 h [53]. The samples were then cleared and washed briefly in decreasing concentrations of ethanol (40% ethanol for 15 min, 20% ethanol for 15 min, 10% ethanol for 15 min). The dehydrated samples were embedded in 5% agar and sectioned with a vibrating slicer (Leica, Germany). The expression pattern of SlGT11 was observed with a Nikon SMZ18 stereomicroscope [54]. Quantitative real-time PCR analysis For quantitative real-time PCR (qRT-PCR), four-week-old tomato plants grown under similar conditions were used for tissue collection, including roots, hypocotyls, cotyledons, stems, leaves, flowers, fruits, different floral organs and flowers at different developmental stages [41]. Total RNA was isolated using the Eastep Super Total RNA Extraction Kit (Promega, Shanghai). Subsequently, the HiScript II 1st Strand cDNA Synthesis Kit (+gDNA wiper, Vazyme) was used to synthesize the first-strand cDNA. The ChamQ Universal SYBR qPCR Master Mix kit (Vazyme) was used to perform qRT-PCR reactions in a 7300 Real-Time PCR System (CFX Connect, BIO-RAD). An actin gene was used as the constitutive control. Relative gene expression was calculated using the 2^(−ΔΔCt) method [50]. All analyses were performed with three biological replicates and two technical replicates. All primer sequences for qRT-PCR can be found in Table S1. Yeast-one-hybrid assay The yeast-one-hybrid assay was performed using the Matchmaker EYG48 yeast-one-hybrid system (Clontech) as described in the manual. The coding sequence of SlGT11 for the effector protein was cloned into the PJG4-5 vector, and the promoter sequences of the B/C/E-class genes were cloned into the reporter vector Placzy. Both vectors were transformed into the EYG48 yeast strain. Diploid yeast cells were grown and selected on dropout medium without uracil and tryptophan. To assay protein-promoter interactions, clones were grown on double-dropout medium without uracil and tryptophan, but with X-gal, for 2 d at 30 °C. The empty vectors were used as controls. All primer sequences used for cloning can be found in Table S1.
Yeast-two-hybrid assay Protein interaction assays in yeast were performed using the Matchmaker Gold Yeast Two-Hybrid System (Clontech) according to the manual. The SlGT11 coding sequence for the bait protein was cloned into the pGBKT7 vector, and the B/C/E class genes for the prey proteins were cloned into the pGADT7 vector. The vectors were then transformed into the Y2HGold yeast strain. Diploid yeast cells were selected and grown on dropout medium without leucine and tryptophan. To assay protein-protein interactions, clones were grown on quadruple-dropout medium without leucine, tryptophan, histidine and adenine for 3 d at 30°C. All primer sequences used for cloning can be found in Table S1.
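Returning to the BSA-seq mapping described above, the bulk comparison amounts to computing a per-variant allele frequency (SNP-index) in each pool and flagging variants that are fixed in the slf bulk but still segregating in the WT bulk. The sketch below is purely illustrative: the positions, read counts and the 0.9 threshold are hypothetical and are not taken from the study.

```python
# Per-variant allele-frequency (SNP-index) comparison between the mutant and WT bulks.
# Counts are (reference reads, alternate reads) at each variant position.

def snp_index(ref_reads, alt_reads):
    """Fraction of reads carrying the alternate (mutant) allele."""
    total = ref_reads + alt_reads
    return alt_reads / total if total else 0.0

variants = {
    # position: ((slf-bulk ref, alt), (WT-bulk ref, alt)) -- hypothetical counts
    "chr03:1250000": ((0, 28), (14, 16)),   # candidate: fixed in the slf bulk
    "chr05:4700000": ((15, 13), (16, 14)),  # background heterozygous variant
}

for pos, (slf_bulk, wt_bulk) in variants.items():
    slf_idx, wt_idx = snp_index(*slf_bulk), snp_index(*wt_bulk)
    candidate = slf_idx >= 0.9 and wt_idx < 0.9   # ~1.0 expected for the causal mutation
    print(f"{pos}: slf={slf_idx:.2f} wt={wt_idx:.2f} candidate={candidate}")
```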
Mental Performance and Sport: Caffeine and Co-consumed Bioactive Ingredients The plant defence compound caffeine is widely consumed as a performance enhancer in a sporting context, with potential benefits expected in both physiological and psychological terms. However, although caffeine modestly but consistently improves alertness and alleviates fatigue, its effects on mental performance are largely restricted to improved attention or concentration. It has no consistent effect within other cognitive domains that are important to sporting performance, including working memory, executive function and long-term memory. Although caffeine's central nervous system effects are often attributed to blockade of the receptors for the inhibitory neuromodulator adenosine, it also inhibits a number of enzymes involved both in neurotransmission and in cellular homeostasis and signal propagation. Furthermore, it modulates the pharmacokinetics of other endogenous and exogenous bioactive molecules, in part via interactions with shared cytochrome P450 enzymes. Caffeine therefore enjoys interactive relationships with a wide range of bioactive medicinal and dietary compounds, potentially broadening, increasing, decreasing, or modulating the time course of their functional effects, or vice versa. This narrative review explores the mechanisms of action and efficacy of caffeine and the potential for combinations of caffeine and other dietary compounds to exert psychological effects in excess of those expected following caffeine alone. The review focusses on, and indeed restricted its untargeted search to, the most commonly consumed sources of caffeine: products derived from caffeine-synthesising plants that give us tea (Camellia sinensis), coffee (Coffea genus), cocoa (Theobroma cacao) and guaraná (Paullinia cupana), plus multi-component energy drinks and shots. This literature suggests relevant benefits to mental performance that exceed those associated with caffeine for multi-ingredient energy drinks/shots and several low-caffeine extracts, including high-flavanol cocoa and guaraná. However, there is a general lack of research conducted in such a way as to disentangle the relative contributions of the component parts of these products. Background Caffeine is widely consumed in a sporting context. Some three-quarters of athletes consume caffeine before or during competitions, with the highest prevalence amongst endurance sports [1]. Whilst caffeine has well-established ergogenic properties [2,3], it also exerts a number of purely psychological effects outside of an exercise/sport context. Most prevalent here, caffeine modestly but consistently increases alertness and arousal, alleviating fatigue, and it enhances several aspects of mental performance [4]. Optimal cognitive functioning is essential for peak sporting performance, and it is self-evident that efficient brain function is intrinsic to all forms of sporting performance. This is evident, for instance, in cognitive domains such as: attention, which incorporates psychomotor function, reaction times and concentration or vigilance; spatial or verbal working memory, which represents the ability to hold small amounts of spatial or verbal information temporarily in mind; and executive function, which incorporates the ability to manipulate information, plan actions and inhibit inappropriate responses. Clearly the comparative contribution of each of these cognitive domains is dependent on the demands of differing sports [5][6][7][8][9].
Indeed, evidence suggests that multiple aspects of an individual's mental performance in everyday life, including attention, executive function and working memory, enjoy a positive bi-directional relationship with sporting activities that engage these domains [5][6][7]. Additionally, modulation of an individual's subjective psychological state, particularly increased alertness and decreased mental fatigue, will naturally have a knock-on effect on motivation and performance. In these terms, pure caffeine only offers a very partial option for improving mental performance in a sporting context [10]. Whilst its effects are consistent, they are limited, and caffeine has little to offer in terms of improving spatial or verbal working memory or executive function. It also does not generally have a beneficial impact on long-term memory function. However, caffeine is most often consumed alongside other bioactive compounds, often in products derived from caffeine-synthesising plants. Caffeine enjoys interactive relationships with multifarious other classes of compounds, including many of those that co-exist in the most prevalent caffeinated products. The following therefore describes the mechanisms of action and functional effects of caffeine, and the psychological effects of commonly consumed multi-component caffeinated products. It also assesses, to the extent that the literature allows, whether the non-caffeine components either have relevant independent effects on mental performance beyond those of caffeine, or whether they enjoy an interactive relationship with caffeine that potentiates their own functional effects, or indeed those of caffeine. Further research disentangling the comparative contributions of caffeine and other co-consumed bioactive ingredients to their combined effects is required in order to inform future recommendations.
Ecological Roles and Synthesis Caffeine, and related methylxanthine alkaloids such as theophylline and theobromine, are 'secondary metabolites', phytochemicals that do not play any direct role in the plant's primary metabolic processes. Instead, they increase the plant's overall ability to survive by allowing the plant to interact with its environment [11,12]. In this case, caffeine is synthesised in vulnerable tissue when the plant is under attack by insect or mollusc herbivores or threatened by a range of biotic and abiotic stressors [13,14]. Once synthesised, caffeine functions as a bitter-tasting feeding deterrent [15,16] and as an insecticide and molluscicide [16,17]. Additionally, it functions as a neurological behaviour modifier that typically induces a sharp increase in locomotor activity and corresponding reduction in feeding by herbivores [18]. Interestingly, the mechanisms that underlie these interactions with herbivores are largely the same as those that drive caffeine's effects on human behaviour [12]. The small group of plants that synthesise caffeine include the unrelated, geographically distributed genera of flowering plants that include coffee plants (Coffea genus), the kola nut (Cola acuminata), tea (Camellia sinensis), yerba maté (Ilex paraguariensis), guaraná (Paullinia cupana) and cocoa (Theobroma cacao). The repeated convergent evolution of methylxanthine synthesis in these disparate clades is a reflection of both the utility of caffeine for the plant and the simplicity of methylxanthine synthetic pathways [19,20]. Caffeine, and the related methylxanthines, are classified as 'purine' alkaloids because they incorporate the purine xanthine, which itself is synthesised or recycled in the plant from the ubiquitous purine bases adenine and guanine or their products. One route of synthesis in the plant is directly from the ubiquitous bioactive purine derivative adenosine [13,21]. Both the ecological roles played by caffeine and its functional effects in humans and other animals are related to caffeine's structural similarity to adenosine.
Mechanisms of Action Adenosine itself is both a ubiquitous bioactive molecule and a building block for a host of other cellular signalling molecules across all forms of life. It therefore plays a wide range of pivotal roles in cellular functioning. For instance, it is required for catabolic metabolism in every living cell in the form of the energy carriers adenosine diphosphate/triphosphate (ADP/ATP); it is a component of the ubiquitous second messenger molecule cyclic adenosine monophosphate (cAMP); it is a component of multiple enzymes, including the poly (ADP ribose) polymerase (PARP) family; and, as the adenosine radical adenosyl, it plays pivotal roles in anabolic metabolism in every cell as a component of SAM-e (S-adenosyl methionine). Most importantly here, adenosine itself is an inhibitory neuromodulator that serves to decrease overall neuronal activity. In this role it can be released by neurons in a manner similar to that of classical neurotransmitters, but it is most prevalent as a simple breakdown product of cellular adenine nucleotides [22]. Adenosine levels therefore vary with overall brain activity, building up in the cortex and basal forebrain during wakefulness, thus increasing fatigue and decreasing alertness, and then dissipating during sleep [23]. This provides adenosine with a number of properties related to the regulation of tiredness and the sleep-wake cycle, and the homeostatic and neuro-protective down-regulation of brain activity [22,23]. Orally consumed caffeine is rapidly absorbed and distributed, with peak plasma levels seen at around 30 min post-ingestion, followed by a gradual return to baseline [24]. Although caffeine's circulating half-life is generally in the region of 3-5 h [25], this can vary depending on a number of factors, including genotype, sex, disease, environmental factors and diet, and the consumption of other bioactive molecules. With respect to genotype, research has been able to identify so-called fast and slow metabolisers of caffeine depending on the cytochrome 450 (CYP450) 1A2 enzyme alleles, with 40% of the population believed to be A/A or 'fast metabolisers' and 50 and 10% A/C and C/C 'slow metabolisers', respectively [26]. In regard to sex, a recent book [27] summarised the potential for hormonal fluctuations, especially related to oral contraceptive use (and ethinyl estradiol with and without progesterone, in particular), to reduce the plasma clearance and increase the elimination half-life of caffeine. The authors also posit that differences in adiposity could influence the sensitivity to caffeine-induced lipolysis, with females typically having more adipose tissue and less sensitivity, which could underlie metabolic differences between males and females.
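Purely as an illustration of the plasma time-course described above, a simple one-compartment oral absorption model reproduces a peak at roughly 30 min followed by decline at the stated 3-5 h half-life. All parameter values in this sketch (absorption rate constant, volume of distribution, body weight, bioavailability) are assumptions chosen for illustration rather than figures taken from the cited studies.

```python
import math

def caffeine_concentration(dose_mg, t_h, ka=8.0, half_life_h=4.0,
                           vd_l_per_kg=0.6, weight_kg=70.0, f=1.0):
    """One-compartment oral model (Bateman function); all parameters illustrative."""
    ke = math.log(2) / half_life_h           # elimination rate constant (1/h)
    vd = vd_l_per_kg * weight_kg             # apparent volume of distribution (L)
    return (f * dose_mg * ka) / (vd * (ka - ke)) * (
        math.exp(-ke * t_h) - math.exp(-ka * t_h))

# Rough profile for a nominal 200 mg dose: the modelled concentration peaks at
# about 0.5 h and then falls away with the assumed 4 h half-life.
for t in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"{t:4.2f} h: {caffeine_concentration(200, t):.1f} mg/L")
```

Such a sketch is, of course, only a caricature of the individual variability (genotype, sex, disease, diet and co-consumed compounds) discussed in this section.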
With respect to disease, slowed metabolism of caffeine is seen in those with liver disease, attributed to reduced N3-and N7-demethylations affecting transformation through the paraxanthine pathway [28], and obesity reportedly increases the volume and distribution of caffeine and prolongs its half-life [29]. The latter review, however, notes the small amount of research here, which was performed over three decades ago in some cases, and the nuance associated with degree of obesity and concomitant pharmacological drug use. With regard to the potential role of other bioactive ingredients, caffeine is metabolised to paraxanthine and to a lesser extent theophylline and theobromine by the action of members of the CYP450 family of enzymes that manage the metabolism and clearance of endogenous and exogenous bioactive compounds [29]. The effects of caffeine within the brain and central nervous system are generally attributed to antagonism of A 1 and A 2A adenosine receptors and the resultant blockade of adenosine's inhibitory action [30]. This results in decreased fatigue, increased alertness and an increase in the neural activity associated with a variety of neurotransmitters, including dopamine, acetylcholine, noradrenaline, serotonin, glutamate and gamma-aminobutyric acid (GABA) [31]. Adenosine A 1 and A 2A receptors within the vasculature have contradictory effects on vasodilation. Within the brain, the net effect of caffeine's action at these receptors is the inhibition of adenosine's vasodilatory effects leading to vasoconstriction and reduced cerebral blood flow [32]. However, in the periphery, the net effect of caffeine is less clear, with contradictory effects on blood pressure and vasodilation, mediated by a complex array of adenosine receptor interactions and other mechanisms (see below) [33]. Subsequent sections below detail how commonly co-consumed compounds, like polyphenols, are also able to influence vascular tone, specifically vasodilation, and these already complex effects from caffeine alone are therefore likely to be even more difficult to interpret in response to co-consumption with other compounds. Beyond adenosine receptor interactions, caffeine or its metabolites also inhibit the activity of several key enzymes. These include nervous system-specific enzymes that catalyse neurotransmitters, including acetylcholinesterase and monoamine oxidase (leading to increased acetylcholine, dopamine, epinephrine, norepinephrine and serotonin) and amino acid decarboxylase enzymes [34]. Throughout the body, caffeine also inhibits phosphodiesterase, which regulates the cellular secondary messengers cAMP and cyclic guanosine monophosphate (cGMP), and therefore governs cellular responses to other bioactive molecules. It also inhibits several of the poly (ADP-ribose) polymerase (PARP) enzymes that modulate the response to cellular damage, including the modification of inflammatory responses [35][36][37]. Whilst the physiological effects of some of these individual non-adenosine receptor mechanisms, taken in isolation, might only become relevant at doses of caffeine above those normally found in the diet, their contribution to caffeine's overall effects on brain function is arguably underestimated [37]. In terms of ergogenic effects, the current consensus is that caffeine's beneficial effects on physical performance are also largely driven by central and peripheral inhibition of adenosine receptors. 
The downstream effects of this modulated neurotransmission includes increased motor unit firing, suppression of exercise-related pain, reduced sensation of force, and decreased ratings of perceived physical effort alongside related psychological benefits in terms of reduced fatigue and increased motivation [38,39]. However, caffeine can also modulate metabolism by interacting with the sympathetic nervous system [40], increasing energy expenditure, thermogenesis and fat oxidation, at least in part via phosphodiesterase inhibition [41]. Caffeine can also modulate the activity of the enzymes responsible for glycogen formation and degradation (i.e., glycogen synthase and glycogen phosphorylase) by interacting with the nucleoside-inhibitor site of glycogen phosphorylase, promoting a reduction of catalytic activity [42]. In skeletal muscles, caffeine also mimics the role of ATP [43], activating ryanodine receptors in the sarcoplasmic reticulum, causing an influx of charged molecules (Ca 2+ ) and inhibition of Ca 2+ reuptake, increasing muscle contraction [37,44]. One important point to note here is that, while some of the above research included human samples and doses equivalent to those elsewhere in this review (i.e., 3-6 mg/kg), much was predicated on animal models, utilizing what could be regarded as suprapharmacological doses, and this raises the question of how closely effects can be extrapolated to human samples. Interactive Mechanisms of Action Given its potential to modulate neurotransmission, enzyme activity and the function of cellular signal pathways, caffeine has potentially wide-ranging modulatory properties with regard to the effects of other bioactive molecules. As an example, co-consumption of caffeine increases both the physiological effects and toxicity of psychostimulant drugs such as amphetamines or cocaine, via both caffeine's adenosine receptor binding properties and its ability to modulate cell signalling pathways by interacting with the enzyme phosphodiesterase [45]. Caffeine also enjoys multifarious pharmacokinetic interactions that affect the absorption, distribution, metabolism and excretion of many other bioactive molecules. Mechanisms here include the formation of complexes with other acidic compounds; direct gastrointestinal effects including increased diuresis, increased gastrointestinal acid secretion, and accelerated gastric emptying; and modulation of the distribution of molecules, including by increasing the tightness of the blood brain barrier [34]. Caffeine and its metabolite products also compete for access to several ubiquitous CYP450s. Caffeine is primarily metabolised by the CYP1A2 and CYP2E1 enzymes, whilst caffeine's metabolites (paraxanthine, theophylline and theobromine) are also significant substrates for CYP1A1 and CYP2A6. These enzymes are also involved in the metabolism of a wide range of exogenous nutrients and drugs and many endogenous compounds, including several hormones and neurochemicals that are substrates, inducers or inhibitors of these CYPs. The presence of caffeine and its metabolites can therefore limit the access of these other compounds, or vice versa, to the CYP450 enzymes with which they interact, increasing or decreasing the bioavailability, clearance, effectiveness or toxicity of the compound/drug/nutrient and/or indeed caffeine [34,[46][47][48]. These same CYP enzymes have also recently been discovered to be active across cognition-relevant brain regions and structures, including neurons [49]. 
It is therefore likely that caffeine and its metabolites may enjoy a number of un-delineated interactions with a range of neurochemicals [48,50]. Interestingly, instigated by the removal of caffeine from the World Anti-Doping Agency (WADA) list, an analysis of a large sample (> 20,000) of competitive athletes' urine samples showed that, whilst caffeine levels were significantly lower than the 12 µg/mL threshold set by WADA previously (in all but 0.6% of the sample), nearly 20% of the samples indicated that caffeine had been co-consumed alongside pharmacological drugs that may have interactive properties with caffeine [1]. Examples include some of those examples listed above, such as pseudoephedrine (in 6.4% of samples) and pain killers like paracetamol (1.5%), tramadol (3.5%) and ibuprofen (2.7%), as well as psychoactive compounds like cathine (0.7%). This demonstrates that, whilst the potential for caffeine to incite direct effects on the psychophysiology of athletes alone may arguably have been reduced by this generally low caffeine consumption, the potential for unanticipated interactions with co-consumed pharmacological compounds should not be underestimated. Sport and Exercise Performance Caffeine, when taken by itself, has well-established ergogenic properties. Doses within the optimal range of 3-6 mg/ kg body weight have been shown to enhance performance of aerobic endurance exercise, muscular endurance and power, strength, and high-intensity and intermittent exercise [3,39,58]. These physical benefits translate into improvements in multifarious sport-specific aspects of physical performance, including those intrinsic to combat, racquet and team sports [39,59]. However, there is a paucity of studies investigating the ergogenic effects of caffeine at lower doses. A recent review [60] presented evidence that the ergogenic effects of caffeine were largely flat over 3 mg/kg body weight but that they extended down to 2 mg/kg. The authors noted that there was little evidence for effectiveness for doses of caffeine lower than 2 mg/kg, but also scant research assessing the effects of these lower doses. In general, the caffeine literature here is complicated by unresolved issues such as the role that genetic polymorphisms (e.g., in CYP1A2) [61], habituation [62] and tolerance [63,64] may play with respect to caffeine's effects. There is also a pronounced bias against using female participants in ergogenic research [65], although meta-analytic evidence does suggest equivalence between the effects seen in males and females [66], especially in relation to aerobic performance and fatigue [67]. Nevertheless, the latter review did present evidence that aspects of power (e.g., total weight lifted) and speed were more increased in men than women following the same dose of caffeine, meaning potential sex differences in sport-/exercise-specific ergogenic effects of caffeine certainly warrant further attention. The current lack of representation of women in this research area may be due to perceived noise introduced by the hormonal fluctuations of the menstrual cycle. However, recent small-scale trials suggest that this does not have a significant impact. Lara et al. [68], for example, demonstrated that 3 mg/kg caffeine did increase peak aerobic capacity in a cohort of 13 female triathletes (mean age 31 ± 6 years) during the early follicular, preovulation and mid-luteal phase of the menstrual cycle, but the magnitude of this ergogenic effect was similar during all phases. 
Taken together, ergogenic effects of caffeine appear to be independent of sex and menstrual status but, again, this may not be the case for all types of physical performance/athletes and so should not be wholly disregarded in future trials. Psychological Functioning In terms of purely psychological effects (i.e., outside of a sport/exercise context) it is notable that, in this body of research, caffeine is usually administered at a set dose for all participants, rather than being titrated to an individual's bodyweight (note: where mg/kg are provided below, an average adult body weight of ~ 70 kg has been assumed). The literature here shows that caffeine, when taken by itself, has consistent effects on cognitive function and arousal/alertness at doses well below those considered to be ergogenic. Here, evidence shows benefits following doses as low as 32 mg (i.e., < 0.5 mg/kg) [25]. Thereafter the cognitive effects of caffeine are readily apparent at 75 mg (~ 1 mg/kg), which is the dose currently required for a European Food Safety Authority (EFSA)-approved claim of caffeine's psychological effects, but they plateau above this dose. The effects then attenuate as single doses exceed 300 mg [25,69] or 400 mg [70] (~ 4-5 mg/kg). Whilst they are consistent, the psychological effects of caffeine alone are restricted to increased subjective alertness/arousal [71][72][73] and, in terms of mental performance, consistent improvements in the performance of tasks assessing attention or focussed attention/vigilance [72,[74][75][76]. Caffeine's effects do not generally extend to other cognitive domains, including several domains potentially relevant to sport. Caffeine in isolation has not been shown to have any demonstrable effect on long-term memory tasks, and has inconsistent effects on working memory and executive function tasks, with evidence suggesting that it can impair the performance of more complex tasks [77][78][79]. However, a recent meta-analysis [80] does suggest that caffeine's cognition-enhancing effects may expand into higher-order cognitive functions as a consequence of significant sleep loss or restriction. In this meta-analysis the average period of continuous wakefulness imposed on participants was 31 h, and therefore well beyond the loss of sleep associated with a poor night's sleep. It is important to note here that, whilst not consistent, a small amount of literature supports anxietyinducing effects of caffeine at doses consistent with many of the trials described within this narrative review; Loke et al. [81], for example, report increased anxiety with doses of between 3 and 6 mg/kg. However, a review by Smith [82] notes that the data overall are somewhat equivocal here and that the reporting of anxiety by participants is more likely in those who are 'non-choosers' of caffeine; i.e., those who do not consume caffeine in their everyday life, thus raising questions about sensitivity, tolerance and habituation, which are now widely believed to be due to genetic polymorphisms of the adenosine A 2 receptor [83]. Psychological Functioning During Sport and Exercise The question of caffeine's cognitive effects during sport/ exercise is somewhat under-researched. A 2021 review and meta-analysis addressing this question [84] only identified 13 studies that fitted the stipulated methodological criteria. However, several of these studies, including one of the five studies subsequently entered into their meta-analysis, involved the administration of multi-component products. 
Most of the studies also assessed behaviour before and/or after exercise, rather than during exercise. Nevertheless, the findings from this meta-analysis were similar to those from the general literature, in that the cognitive benefits of caffeine were restricted to improved speed and accuracy of attention task performance. A recent single study also found that attention task performance was improved both during and after exercise following 3 mg/kg caffeine, with greater effects seen for CYP1A2 'fast' metabolisers [85]. However, other studies have generated equivocal evidence as to whether caffeine's beneficial effects on attention task performance are subsumed within improvement due to simply engaging in exercise alone [9,86]. Overall, there is a general lack of research assessing the effects of caffeine on cognitive function during active sport or exercise. In part, this is likely due to the complexity of conducting a dual-focus trial where the disruption of the physical activity would be required for the assessment of cognitive function (and vice versa), thus negating their accurate measurement. Additionally, researchers proficient in either physical performance or cognitive assessment are unlikely to be equally proficient in the other domain. This certainly speaks to the need for more interdisciplinary collaboration wherein both areas receive equal focus. One workaround here is to assess cognitive domains required for a particular sport/exercise outside of the physical activity itself (e.g., reaction times in sprint starts), but research is also lacking here. As an exception to this, a recent study reported improved goal-keeper reactive diving times, which were directly related to increased reaction time speed at rest, following 4-6 mg/kg caffeine [87]. It is interesting to note a clear disjunction between the doses of caffeine that are effective in physical, ergogenic terms (optimal dose 3-6 mg/kg), and the doses of caffeine that have an impact on cognitive function and mood (optimal dose 1 to 4 mg/kg). This may be partly explained on the basis that doses of 3 mg/kg or more of caffeine are required to achieve the higher plasma levels required to have the physiological effects in peripheral metabolic tissue that underpin some ergogenic effects, whereas lower plasma concentrations are required to modulate neurotransmission in the brain [69]. Taken together, this might suggest that doses of at least 3-4 mg/kg would be required for both physical and cognitive ergogenic effects to be realised. Multi-component Caffeinated Products Much of the research assessing the effects of caffeine on psychological functioning or sport/exercise performance has involved the administration of pure, anhydrous caffeine [39]. In contrast, caffeine is most often consumed by athletes and the general population in the form of naturally caffeinated drinks, foods and food supplements, or in the form of 'energy drinks' and similar products. All of these more ecologically valid sources of caffeine involve the co-consumption of other bioactive compounds, with plant sources of caffeine always containing significant levels of polyphenols. As noted above, at a mechanistic level, caffeine is liable to engender synergistic, additive or modulatory effects when co-consumed with other bioactive compounds. This raises the possibility that the presence of other bioactive compounds may lead to a stronger or broader palette of benefits to mental performance than those that would be expected following caffeine alone. 
Unfortunately, there is a limited number of studies with the requisite comparator arms to unambiguously disentangle the effects of caffeine from the effects of co-consumed bioactive compounds. However, there is evidence, summarised below, suggesting both that some of the compounds co-consumed with caffeine have independent effects on physiological and brain function, and that they enjoy an additive or interactive relationship with caffeine. Further, this typically results in benefits that are stronger or broader than those following caffeine alone. The evidence summarised below concentrates on energy drinks and the sources of naturally occurring caffeine that represent the most frequently consumed caffeinated products or foods. Polyphenols and Caffeine Caffeine is most often taken in the form of plant-derived caffeinated products, including the most popular sources of caffeine globally: coffee and tea. These plant-based sources of caffeine always naturally contain significant levels of polyphenols. These ubiquitous phytochemicals incorporate within their structure multiple phenyl aromatic hydrocarbon rings with one or more hydroxyl groups attached. The largest group, flavonoids, can be subdivided into chalcones, flavanones and their derivatives the flavones, flavonols, isoflavones, flavanols and anthocyanins [88]. In contrast to the predominant neurotransmission effects of caffeine, polyphenols owe their multifarious health benefits to their global effects on physiological functioning. These are predicated on interactions with, and modulation of, diverse components of a wide range of mammalian cellular signal transduction pathways throughout the body. These pathways, in turn, control gene transcription and a plethora of cellular responses, including cellular metabolism, proliferation, apoptosis, and the synthesis of growth factors, and vasodilatory and inflammatory molecules, which have a direct effect on the metabolic, cardiovascular and inflammatory status of the body [89][90][91]. In the brain, these cellular signal transduction effects drive modulation of neuro-inflammation, direct and indirect effects on neurotransmission, interactions with neurotrophin receptors and signalling pathways, and increased synthesis of neurotrophins and vasodilatory molecules, leading to increased angiogenesis/neurogenesis and local cerebral blood flow [92][93][94][95][96]. With regard to ergogenic effects, recent meta-analyses suggest that diverse polyphenol rich foods and extracts might accelerate the recovery of muscle function and strength [97,98], improve post-exercise oxidative and inflammatory status [99], and improve aspects of sporting performance [99,100]. However, a larger meta-analysis concluded that, when considering polyphenols as a whole, they have a significant but 'trivial' effect on endurance exercise performance, with these effects restricted to males in their analysis [101]. However, it is important to note here that effects appear much more nuanced when considering groups of polyphenols separately. While grape, nitrate-depleted beetroot, French maritime pine, Montmorency cherry and pomegranate exhibited ergogenic effects (following both acute and multiple-day supplementation), no significant effects were seen for New Zealand blackcurrant, cocoa, green tea or raisins, and it is likely the relative ineffectiveness of these latter polyphenol groups, on these specific outcomes, that has diluted the overall message of this recent meta-analysis. 
This speaks to the importance of appreciating the bespoke effects of different polyphenols/polyphenol groups and this is supported by the positive findings reported by other meta-analyses that demonstrate that polyphenols from diverse sources have significant beneficial impacts on cognitive function, including tasks assessing attention, executive function and mental fatigue [102,103]. Of particular note here, caffeine and polyphenols enjoy a number of potentially additive or interactive relationship effects. These may be due to their common affinity with the same CYP450s, or alternatively caffeine's effects on the absorption, distribution and clearance of other compounds, or caffeine's ability to form complexes with other acidic compounds, including phenolics [34]. In line with this, research has demonstrated increased functionality or bioavailability when polyphenols [104][105][106], or other phenolic compounds [34], are consumed alongside caffeine. As an example, a recent study in humans showed that co-administration of cocoa-flavanols alongside their naturally occurring caffeine/methylxanthines resulted in a synergistic effect on the bioavailability and cardiovascular effects of the cocoaflavanols [104]. The interactive effects of caffeine can also be seen across the wider literature here. As an example, a meta-analysis of 15 studies showed that, whereas green tea catechins without caffeine had no effect, in combination with caffeine they decreased body weight and/or body mass index (BMI) in comparison to caffeine-matched controls [107]. These interactive effects may underpin the results from a meta-analysis of 120 controlled trials [108] that found that flavanol-rich compound interventions with caffeine (e.g., tea and cocoa extracts) were ranked higher than those without caffeine (apple extracts) in terms of beneficial effects on BMI, waist circumference, total-cholesterol, low density lipoprotein and high density lipoprotein (LDL/HDL)-cholesterol and triglycerides. Cocoa-Flavanols The rich polyphenol content of cocoa products primarily comprises high levels of the flavanols catechin and epicatechin, and procyanidins, which are oligomers formed from these flavanols. The ultimate level of these phytochemicals is dictated by the fermenting and roasting process [109]. Cocoa also contains low levels of caffeine but higher levels of theobromine. Research generally employs high-flavanol extracts or dark chocolate products with less than 40 mg caffeine. It is unlikely that this amount of caffeine would be exceeded via the consumption of chocolate [110]. Meta-analyses of the substantial body of intervention trial data reveal a consistent beneficial effect of high-flavanol chocolate and cocoa-flavanol extracts on cardiovascular parameters, including inflammatory biomarkers, oxidative stress, gluco-regulation, lipid profiles, blood-pressure and peripheral blood-flow [91,[111][112][113][114][115]. The evidence of ergogenic benefits is less consistent, with some evidence of reduced oxidative stress and modulation of metabolism in a physical performance context, but no consistent evidence of improved exercise performance [116][117][118]. In terms of psychological function in an exercise context, two small studies have assessed the brain function effects of cocoa-flavanols at rest and after exercise. In the first, cerebral blood flow was assessed in the prefrontal cortex via near-infrared spectroscopy (NIRS). 
Here, cocoa-flavanols increased cerebral blood-flow (specifically, oxygenated haemoglobin) during the single cognitive task only prior to exercise, with exercise itself also increasing cerebral blood-flow (oxygenated, deoxygenated and total haemoglobin), improving reaction times and engendering an increase in neurotrophic factors post-exercise [119]. It is important to note, however, that whilst NIRS provides a measure of blood flow in the top layers of cortical tissue, it is unable to measure more deeply than this. Additionally, NIRS is not considered a traditional tool in the assessment of brain perfusion (i.e., the measurement of blood taken up by specific brain regions; e.g., positron emission tomography (PET) or magnetic resonance imaging (MRI)), although many regard the modulation of deoxygenated haemoglobin as a reliable indicator of this in the same way that it denotes the blood oxygen level dependent (BOLD) signal in functional MRI (fMRI). However, utilization of imaging techniques such as fMRI/PET is not possible during the assessment of physical performance, so NIRS presents a practicable alternative here. Finally, a subsequent study reported improved executive function task performance before and after exercise as a consequence of consuming a high versus low cocoa-flavanol drink, with no interaction with exercise [120]. In terms of brain function in a wider, non-sport/exercise context, cocoa-flavanols consistently increase cerebral blood-flow and engender the synthesis of neurotrophic factors such as brain-derived neurotrophic factor (BDNF) [121]. Single-dose, crossover trials comparing high versus low cocoa-flavanol extracts have demonstrated reduced mental fatigue and improved cognitive function during cognitively demanding tasks [122], and tasks that assess attention [123,124] and spatial memory [123]. Longer-term supplementation with cocoa-flavanol extracts for 4 weeks also increased attention and executive function task performance, alongside beneficial effects on multiple biomarkers of health, in 90 healthy elderly participants [125] and 90 sufferers from age-related cognitive impairment [126]. The benefits of chocolate are less clear and suffer from a smaller literature. A recent study demonstrated long-term memory benefits 2 h after high-flavanol dark chocolate when compared to white chocolate, although comparative caffeine levels in the interventions were not reported [127]. Conversely, 8 weeks' administration of high-flavanol chocolate failed to improve executive function task performance in healthy older adults [128]. However, it should be noted that the low flavanol control intervention still contained a reasonably high level of the putative active ingredient (86 vs. 410 mg flavanols). This raises the issue of not having a true control in such realistic intervention trials, a trade-off that is practically impossible to resolve. It also forces an additional concession in trial design: either a crossover trial (which would be the gold standard) in which double-blinding (again, the gold standard) is not achievable, or a between-subjects design that accommodates this inability to mask treatment but injects the potential for individual differences to buoy one treatment condition above the other/s. Taken as a whole, a recent systematic review of this literature concluded that acute or chronic administration of cocoa-flavanols most reliably enhanced executive function and memory and decreased task-related mental fatigue [129].
This was supported by a meta-analysis of only chronic supplementation studies (2-3 months), which also reported improvements in executive function task performance [130]. A complementary meta-analysis of the mood effects of acute and short-term high flavanol cocoa studies reported improved depression, anxiety and positive affect following high cocoa-flavanol interventions [131]. This summary comes with a caveat, however. Much of the cardiovascular and psychological research here has used high cocoa-flavanol extracts with low levels of caffeine and compared them to caffeine-matched low-flavanol controls. This research therefore delineates the effects of the higher doses of flavanols over and above any effects of their caffeine content and, therefore, runs the risk of underestimating the effects of the cocoa-flavanol caffeine combinations. In conclusion, and bearing the last point in mind, the evidence to date does suggest that cocoa-flavanols exert beneficial effects on mental performance that are much broader than those expected from their caffeine content alone. The extent to which this represents an interactive effect between caffeine and the polyphenol content, rather than being solely predicated on the latter, remains to be explored. Guaraná (Paullinia cupana) The phytochemistry of guaraná seed extracts has some similarities to cocoa, with significant levels of the flavanols epicatechin and catechin, and procyanidin and proanthocyanidin flavanol oligomers [132]. Extracts also contain several triterpene compounds. However, guaraná's purported ergogenic and stimulant properties are often attributed to the presence of naturally occurring caffeine, which typically comprises 2.5-5% of a crude extract's dry-weight, depending on extraction and manufacturing processes [133,134]. Whilst there is little methodologically adequate research investigating the effects of guaraná on physical performance, two single-dose, crossover studies have assessed its cognition and mood-enhancing properties in an exercise/ sport context. In the first of these, single doses of a product combining guaraná extract (40 mg caffeine or ~ 0.6 mg/kg) and multivitamins improved working memory and episodic memory task performance both before and after 30 min of treadmill running in 40 young males. The guaraná intervention also decreased ratings of perceived effort during the exercise [135]. Subsequently, in a comparatively small study (n = 10), consumption of a product containing guaraná extract with 60 mg caffeine (< 1 mg/kg) was equally as effective as 200 mg of caffeine (~ 3 mg/kg) in decreasing perceived exertion during treadmill running and in enhancing the speed of performing a single attention/executive function task after the exercise [136]. The guaraná condition also exceeded the effects of the much higher dose of caffeine in terms of improved cognitive task accuracy. In a non-sport/exercise study, the guaraná/vitamin product utilised above [135] engendered benefits to cognitive function that were greater than would be expected from the low dose of caffeine in the product. The product improved both the speed and accuracy of a demanding focussed attention task and reduced mental fatigue during extended performance of cognitively demanding tasks [137]. Subsequently, a brain-imaging study [138] confirmed the mental performance-enhancing properties of the same product and demonstrated physiological modulation of brain function using fMRI. 
Several acute, placebo-controlled, crossover studies have also confirmed that caffeine is not the principal psychoactive component of guaraná extracts. In the first of these, 75 mg of guaraná extract containing very low levels of caffeine (9 mg or ~ 0.12 mg/kg) resulted in improved performance of attention, episodic memory and working memory/executive function tasks across the 6-h post-dose testing period [133]. A subsequent dose-ranging study found that doses of the same extract containing negligible caffeine (4.5 mg or < 0.1 mg/kg) could improve episodic memory task performance and ratings of contentedness for 6 h post-dose. In this study, only the highest dose of guaraná extract, containing 35 mg caffeine, increased ratings of alertness [134]. Finally, perhaps the clearest study compared a product combining guaraná extract and multivitamins directly to its caffeine content (100 mg). This study demonstrated improvements in attention task performance following the guaraná/multivitamin condition that were significantly greater than following both placebo and the equivalent dose of caffeine alone [139]. In conclusion, these studies demonstrate that guaraná extracts are associated with broader and stronger improvements in mental functioning than their caffeine content alone would warrant. Further, these psychological benefits are seen at doses of guaraná that include quantities of caffeine well below psychoactive levels. Whilst this confirms the independent effects of guaraná's non-caffeine components, the interactive contribution of low doses of caffeine to these effects has received little attention and so it is not possible to summarise this here. Coffee (Coffea Genus) Polyphenols Roasted coffee The process of roasting coffee leads both to the creation of novel compounds and the conversion of existing compounds. This gives roasted coffee a particularly complex phytochemistry. Despite the deleterious effects of roasting, the predominant non-methylxanthine phytochemicals in coffee are polyphenolic chlorogenic acids (CGA), alongside several simple phenolic acids and their derivatives [140]. The process of decaffeinating coffee also affects these polyphenol levels, however. Olechno et al. [141], for example, report the total phenolic content (as gallic acid equivalent) as reaching ~ 650 mg in ground Arabica coffee but only ~ 400 mg in the decaffeinated alternative. This should be taken into consideration when interpreting the effects of decaffeinated coffee discussed below. A recent umbrella review of caffeine meta-analyses [142] noted that, whilst roast coffee is often used as a source of caffeine by athletes and non-athletes, it is rarely used in research assessing the ergogenic effects of caffeine. As such, the results of the small body of research that has directly compared coffee and caffeine are somewhat equivocal [143]. The situation is similar with regard to psychological functioning, where there are no relevant studies in a sport/exercise context. More generally, whilst coffee has often been compared to a decaffeinated coffee control, purportedly to assess the effects of caffeine, there is a lack of research employing the requisite comparator arms to disentangle the effects of caffeine from those of the other bioactive components; an important consideration bearing in mind the potential reduction in polyphenol levels noted above.
One recent study though did compare the cognitive and mood effects of caffeinated and decaffeinated coffee to an inert coffee-flavoured placebo [144]. The results showed that both the caffeinated and the decaffeinated coffee drinks led to increased alertness, but that the caffeine-containing drink alone evinced significant cognitive effects. However, the overall pattern of results showed that the decaffeinated drink fell between the placebo and caffeinated drink on most measures, leading the authors to surmise a modulatory effect of the non-caffeine components of coffee. However, one must also consider the anticipatory effects of consuming a coffee drink, with most people aware of the psychophysiological effects of the caffeine component and perhaps even subjectively detecting (or, alternatively, expecting) its presence or absence in the investigational drink. Green coffee Unroasted 'green' coffee benefits from retaining much higher levels of CGA. The consumption of high CGA coffee, made by combining roasted and green beans, for 8 weeks has previously been shown to have beneficial effects on multifarious cardiovascular parameters [145,146]. For example, two acute dosing studies have demonstrated improved postprandial hyperglycaemia and vascular endothelial function following high CGA coffee in comparison to a caffeine-matched placebo [147,148]. However, a single physical exercise study found that a light roast high-CGA/caffeine coffee (1066 mg/474 mg, respectively) improved overall mood, but was no more effective than a dark roast low CGA/caffeine (187 mg/33 mg, respectively) coffee in terms of cycling performance or post-exercise oxidative stress and inflammatory status [149]. With such a small literature, however, it is not possible to draw very firm conclusions here and, as will be seen in subsequent sections, the effects of caffeine and/or its co-consumed compounds can often be very activity/sport specific. As such, cycling may not be benefited here but, again, with just one trial, this conclusion is probably premature. With regard to brain function, one crossover study [150] showed that a decaffeinated high-CGA green coffee (521 mg CGA, 11 mg caffeine) improved the performance of attention tasks, subjective alertness and other aspects of psychological state in 39 healthy older adults, whereas a standard decaffeinated instant coffee (224 mg CGA) had no effect. A subsequent study [151] replicated the beneficial psychological effects of the decaffeinated high-CGA coffee, but found that neither a placebo drink nor a control drink, containing the chlorogenic acid and caffeine components, had any effects. Taken together, these studies show that a low caffeine, high CGA green coffee has beneficial psychoactive effects, but that these effects may depend in part on interactions with components other than just caffeine and CGA. Two studies have also assessed the effects of caffeinefree green coffee extracts. The first, a parallel-groups study, measured cognitive function in 38 middle-aged/older adults who reported subjective memory decline. The results showed that, after 16 weeks' supplementation, a drink containing 300 mg CGA had cognitive effects limited to a psychomotor task and an executive function task, in comparison to placebo. Effects were not seen earlier than this (i.e., at the 8-week assessment point) [152]. 
In a follow-up, crossover study in 34 individuals with mild cognitive impairment, 12 weeks' administration of a similar high CGA (550 mg) beverage resulted in improved performance on a single executive function/attention task [153]. Together, these findings might suggest that a minimum of 12 weeks is required to exert effects on these cognitive performance outcomes. Coffee berry A small but recently growing body of research has also investigated the behavioural effects of coffee berry extracts made from the fruit pulp surrounding the coffee bean. These extracts are particularly high in chlorogenic acids and a typical dose delivers levels of caffeine similar to decaffeinated coffee (~ 1 to 2% caffeine). To date, there has been little research investigating the ergogenic effects of coffee berry, although one small study found that this intervention caused an improvement in antioxidant status but had no effect on exercise parameters [154]. In a non-exercise context, single doses of coffee berry extracts have been shown to increase the synthesis of neurotrophic factors such as BDNF [155][156][157], and a range of coffee berry extract doses (100, 300, 1100 mg) reduced the mental fatigue, and attenuated the decreased alertness, associated with extended performance of demanding cognitive tasks [158,159]. A follow-up study contrasting the cognitive and psychological state effects of 1100 mg coffee berry extract alone, and combined with apple polyphenol extract, found that coffee berry alone increased alertness and vigour and decreased fatigue across the 6 h of post-dose assessments. However, this effect was blunted by the addition of apple polyphenols, although this extract did improve the performance of an executive function task [160], demonstrating the importance of, where possible, considering the effects of treatment arms in isolation. Brain imaging studies have also demonstrated that drinks containing coffee berry extract (1100 mg) can increase cerebral blood flow in the frontal cortex during cognitive tasks [159] and increase functional connectivity between brain regions implicated in task performance [157]. One study has also investigated chronic effects. In this case, when coffee berry extract was taken in the morning or twice per day for 7 and 28 days by sufferers from mild age-related cognitive impairment, there was a significant improvement in speed and a trend towards improved accuracy on a demanding working memory/executive function task [161]. This effect was not seen when the extract was only taken in the evening and may speak to the benefits of the alerting effects of coffee berry in the morning when the psychophysiological effects of this are more impactful (i.e., the daytime period comprises tasks that require increased arousal, as opposed to the evening and overnight) and welcomed (i.e., participants actively desire the subjective and objective increase in alertness in the morning when rousing from sleep). This also fits with the peak plasma levels of caffeine, which would be anticipated between 15 and 30 min in most consumers and may adversely incite wakefulness when coffee berry is consumed in the evening. In conclusion, whilst little research has been conducted in such a way as to disentangle caffeine's effects, evidence does suggest that coffee's non-caffeine phytochemical components exert some independent effects on psychological functioning and may modulate caffeine's effects, or vice versa. 
Although neither green coffee nor coffee berry products have benefitted from substantial research efforts as yet, both contain higher levels of potentially bioactive CGAs, and might be expected to exert greater independent effects on function. It is also possible that the functionality of these low caffeine extracts might be increased by higher levels of caffeine. Green Tea (Camellia sinensis) Green tea contains significant levels of flavanols, including catechin, epicatechin and the tea-specific polyphenol epigallocatechin gallate (EGCG). It also contains the teaspecific amino-acid ʟ-theanine and caffeine. Meta-analyses of controlled trial data show that the consumption of green tea extracts is associated with a number of cardiovascular and anthropomorphic benefits, including enhanced total antioxidant status [162], improved glucoregulation [163] and significant benefits to weight, BMI and waist circumference, irrespective of caffeine content [164]. Whilst the exercise performance effects of green tea extracts remain unclear, consumption of caffeinated green tea extracts for more than 1 week has been shown to reduce exercise-induced oxidative stress [165]. To date there is little research assessing the brain function effects of green tea extracts or tea catechins, and no studies directly assessing brain function in a sport/exercise context. Two fMRI studies demonstrated modulation of brain function following single doses of a whey milk drink supplemented with green tea extract [166,167], but failed to match their whey control drink for caffeine. Two studies also demonstrated cerebral blood flow and electroencephalogram (EEG) effects of single doses of the tea polyphenol EGCG [122,168], but in the absence of any cognitive performance effects. There are rather more data with regard to the green tea components caffeine and ʟ-theanine and their potential interactions. Whilst ʟ-theanine by itself is not associated with any significant benefits to mood or cognitive function, a meta-analysis of the data from seven acute dose studies found that caffeine and ʟ-theanine combinations increased alertness and attention task performance for the first 2 h after consumption. The disparate doses employed in these studies ranged from 30 to 250 mg caffeine and from 12 to 200 mg theanine, with a stronger relationship between caffeine dose and performance of the two [169]. Several studies have also directly investigated the potential interactive effects of caffeine and l-theanine. One study [170] found that a single dose of 50 mg caffeine had its expected effects in terms of improved alertness and increased accuracy on an attention task, but that the combination of caffeine with 100 mg ʟ-theanine resulted in additional benefits in terms of improved attention task performance and improved long-term memory, an outcome not typically associated with caffeine alone. A subsequent study [171] found that whereas caffeine (150 mg) and caffeine combined with ʟ-theanine (250 mg) elicited common improvements in the performance of a rapid visual information processing (RVIP) task and decreased subjective 'mental fatigue', the caffeine/ʟ-theanine combination also led to a number of significant benefits over those seen following caffeine alone, including improved alertness and tiredness and enhanced working memory performance; again, this was an outcome not typically associated with caffeine alone. 
Similarly, whilst single doses of 200 mg of ʟ-theanine and 160 mg of caffeine improved the performance of one of two attention tasks, their combination resulted in a numerically more significant effect than either treatment alone [172]. In contrast to these previous studies, a further study [173] found that whereas both 200 mg caffeine and 200 mg ʟ-theanine had significant but markedly different effects on attention task performance, their combination had no cognitive effects. In this instance, caffeine both alone and in combination with theanine modulated mood, but theanine alone had no effect. Similarly, a further study showed that a low dose of ʟ-theanine (75 mg) attenuated some of caffeine's (50 mg) cognitive and mood effects [174]. This study also found that the reduction in cerebral blood-flow in the frontal cortex during task performance caused by caffeine was abolished by the addition of ʟ-theanine. Finally, a recent brain imaging (fMRI) study showed that whilst both ʟ-theanine (200 mg) and caffeine (160 mg) exerted different, independent effects on brain activation, the two compounds taken together elicited a synergistic, interactive effect on activation in brain regions associated with task performance [175]. In conclusion, given the global popularity of teas, the effects of green tea extracts/infusions are comparatively under-researched. There is evidence that caffeine increases the bioavailability of tea flavanols [105,106] and evidence of synergistic relationships between caffeine and the tea amino-acid ʟ-theanine with respect to brain function. The mental performance effects of tea extracts or infusions and the interactive contributions of their caffeine, ʟ-theanine and flavanol components deserve greater attention. Additionally, the delivery of caffeine in its naturally consumed state within tea and coffee drinks arguably offers a much more realistic insight into its effects than an isolated, encapsulated dose of caffeine. However, this raises the question of whether additions of milk and sugar, in particular, are permitted. These may make the drink more palatable for many consumers, but may also alter the plasma kinetic profile of phenolics. Zhang et al. [176], for example, reported that carbohydrate and fat can increase the absorption and time to reach maximum concentration (t max ) of co-consumed polyphenols, and so it is unsurprising that Renouf et al. [177] observed that the addition of creamer and sugar to black coffee significantly reduced the concentration maximum (C max ) and t max of chlorogenic acids. Further, a small amount of research suggests that this can negatively impact some of the mechanisms relevant to this review; Lorenz et al. [178], for example, report a blunting of positive vascular effects, induced by tea, with the addition of milk. As a result, the findings of studies that permitted the use of milk and sugar should likely be considered differently to those trials that administered black coffee alone. Moving forwards, it is important for future trials to decide whether the trade-off between having a more palatable investigational product, especially with older participants, outweighs the benefits of having a macronutrient-free caffeine drink. Energy Drinks and Shots Caffeinated energy products include a wide range of gels, bars and drink powders. 
However, ready-made energy drinks and shots have lately attracted the majority of relevant, product-specific research and it could be argued that this is due to the fact that these products still dominate the market. More recently, functional chewing gums containing caffeine have entered the market, and a meta-analysis published this year [179], comprising 14 studies and over 200 participants, reported a significant ergogenic effect of caffeine gum, relative to placebo, especially for trained individuals, in both endurance and strength/power activities. This was evinced when consumed within 15 min of commencing the activity and at doses of 3 mg/kg and above only. However, to the best of our knowledge, no functional caffeinated gums containing additional compounds (even glucose) have yet been investigated with randomised controlled trials, and so the effects here are exclusively attributed to caffeine. As such, caffeineonly gums do not fall within the purview of this review. Energy drinks and shots typically contain caffeine and taurine, often in combination with glucose, amino acids, vitamins or herbal extracts. However, it is a common practice to simply attribute all of the effects of energy drinks/ shots to their caffeine content alone, even if this attribution is not based on any evidence (i.e., the trial does not include a caffeine-only arm). Conversely, meta-analyses purportedly investigating the ergogenic effects of caffeine have often conflated pure caffeine and energy drink studies (e.g., [59,66]), ignoring the likely key role that caffeine plays within the drink matrices. In reality, there is increasing evidence of interactive effects between caffeine and the other bioactive components of these products. In terms of ergogenic effects, a recent meta-analysis of the data from 34 studies [180] found that energy drinks containing caffeine and taurine resulted in significantly improved endurance exercise test performance, jumping, muscle strength and endurance, and cycling and running performance. Importantly, these effects were evident from doses of caffeine (~ 1 mg/kg) that were lower than those that would typically be considered ergogenic [39]. The benefits following the energy drinks were also significantly related to the amount of taurine in the drinks rather than caffeine. These findings suggest that taurine plays a pivotal role in the effects of products combining caffeine and taurine, and finds support in a subsequent meta-analysis confirming the ergogenic effects of taurine mono-treatments [181]. However, a more recent review suggests that the effects of taurine alone, in the absence of caffeine, are equivocal [182]. Whilst this review of 19 trials did observe positive effects of taurine supplementation across a range of activities (VO 2max , time to exhaustion, 3-and 4-km time-trial, anaerobic performance, muscle damage, peak power and recovery), this appeared hugely buoyed by timing of ingestion and the type of exercise protocol. The ergogenic effects of taurine were most effective at 1-3 g/day, when consumed over sub-chronic dosing regimens of 6-15 days, and where doses were consumed 1-3 h prior to activity. Given that plasma taurine concentrations peak at approximately 1-h post oral consumption, it is likely that the above acute ergogenic effects are due to mechanisms unrelated to muscular changes but rather directly related to effects within the central nervous system. 
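To make the per-kilogram thresholds discussed above more concrete, the following minimal R sketch converts absolute caffeine contents into relative doses and checks them against the ~3 mg/kg figure commonly cited as ergogenic; the 70 kg body mass and the product caffeine contents used here are illustrative assumptions rather than values taken from the cited trials.

```r
# Illustrative dose arithmetic: absolute caffeine content (mg) -> relative dose (mg/kg)
body_mass_kg <- 70                        # assumed athlete body mass
caffeine_mg  <- c(energy_drink = 80,      # assumed content of a standard 250 mL energy drink
                  caffeine_gum = 100,     # assumed content of one piece of caffeinated gum
                  pre_workout  = 210)     # 3 mg/kg target for a 70 kg athlete

relative_dose_mg_kg <- caffeine_mg / body_mass_kg
meets_3mg_kg        <- relative_dose_mg_kg >= 3   # conventional ergogenic threshold

data.frame(caffeine_mg,
           relative_dose_mg_kg = round(relative_dose_mg_kg, 2),
           meets_3mg_kg)
```

On these assumptions, a single standard energy drink supplies roughly 1.1 mg/kg, which illustrates why benefits reported for caffeine/taurine drinks at around 1 mg/kg caffeine point to a contribution from ingredients other than caffeine.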
It is also likely that glucose plays a pivotal role in caffeinated energy drinks above and beyond the effects of caffeine, or indeed glucose, in isolation. Carbohydrate ingestion has a well-established ergogenic effect on endurance exercise [183] and, more recently, resistance exercise performance [184], and so it is unsurprising that a recent meta-analysis of energy drinks, containing both caffeine and glucose, observed similar ergogenic benefits across a range of exercise types [185]. These included cycling, power-based activities (including within team sports) and more finemotor abilities like serving and strokes in racket sports and performance in golf and fencing. The effects appeared fairly nuanced, depending on the sport, but 3 mg/kg was the minimum dose required and consuming this prior to long-duration exercise produced the most positive results. The authors raise the interesting point here that consumption under these conditions serves to both supplement ergogenic compounds like caffeine and glucose, as well as to rehydrate. As such, any psychophysiological effects of these compounds could not be disentangled from the effects of hydration alone, a function which, in itself, has a huge impact on endurance exercise in particular [186]. There are very few studies specifically investigating the effects of energy drinks on psychological functioning in a sport/exercise context. However, an energy drink (comprising 200 mg caffeine, taurine, carbohydrate, amino acids) improved performance on a 35-km cycling time-trial, and enhanced attention/psychomotor function tasks measured before and after the exercise, in non-caffeine-withdrawn athletes [187]. A recent study comparing an energy drink to an isocaloric control drink also demonstrated ergogenic benefits plus improved performance on a simple reaction task that was interposed between warm-up and a bout of maximal exertion. These effects were seen alongside improved mood, vigour and ratings of perceived effort measured postexercise [188]. In contrast, two studies failed to establish any energy drink-related benefits to cognitive task performance following physical exercise [189] or a session of eSports [190], although this is most likely to be due to the very small samples employed. It should also be noted that the sport/exercise studies noted above only employed cognitive tasks sensitive to caffeine. This therefore leaves open the question of whether the combination with other bioactive ingredients resulted in broader effects than those expected following caffeine alone. In this respect the non-sport/exercise literature is more helpful. Taken as a whole, caffeine-containing energy drinks have consistent beneficial effects on attention task performance [191]. Studies comparing energy drinks to an isocaloric glucose containing placebo have also demonstrated improved simulated driving performance [192] and benefits that would not be expected from caffeine alone, including improved memory performance [193] and enhanced working memory in the absence of improved attention [194]. A particularly thorough crossover study, involving a large sample (n = 94) of healthy adults, compared a carbohydrate-free energy shot to placebo over 6 h post-dose. The results demonstrated broad cognitive benefits that included improved accuracy and speed of attention task performance and improved alertness. 
More importantly, improvements were also seen on measures that would not be sensitive to caffeine, including across working memory and episodic memory tasks and in ratings of depression and anxiety. All of these improvements were also seen during the later assessments, when the effects of caffeine might be expected to be waning. A subsequent, smaller and less methodologically stringent study broadly confirmed the findings of this trial and demonstrated that the effects of the same energy shot were broader and more pronounced than either caffeine alone or coffee with a similar level of caffeine [195]. With regard to specific ingredients, two studies have compared caffeine, taurine and their combination. In one of these studies taurine attenuated the increased alertness associated with caffeine alone [196], and in the other taurine blunted the increased speed of attention task performance associated with caffeine [197]. However, it should be noted that these two studies used a very limited selection of caffeinesensitive attention tasks, and do not preclude the possibility that caffeine/taurine might have had unmeasured interaction effects in terms of alternative cognitive domains that are not sensitive to caffeine. Irrespective of the direction of the functional relationship seen here, these results also confirm that both taurine and caffeine contribute to the effects of products that combine them. From a mechanistic perspective, the competition between caffeine's metabolites and taurine for access to the common CYP450 enzyme that metabolises them [198] could theoretically contribute to any modulation of caffeine's or taurine's effects in either direction. Interestingly, a cocktail of herbal extracts, several phytochemical components of which are common substrates for caffeine's CYP450s, also attenuated the beneficial cognitive and mood effects of caffeine [199]. Overall, there is no evidence to support the contention that the psychological effects of energy drinks are solely attributable to their caffeine content. In contrast, the small amount of available evidence suggests that multi-component energy drinks and shots will have beneficial physical and psychological effects that are either stronger, or in the case of cognition, broader, than would be expected from their caffeine content alone. Other Phytochemicals (Non-caffeinated) In terms of potential benefits to mental performance, a number of phytochemicals and herbal extracts derived from noncaffeinated plants engender mental performance benefits that are broader than those seen following caffeine. Whilst these extracts are most often consumed by themselves, several are commonly found in functional drink products. However, the levels of bioactive components in the extracts used in energy drink products are unclear, and no research has attempted to disentangle any interactions with caffeine. The following is a brief summary of evidence regarding some potential candidates for enhancing mental performance. Polyphenols Two 'atypical' polyphenols have garnered potentially relevant evidence suggesting mental performance benefits: curcumin and mangiferin. Meta-analyses of early data suggested that curcumin, the principal polyphenol in turmeric, may be effective in treating mood disorders [200]. 
Of relevance here, more recent evidence showed that both a single dose [201] as well as 4 weeks' [201,202] and 12 weeks' [202] administration of an enhanced bioavailability curcumin (400 mg) engendered improvements in attention and working memory, with additional improvements in mental fatigue or alertness seen after 4 weeks. After 12 weeks' administration these improvements also extended to the performance of cognitively demanding tasks. Another potential candidate is mangiferin, the principal polyphenol in mango leaf extracts. A number of studies have demonstrated acute and chronic ergogenic benefits of a mango leaf extract comprising 60% mangiferin administered in combination with luteolin or quercetin [203][204][205][206]. A recent single-dose study extended these findings to brain function [207] and demonstrated wide-ranging improvements to overall accuracy of cognitive task performance, including specific benefits to attention, memory and executive function tasks, across 6 h post-dose, following the consumption of 300 mg of the same mango leaf extract. The performance of cognitively demanding tasks was also improved. Monoterpenes Volatile terpenes, which comprise the principal component of essential oils, have a number of significant direct (e.g., receptor binding) and indirect (e.g., enzyme inhibition) effects on neurotransmission. The beneficial psychological effects of single oral doses of encapsulated monoterpene rich sage (Salvia officinalis/lavandulaefolia) essential oils, in comparison to placebo, include demonstrations of improved long-term memory task performance [208,209], working memory/executive function [210], and improved attention task performance and alertness, alongside reduced mental fatigue during the performance of mentally demanding tasks [209]. Similarly, an oral single dose of encapsulated peppermint (Mentha piperita) essential oil resulted in improved performance of cognitively demanding attention/executive function tasks, alongside reduced mental fatigue [211]. Volatile terpenes are readily absorbed by mucosal membranes, and in a later study, the wearing of a peppermint infused non-transdermal skin patch for 6 h resulted in improvements in memory, attention and alertness in comparison to a nonaroma skin-patch in young adults [212]. Diterpenes Extracts of Ginkgo biloba leaves contain high levels of both diterpenes (ginkgolides/bilobalide) and polyphenols. Research demonstrates cognitive enhancement, including in terms of memory and attention task performance, following single doses of ginkgo extract [213][214][215][216][217][218] in young adults, and following supplementation for 7 days or longer in both younger [219] and older [220][221][222] participants. Across these trials, doses ranged between 30, 120, 150, 180, 240, 300 and 600 mg/day. The latter includes two sizeable studies in middle-aged participants that demonstrated improved memory and attention performance after 6 weeks' supplementation [223,224]. These latter trials were performed by the same research team (as were [216,217] and [221,222]), and this may contribute to the relatively clear pattern of effects attributed to Ginkgo biloba, relative to many of the other compounds covered within this review. In all three cases, key methodological principles, such as dose and the source of the investigational product, were maintained between trials and this allows for a more robust comparison across the field. 
It is rarely possible for one research team to develop a consistent research profile with just one compound like this, but it would be prudent for disparate teams to try, where possible, to align methodological practices and make it easier to compare effects across the literature. Triterpenes The primary bioactive component of Asian (Panax ginseng) and American ginseng (Panax quinquefolius) extracts are triterpene ginsenosides. Compounds from this class owe their bioactivity to a structural similarity to many animal hormones. Single doses, ranging between 100 and 960 mg of Asian ginseng are associated with improved accuracy of memory tasks [133,[225][226][227][228], increased speed of attention task performance [133,229], and concomitantly improved performance and mental fatigue during performance of cognitively demanding executive function/attention tasks [230,231]. Single doses of a standardized American ginseng extract also resulted in improved working memory and dosedependent increases in speed of task performance [227] and improved working memory performance [228]. Again, the consistency of these effects can partly be attributed to methodological consistency across many of these individual trials, with the majority of studies conducted within two labs, utilizing standardised extracts at the same or similar doses. Novel Caffeine/Phytochemical Combinations It is quite possible that co-administration of caffeine alongside other psychoactive phytochemicals, including those noted above, will result in additive or interactive effects with regard to the nervous system. This may be either as a consequence of bi-directional modulation of any neurotransmission effects or due to caffeine's multifarious effects on the absorption, distribution, metabolism and excretion of other compounds [34]. In this regard it is notable that, for instance, monoterpenes [232,233] and triterpenes, including ginsenosides [234,235], alongside many other phytochemicals, are substrates for the same CYP450 enzymes as caffeine (e.g., CYP1A2). This mechanism alone may modulate either of the components' bioavailability and predispose their combination to have different, potentially greater effects than either component alone. As an example of this, a study in rats found that ginsenosides and caffeine had a synergistic effect with regard to antidepressant effects [236]. However, to date, the question of caffeine's interactive properties with the phytochemicals that are not found in caffeine synthesising plants is largely unexplored. Conclusions and Future Directions Globally, caffeine is the most widely consumed psychoactive compound and ergogenic aid. When taken in a purified form in a research context caffeine has reasonably well delineated ergogenic and psychological effects. However, in mental performance terms, these effects do not consistently extend beyond improving attention, including psychomotor function and vigilance. Caffeine alone has little impact on the other cognitive domains intrinsic to aspects of sporting performance. Additionally, research almost exclusively investigates the effects of single, acute doses of caffeine, and so the extent to which these findings can be applied to realworld scenarios of repeated daily consumption is arguably limited. 
In the real world, athletes and participants in sport typically consume caffeine alongside a complex mixture of other potentially bioactive compounds, either in the form of products derived from caffeine-synthesising plants or as an additive to multi-ingredient products. Given caffeine's many potentially interactive mechanisms of action, it is unsurprising that the psychological effects of these complex combinations of potentially bioactive compounds often exceed those that would be expected from caffeine alone. As an aside, this does raise important questions of safety and, whilst the trials included within this narrative review report no serious adverse events, it would be remiss of this review not to highlight the continued need to question the potential unanticipated effects of combining different compounds within one product and the effects of cumulative doses of the ingredients. The individual doses of each compound alone may not result in an adverse event, but there is scope for unforeseen negative effects following their combination. This is especially concerning when one considers that caffeine may enhance the absorption and distribution of other co-consumed ingredients (and vice versa), and so, whilst doses might seem safe in isolation, their enhanced pharmacokinetics may evince unsafe psychophysiological effects. As an example, this has been reported to a small extent with over-consumption of energy drinks in adolescent groups [237]. Controlled trial evidence in humans has directly confirmed functional interactions between caffeine and polyphenols, l-theanine and taurine. Additionally, high polyphenol extracts from several caffeine-synthesising plants with low levels of caffeine engender broader benefits to mental performance than expected from caffeine, even at much higher doses. This is certainly the case for high-flavanol cocoa and guaraná extracts, and high-CGA coffee berry. In the case of high-flavanol cocoa extracts the benefits to psychological functioning are evident when directly compared to caffeinematched control treatments. This gives a clear indication of the added value of cocoa-flavanols, but does not disentangle any interactions in the combination. Herein lies the problem for this research area. Very few of the many controlled trials assessing the psychological or ergogenic effects of caffeinated products have been designed with the requisite comparator arms to disentangle the interactive effects of caffeine. Future trials should focus both on combination products and their caffeine content alone and/or the combination minus caffeine. Ideally, studies could also instigate a full fractionation of all possible permutations of combination products. Further, trials rarely investigate both the acute and chronic effects of consuming caffeine alongside these additional ingredients. It may therefore be the case that where unconvincing acute effects of these combinations do exist, that longer term administration may result in more pronounced, or at least different, effects. Given the often sport/activity-specific effects of the compounds covered in this narrative review, it may also be the case that research has yet to home in on the most effective matching between physical performance requirements, caffeine and its co-consumed compounds. As such, where these unconvincing effects exist, it is probably too premature to discount them entirely. As an additional caveat, future research should undoubtedly be more representative of nonmale participants. 
It seems likely that consuming pure anhydrous caffeine is the most impoverished method of delivering caffeine for the enhancement of either physical or psychological functioning. However, more research is needed on a number of fronts. First, to disentangle the contributions of caffeine and the non-caffeine bioactive compounds in caffeinated products. Second, to establish the optimal level of caffeine in caffeinated products, including the potential for additional caffeine to further enhance the functional benefits of low caffeine extracts. Finally, to explore the potential for caffeine to potentiate the benefits seen following multifarious other psychoactive phytochemicals that have not been meaningfully combined with caffeine to date. Acknowledgements This supplement is supported by the Gatorade Sports Science Institute (GSSI). The supplement was guest edited by Lawrence L. Spriet, who convened a virtual meeting of the GSSI Expert Panel in October 2021 and received honoraria from the GSSI, a division of PepsiCo, Inc., for his participation in the meeting. Dr Spriet received no honoraria for guest editing this supplement. Dr. Spriet suggested peer reviewers for each paper, which were sent to the Sports Medicine Editor-in-Chief for approval, prior to any reviewers being approached. Dr. Spriet provided comments on each paper and made an editorial decision based on comments from the peer reviewers and the Editor-in-Chief. Where decisions were uncertain, Dr. Spriet consulted with the Editor-in-Chief. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of PepsiCo, Inc. Declarations Funding This article is based on a presentation by David Kennedy to the Gatorade Sports Science Institute (GSSI) Expert Panel in October 2021. Funding for participation in that meeting together with an honorarium for preparation of this article were provided by the GSSI. No other sources of funding were used to assist in the preparation of this article. Author contributions DOK was primarily responsible for writing the first draft of this manuscript. ELW revised this draft in response to reviewer comments. DOK and ELW read and approved the final version. Conflict of interest Within the last 5 years David Kennedy and Emma Wightman have been the recipient of funding from a number of companies with an interest in phytochemicals. These include research grants from Mibelle AG, Evolva, Activ'inside, Frutarom, Nexira, Finzelberg, PGT Healthcare and Pepsico, and honoraria or payment for other services from Sanofi, Oakland Law Group, PGT Healthcare and Pfizer. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Life Satisfaction in childhood: Latin American Immigrant Children in Chile The goal of the current study was to evaluate life satisfaction in a sample of 300 immigrant children aged between 8 and 13 years old. Satisfaction in different domains and overall life, was evaluated using the General Domain Satisfaction Index and the Overall Life Satisfaction index, respectively. These instruments were also applied to a sample of 300 non-immigrant children of similar age. Statistically significant differences were found only in the 12–13 years group, where the mean scores for immigrants were lower than those for natives, on the domains of family and home, material goods, interpersonal relationships, health, and use of time. Additionally, immigrants had higher mean scores on the domains of area of residence, school, and personal satisfaction. These results allow us to reflect on the influence of society in all domains throughout their lives. Thus, these findings contribute toward the creation of policies that integrate migrants. Subjective well-being (SWB) can be defined as the positive or negative evaluation of one's life as a whole, or of specific domains of one's life (Oyanedel, Alfaro, & Mella, 2015). SWB has two dimensions: affective, which is linked to the emotional aspects and the balance between positive and negative aspects; and cognitive, which is referred to as "life satisfaction" (Diener, 2006;Ferrer et al., 2014). The present study focuses on the latter dimension. The results of research on SWB among migrant children are inconclusive. On one hand, it has been reported that immigrant children in Chile have lower levels of well-being than do nonimmigrants (United Nations International Children's Emergency Fund [UNICEF], 2012), due to disadvantages stemming from being a minority group within the general population (Katz & Redmond, 2010), higher incidence behavioral problems and low self-esteem (Chen, 2014) and a high degree of internalization of mental health problems (Rogers, Ryce, & Sirin, 2014). On the other hand, it has also been reported that health and life satisfaction among immigrants and nonimmigrants do not differ significantly (Molcho et al., 2010) and that immigrants report higher levels of social adjustment than their non-immigrant counterparts (Dimitrova & Chasiotis, 2014). However, it is possible that some differences between immigrant and non-immigrant children can be attributed to socioeconomic factors than to the immigration status in itself (Molcho et al., 2010). The literature clearly shows a number of variables that influence the adaptation of migrant children and adolescents such as Family Capital (parental education, poverty, whether they are legal migrants or not); Student Resources (mental health, their facility in acquiring a second language) and the Type of School (good quality or not, school segregation, quality of the teacher's preparation) (Suárez-Orozco & Suárez Orozco, 2014). Most of the research on SWB in migrant children focuses on their experiences following migration. These experiences include a large number of acculturation stress factors (Suárez-Orozco et al., 2008), such as learning a new language, facing changes in family roles and responsibilities, removing from predictable context and social ties, legal status of family members, and dealing with racism or discrimination and other social threats to the migration process (Motti-Stefanidi & Masten, 2017;Suárez-Orozco, 2018). 
Though these stress factors are common, their influence on children's health can vary enormously depending on the amount of time the child has lived in the destination country, their social context, the age of the child, or the child's stage of development at the time of migration. Nevertheless, these studies establish that there is a lack of information regarding the situation of migrant children in other parts of the world, such as Latin America (Perreira & Ornellas, 2011), and that the evidence on immigrants and their health has focused on adults and families rather than on children and adolescents (Lansford, Deater-Deckard, & Bornstein, 2007). In Chile, only one study, by Oyanedel, Alfaro, and Mella (2015), has reported that Chilean children are 'very satisfied' with their lives as a whole, which confirms the results of international studies (Lee, Lamb, & Kenneth, 2009; Strózik, Strózik, & Szwarc, 2015). Based on this, the present study aimed to describe and analyze the factors related to the cognitive component of well-being (life satisfaction) in immigrant children. We hypothesized that there is a lower level of subjective well-being among immigrant children than among non-immigrant children, due to their current situation or the migration factors described earlier.

Method

This research is an observational, survey-based study with a cross-sectional design.

Participants

Criteria for inclusion were that the participants be immigrants (regardless of country of origin) or non-immigrants, aged between 8 and 13 years, attending public schools, with consent provided by both parents and children. There were no exclusion criteria.

General Domain Satisfaction Index (GDSI). This index evaluates well-being in terms of satisfaction with: home and family (GDSI 1), material goods (GDSI 2), interpersonal relationships (GDSI 3), area of residence (GDSI 4), health (GDSI 5), the use of time (GDSI 6), school (GDSI 7), and personal satisfaction (GDSI 8) (Montserrat, Casas, & Ferreria, 2015). There are different versions for the age ranges of 8-9 years, 10-11 years, and 12-13 years. An 11-point rating scale ranging from 0 (completely unsatisfied) to 10 (completely satisfied) was used for children aged 10-13 years, and a 5-point rating scale ranging from 1 (completely unsatisfied) to 5 (completely satisfied), represented by emoticons, was used with children aged 8-9 years. The global index was calculated by summing the mean score on each domain. The Cronbach's alpha was 0.88 for the 8-9 years group, and .87 for the 10-13 years group. Overall Life Satisfaction (OLS). This is a single item that evaluates general life satisfaction (Campbell, Converse, & Rodgers, 1976). Children in the 8-9 years group were asked, "How happy do you feel with your life in general?" Responses were made on a 5-point scale ranging from 1 (completely unsatisfied) to 5 (completely satisfied), represented by emoticons.

Procedure

The study was approved by the ethics committees of the Universidad Católica del Norte. The participants were selected using non-probabilistic, purposive sampling, with similar quotas for immigrants and non-immigrants and three different age ranges (8-9 years, 10-11 years, and 12-13 years). To access the enrollment lists of the schools with the greatest number of foreign students, requisite paperwork was completed through the Ministry of Education. In compliance with ethics protocols, permission to conduct the study was sought from the principals of the schools.
The guardians of the participants were provided with an informed consent form; once signed, this form permitted the minors to participate in the survey. The surveys took place in the school, either in the respective classrooms or in another location where the children could fill out their corresponding questionnaires. Statistical Analysis The data were entered in SPSS 20 (IBM, 2011). Descriptive statistical analyses were carried out for sociodemographic variables and the GDSI, and OLS scores. The Multivariate Analysis of Variance (MANOVA) was used to evaluate the differences in scale means between immigrants and non-immigrants, as well as to evaluate the effect of gender, and the interaction between gender and nationality. The measure of the effect size was calculated through the partial square eta. The Bonferroni correction was applied, which involved making decisions with a significance level of .05/3 = .017. That is, we will only consider as statistically significant differences when the p value is lower than .017. Finally, regressions were carried out to estimate the capacity of the different domains to predict general satisfaction (OLS score). Results Results are presented by age range. The differences between non-immigrants and immigrants and between males and females were not statistically significant in the age ranges of 8 -9 years and 10-11 years. The gender × nationality interaction was not statistically significant in any of the age ranges. Table 2 shows that, for this range, the mean score of the non-immigrants was higher than that for immigrants on the domains of satisfaction with home and family, material goods, interpersonal relationships, and area of residence. The immigrants' group had higher mean scores than non-immigrants on the domains of satisfaction with health, use of time, and school. It was found that immigrants had a higher general life satisfaction score. 8-9 years old [Insert table 2] Table 3 shows the regression coefficients of the different domains of satisfaction overlaid on the general life satisfaction score. Only dimensions that were significant to the model were included in the table. It was found that the domains for satisfaction with family and home (p = .013) and school (p = .031) explained 15.6% of the variance in the general life satisfaction score in the total sample. When segmented by nationality, satisfaction with family and home (p = .004) explained 21.8% of the variance in the general life satisfaction score for non-immigrants, while the satisfaction with the use of time (p = .040) explained 10.8% of the variance in the general life satisfaction score for immigrants. [Insert table 3] Table 4 shows that the mean scores of immigrants was lower than that for the nonimmigrants on the domains of satisfaction with family and home, material goods, area of residence, and health; while immigrants had higher mean scores on the domains of satisfaction with interpersonal relationships, use of time, school, and personal satisfaction. It was found that nonimmigrants had a higher general life satisfaction score as compared to immigrants. [Insert table 4] For this age range, satisfaction with material goods (p = .021) and personal satisfaction (p = .000) were significant for the model, explaining 36.6% of the variance in the general life satisfaction score. 
For non-immigrants, satisfaction with family and home (p = .037), satisfaction with material goods (p = .035), and personal satisfaction (p = .000) explained 43.7% of the variance in the general life satisfaction score. For immigrants, personal satisfaction alone (p = .027) explained 27.7% of the variance in the general satisfaction score (Table 5). [Insert table 5] Table 6 shows the mean scores on the different domains. It is evident that the mean scores of the immigrants were lower than those of non-immigrants on the domains of satisfaction with family and home, material goods, interpersonal relationships, health, and use of time; while immigrants had higher mean scores with regard to satisfaction with the area of residence, school, and personal satisfaction. It was found that non-immigrants had a higher general life satisfaction score as compared to immigrants. -13 years old The mean score of the non-immigrants was significantly higher than that of immigrants only in domains of satisfaction with family and home, (F(1) = 6.168; p = .014; p 2 = 0.030) and satisfaction with health (F(1) = 5.806; p = .017; p 2 = 0.029). On satisfaction with the school domain, there were no statistically significant differences between males and females, immigrants, and non-immigrants, or due to the sex × nationality interaction. With regard to the general life satisfaction score, the boys' mean score was significantly higher than of girls (F(1) = 15.816; p = .000; p 2 = 0,074), with a medium size effect. [Insert table 6] In this range, the variables that fitted the model-material goods (p =.000), use of time (p = .023), and personal satisfaction (p =.000)explained 57.2% of the variance in the general life satisfaction score (Table 7). When grouped by nationality, satisfaction with health (p = .009), satisfaction with the use of time (p = .008) and personal satisfaction (p = .007) explained 52.9% of the variance for immigrants. For non-immigrants, 68.1% of the variance was explained by satisfaction with material goods (p = .002) and personal satisfaction (p = .000). Discussion Based on the results, the initial hypothesis, that there would be a lower level of SWB among immigrant children compared to their non-immigrant counterparts, was partially rejected, since the level of SWB was found to depend largely on the age range under consideration. On one hand, we found differences in the general perception of life satisfaction, but these were not significant. On the other hand, we found significant differences in the satisfaction regarding certain domains of life; satisfaction with family and home showed differences in all the age ranges. The domain of satisfaction with family and home has often been found to be important for children (Rees & Dinisman, 2015;Strózik et al., 2015;UNICEF, 2012), given that it is a variable with emotional and relational factors that has a strong effect. This is since family is the first socializing agent for children, and thus, plays an important role in their later development (Casas, et al., 2008). Furthermore, their family system often facilitates appropriate psychological development and provides children with tools for personal well-being and confidence (Parra & Villadiego, 2008;Flores, 2005;Tejedor, 2012). Migration could therefore be considered a risk factor for the families and, most importantly, the children who experience it, due to issues such as familial disintegration, loss of references, and the formation of new family units (UNICEF, 2012). 
Parent-child and parent-adolescent separation has long-term effects on children's well-being, even if there is subsequent reunification. These children and adolescents can experience difficulty with emotional attachment to their parents, self-esteem, and physical and psychological health (Smith, Lalonde, & Johnson, 2004; Gubernskaya & Debry, 2017; Suárez-Orozco, Bang, & Kim; Rush & Reyes, 2013). All these circumstances can influence children's later evaluation of their satisfaction with family. We must consider that children have little influence on the migration decision; however, it is necessary to know whether they can shape families' subsequent migration paths (Hoang, Lam, Yeoh, & Graham, 2015). Satisfaction with health was another area that showed significant differences in the mean scores: immigrant children were less satisfied than non-immigrant children. This difference may stem from a variety of factors, such as the delay in parents having the capacity to provide socioemotional attention to children (Perreira & Ornellas, 2011), the difficulties that arise in the healthcare system, the slowness of the process of legal paperwork and obtaining a visa (which delays access to health care), the lack of knowledge of procedures for making requests or asking questions regarding health care, and discrimination (Fernández et al., 2014; Llop, Vargas, García, Beatriz, & Vásques, 2014; Mosquera, 2013; Rojas et al., 2011). These factors can prevent immigrants from having an objective view of the benefits of the health system. The results of this study are the opposite of those found in immigrant children in Germany, where perceived health was slightly higher in migrants than in native-born German children, even though the migrant children were characterized by a lower socio-economic status. The authors note that the pattern of migration in Europe may be changing, with more migrants able to migrate with healthy profiles (Villalonga-Olives et al., 2014). More research is required in this area and on mental health outcomes in migrant children (Chan, Mercer, Yue, Wong, & Griffiths, 2009). It was interesting to note the effects of gender across the sample. As children grow older, there are discrepancies between boys' and girls' perceptions of well-being, often inversely proportional to their age (Dinisman & Ben-Arieh, 2015). Satisfaction diminishes as the differences in gender become apparent (Strózik et al., 2015). Empirical evidence (Casas, Bello, González, & Aligué, 2013; Javaloy et al., 2007) suggests that males tend to report greater subjective well-being than do females, and that females tend to indicate greater negative affect and greater affective intensity than males do. This may explain why females experience positive and negative emotions simultaneously, with greater intensity. This may be related to different gender roles. Another possible explanation for these differences is the effect of socialization and culture, which assign a privileged position to male status (Sanhueza & Lessard, 2018). From another point of view, as pointed out by Suárez-Orozco and Suárez-Orozco (2014), some immigrant children are asked to take on 'parentified' roles, including translation and advocacy, which usually fall to daughters (Faulstich-Orellana, 2001).
These results indicated that at the global level, and in the different age groups, both immigrant and non-immigrant children have a high subjective well-being index, matching findings from other developing countries (Casas et al., 2008;2012b;2013;Dinisman & Ben-Arieh, 2015;Rees & Dinisman 2015;Strózik et al., 2015). However, some studies indicate that these high levels of well-being could be influenced by the so-called Optimism Bias, which explains why, when asked any question related to satisfaction, people tend to report higher percentages of satisfaction, as compared to indices of dissatisfaction, which is independent of their actual sociodemographic situation; this bias increasing significantly in children and youth (Oyanedel et al., 2015;Pascual-Roig & Castro-Lamela, 2014;UNICEF, 2012;). It is important to clarify that these high indices of satisfaction and well-being are not decisive; even though scores for level of satisfaction are above average for all domains, the scores are not the same across all domains. About variables that can predict global life satisfaction for immigrants, different models explain the effect of the satisfaction domains on general life satisfaction score. OLS can be estimated using satisfaction with school for immigrant children in the 8-9 years group; personal satisfaction for the 10-11 years group; and satisfaction with health, personal satisfaction, and satisfaction with the use of time for the 12-13 years group. These predictor variables are important-especially the personal satisfaction domain, as it is the indicator that best estimates general well-being-when making changes to policies that address immigrant children (Casas et al., 2013;UNICEF, 2012). It is important to note that adolescents arriving in a new country must face the difficulties inherent in the process, but also the high expectations that the family usually has regarding their adaptation and performance at school (Smith, Brown, Tran, & Suárez-Orozco, 2020). There are few studies regarding the subjective well-being of Hispanic children, and this is particularly true for immigrant children. As a result, the data obtained in our study can only be compared to similar studies by Oyanedel et al. (2015) and UNICEF (2012). Therefore, we believe that it is important to continue this line of research, and to continue to support the efforts to understand the well-being of children that have been made since the 1970s, which, to date, are insufficient (Ben-Arieh, 2000). Therefore, it is necessary to eliminate the "adult-centric" view of childhood (UNICEF, 2012;Oyanedel et al., 2015) and pay attention to what children tell us regarding their perceptions and evaluations of their current life. This is the only way to analyze their situation and to take measures that can help and support them in improving or maintaining current levels of satisfaction and well-being. Furthermore, though it is important to know in which domains and in which situations children have low levels of satisfaction, it is also important to focus our attention on what makes children happy, and to promote the areas that produce the greatest satisfaction over those that do not. Using this type of study, different stakeholders can contribute to policies that directly benefit children, which respond to demands that are related to maintaining high levels of well-being and how children migrants can define strategies to take advantage from the opportunities that the country of reception may give them (Smith et al., 2014). 
It is important to mention that this study had some limitations, such as gaining access to the population of immigrant children. Some parents did not provide their consent due to their undocumented status, as they believed that they might be exposed and harmed in some way if their children were to participate. The lack of studies from Latin America and, most importantly, Chile on this topic was a limitation that prevented us from comparing our results with others from similar contexts. Different methodological skills from various sub-disciplines are needed to study the dynamics of immigration in children in greater depth (Holloway, 2014). One limitation that should also be considered is that no factorial invariance analysis was performed for the instrument used, so the presence or absence of differences could partly reflect measurement effects of the instrument itself. The language used is an area that should also be explored, since even if the language is the same (Spanish), there could be different understandings of the questions depending on the country of origin. As a conclusion, these results can contribute to actions that take well-being into account and incorporate elements that favorably affect development and facilitate the development of
The Secretome of Irradiated Peripheral Mononuclear Cells Attenuates Hypertrophic Skin Scarring Hypertrophic scars can cause pain, movement restrictions, and reduction in the quality of life. Despite numerous options to treat hypertrophic scarring, efficient therapies are still scarce, and cellular mechanisms are not well understood. Factors secreted by peripheral blood mononuclear cells (PBMCsec) have been previously described for their beneficial effects on tissue regeneration. In this study, we investigated the effects of PBMCsec on skin scarring in mouse models and human scar explant cultures at single-cell resolution (scRNAseq). Mouse wounds and scars, and human mature scars were treated with PBMCsec intradermally and topically. The topical and intradermal application of PBMCsec regulated the expression of various genes involved in pro-fibrotic processes and tissue remodeling. We identified elastin as a common linchpin of anti-fibrotic action in both mouse and human scars. In vitro, we found that PBMCsec prevents TGFβ-mediated myofibroblast differentiation and attenuates abundant elastin expression with non-canonical signaling inhibition. Furthermore, the TGFβ-induced breakdown of elastic fibers was strongly inhibited by the addition of PBMCsec. In conclusion, we conducted an extensive study with multiple experimental approaches and ample scRNAseq data demonstrating the anti-fibrotic effect of PBMCsec on cutaneous scars in mouse and human experimental settings. These findings point at PBMCsec as a novel therapeutic option to treat skin scarring. Introduction Skin scarring after surgery, trauma, or burn injury is a major problem affecting 100 million people every year, causing a significant global disease burden [1]. Patients with hypertrophic scars, occurring in 40-90% of cases after injury [2], suffer from pain, pruritus, and reduced quality of life [3,4]. Skin scarring has been extensively studied [5,6], and recently, we were able to elucidate hypertrophic scar formation at the single-cell level [7]. However, many cellular mechanisms remain unclear, and for most conservative therapeutic options, we have low evidence of their efficacy [8]. Wound healing and scar formation are complex, rigidly coordinated processes, with multiple cell types being involved [9]. Wound healing is characterized by an acute inflammatory phase, a proliferative phase, and a remodeling phase [9]. Prolonged inflammation results in increased fibroblast (FB) activity, with enhanced secretion of transforming growth factor beta 1 (TGFβ1), TGFβ2, insulin-like growth factor (IGF1), and other cytokines [10,11]. TGFβ1 induces the differentiation of Patient Material Resected scar tissue was obtained from three patients who underwent elective scar resection surgery after giving informed consent. Scars were previously classified as hypertrophic, pathological scars according to the Patient and Observer Scar Assessment Scale (POSAS) [38] by a plastic surgeon. All scars were mature scars, i.e., they were at least two years old; had not been operated on; and had not been previously treated with corticosteroids, 5-FU, irradiation, or similar treatments. All scar samples were obtained from male and female patients younger than 45 years old, with no chronic diseases nor chronic medication. Healthy skin was obtained from three healthy female donors between 25 and 45 years of age from surplus abdominal skin removed during elective abdominoplasty. 
Animals In all mouse experiments, 8-12-week-old female Balb/c mice (Medical University of Vienna Animal Breeding Facility, Himberg, Austria) were used. Mice were housed in a selected pathogen-free environment according to enhanced standard husbandry with a 12/12 h light/dark cycle and ad libitum access to food and water. Full-Thickness Wound and Scarring Model in Mice For the full-thickness skin wound and scarring model, mice were deeply anesthetized with ketamine 80-100 mg/kg and xylazine 10-12.5 mg/kg i.p. They were given postoperative analgesia with the s.c. injection of 0.1 mL/10 mg Buprenorphin and 7.5 mg/mL Piritramid in drinking water. A 9 × 9 mm square area was marked on the back and excised with sharp scissors. The wounds were left to heal uncovered without any further intervention for 4 weeks, and the resulting scar tissue was observed and photographed. Production of Irradiated Mononuclear Cell Secretome (PBMCsec) The secretome of human PBMCs was produced in compliance with good manufacturing practice (GMP) by the Austrian Red Cross, Blood Transfusion Service for Upper Austria (Linz, Austria), as previously described [26,39] (Figure S1). PBMCs were obtained with Ficoll-Paque PLUS (GE Healthcare, Chicago, IL, USA)-assisted density gradient centrifugation, adjusted to a concentration of 25 × 106 cells/mL (25 U/mL; 1 Unit = secretome of 1 million cells) and exposed to 60 Gy cesium 137 gamma irradiation (IBL 437C; Isotopen Diagnostik CIS GmbH, Dreieich, Germany). Cells were cultured in phenol red-free CellGenix GMP DC medium (CellGenix GmbH, Freiburg, Germany) for 24 ± 2 h. Cells and cellular debris were removed with centrifugation, and supernatants were passed through a 0.2 µm filter. Methylene blue treatment was performed as described [40] for viral clearance. The secretome was lyophilized, terminally sterilized with high-dose gamma irradiation, and stored at −80 • C. All experiments were performed using secretomes of the following batches produced under GMP: A000918399086, A000918399095, A000918399098, A000918399101, A000918399102, and A000918399105. Immediately before performing the experiments, the lyophilizate was reconstituted in 0.9% NaCl to the original concentration of 25 U/mL. PBMCsec Injection into Mouse Scars Starting on day 29 after skin wounding, mice were injected with 100 µL 0.9% NaCl, medium (phenol red-free CellGenix GMP DC medium), or PBMCsec, which was prepared as described above, every second day for two weeks. Subsequently, half of the mice from each group (n = 2) were sacrificed and analyzed, while the other half (n = 2) were left for another two weeks without further intervention and then sacrificed. PBMCsec Topical Application on Mouse Scars Starting on the day of skin wounding (d0), mouse scars were treated with PBMCsec, medium, or NaCl 0.9%. Ultrasicc/Ultrabas ointment (1:1; Hecht-Pharma, Bremervörde, Germany) was used as a carrier substance for all treatments. Four parts of Ultrasicc/Ultrabas (50:50) and one part of water were mixed and used as control treatment (i.e., 100 µL contained 40 µL of Ultrasicc, 40 µL of Ultrabas, and 20 µL of agent or control). Then, 5 U/mL (200 µL of dissolved lyophilizate) PBMCsec or 200 µL/mL medium was mixed with ointment. Mice were treated with control or inhibitors by applying 100 µL of ointment on each wound immediately after wounding. 
After application, mice were individually placed in empty cages without litter for 30 min and closely monitored to prevent immediate removal of the treatments and achieve sufficient tissue resorption. Scabs were left intact to prevent wound infections. Mice were treated daily for the first 7 days and thrice a week for 7 weeks. After scar formation, 4 mm biopsies of the scar tissue were taken and cut in half. One half of each scar sample was used for histological analysis, and the other biopsy halves from each treatment group were pooled and analyzed together with scRNAseq as described below.

Ex Vivo Skin and Scar Stimulation

From human skin and scar tissue, 6 mm punch biopsies were taken; subcutaneous adipose tissue was removed; and biopsies were placed in 12-well plates supplemented with 400 µL of DMEM (Gibco, Thermo Fisher, Waltham, MA, USA; with 10% fetal bovine serum and 1% penicillin/streptomycin) and 100 µL of CellGenix medium or 100 µL of PBMCsec. In addition, 100 µL of medium or PBMCsec was injected into the upper dermis in the middle of the biopsy. Biopsies were incubated for 24 h and then harvested for scRNAseq analysis. Sample "Skin 1 medium" was lost due to technical difficulties during preparation.

Skin and Scar PBMCsec Stimulation, Cell Isolation, and Droplet-Based scRNAseq

Mouse scars and stimulated human skin and scar samples were digested using the Miltenyi Whole Skin Dissociation Kit (Miltenyi Biotec, Bergisch-Gladbach, Germany) for 2.5 h according to the manufacturer's protocol and processed using the GentleMACS OctoDissociator (Miltenyi). The cell suspension was filtered through a 100 µm filter and a 40 µm filter, centrifuged for 10 min at 1500 rpm, washed twice, and resuspended in 0.04% FBS in phosphate-buffered saline (PBS). DAPI was added at 1 µL/1 million cells for 30 s; cells were washed twice and sorted for viability using a MoFlo Astrios high-speed cell-sorting device (Beckman-Coulter, Indianapolis, IN, USA). Only distinctly DAPI-negative cells were used for further processing. Immediately after sorting, viable cells were loaded onto a 10X Chromium instrument (Single Cell Gene Expression 3′ v2/3; 10X Genomics, Pleasanton, CA, USA) to generate gel beads in emulsion (GEMs). GEM generation, library preparation, RNA sequencing, demultiplexing, and counting were performed at the Biomedical Sequencing Core Facility of the Center for Molecular Medicine (CeMM; Vienna, Austria). Sequencing was performed using 2 × 75 bp, paired-end reads, with Illumina HiSeq 3000/4000 instruments (Illumina, San Diego, CA, USA).

Cell-Gene Matrix Preparation and Downstream Analysis

Raw sequencing reads were demultiplexed and aligned to the human (GRCh38) and mouse (mm10) reference genomes using the Cell Ranger mkfastq and count pipelines (v4.0; 10X Genomics, Pleasanton, CA, USA) to generate cell-gene matrices. The cell-gene matrices were then loaded into "Seurat" (v4.0; Satija Lab, New York, USA) in an R environment (v4.1.2; R Foundation for Statistical Computing, Vienna, Austria) and processed according to the recommended standard workflow for the integration of several datasets [41,42]. All human skin and scar samples were integrated in a single integration; likewise, all mouse samples were integrated in a single integration. Cells with fewer than 500 or more than 4000 detected genes, more than 20,000 reads per cell, or a mitochondrial gene count higher than 5% were removed from the dataset to ensure high data quality.
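A minimal R/Seurat sketch of the quality-control and integration steps described above is given below; the file paths, sample names, and use of the default integration functions are illustrative assumptions rather than the authors' actual scripts, whereas the numeric filters (500-4000 detected genes, at most 20,000 reads per cell, less than 5% mitochondrial reads) are taken from the text.

```r
library(Seurat)

# Cell Ranger output directories (placeholder paths)
samples <- c(scar_ctrl    = "cellranger/scar_ctrl/filtered_feature_bc_matrix",
             scar_pbmcsec = "cellranger/scar_pbmcsec/filtered_feature_bc_matrix")

objects <- lapply(names(samples), function(s) {
  counts <- Read10X(samples[[s]])
  obj <- CreateSeuratObject(counts, project = s)
  # "^mt-" matches mouse mitochondrial gene symbols; use "^MT-" for the human samples
  obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = "^mt-")
  # Quality filters as described in the Methods
  subset(obj, nFeature_RNA >= 500 & nFeature_RNA <= 4000 &
              nCount_RNA <= 20000 & percent.mt < 5)
})

# Standard Seurat integration workflow (default parameters assumed)
objects  <- lapply(objects, function(o) FindVariableFeatures(NormalizeData(o)))
anchors  <- FindIntegrationAnchors(object.list = objects)
combined <- IntegrateData(anchorset = anchors)
```

Because human and mouse samples were each integrated separately, this block would in practice be run once per species with the appropriate mitochondrial gene prefix.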
After principal component analysis and the identification of significant principal components using the Jackstraw procedure [43], cells were clustered using non-linear dimensional reduction with uniform manifold approximation and projection (UMAP). Differentially expressed genes were calculated in Seurat using the Wilcoxon rank-sum test with Bonferroni correction. In all datasets, normalized count numbers were used for differential gene expression analysis and for visualization in violin plots, feature plots, and dot plots, as recommended by the guidelines [44]. In all datasets, cell types were identified according to well-established marker gene expression. To avoid the calculation of batch effects, the normalized count numbers of genes present in the integrated dataset were used to identify differentially expressed genes (DEGs). As keratin and collagen genes were previously found to contaminate skin biopsy datasets and potentially provide a false-positive signal [45], these genes (COL1A1, COL1A2, and COL3A1; KRT1, KRT5, KRT10, KRT14, and KRTDAP) were excluded from the DEG calculation in non-fibroblast clusters (collagens) or non-keratinocyte clusters (keratins), respectively. Moreover, the genes Gm42418, Gm17056, and Gm26917 caused technical background noise and a batch effect in the mouse scRNAseq, as previously described [46], and were thus excluded from the dataset.

Gene Ontology (GO) Calculation and Dot Plots

Gene lists of significantly regulated genes (adjusted p-value < 0.05; average log fold change (avg_logFC) > 0.1) were inputted into "GO_Biological_Process_2018" in the EnrichR package in R (v3.0; Ma'ayan Lab, Icahn School of Medicine at Mount Sinai, New York, NY, USA). Dot plots were generated using ggplot2 (H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2016), with color indicating the adjusted p-value and size showing the odds ratio, sorted by adjusted p-value.

TGFβ Injection Fibrosis Model in Mouse Skin

Mice were anesthetized with 3% isoflurane for three minutes. An interscapular area of approximately 1 × 1 cm was marked on the skin with a permanent marker. In total, 800 ng of TGFβ1 dissolved in 100 µL of NaCl 0.9%, medium, or PBMCsec (2.5 U) was injected into the marked area for 5 consecutive days, and mice were sacrificed on the 6th day. The marked injection areas were biopsied and prepared for histological analysis.

Isolation of Primary Skin FBs

Primary skin and scar FBs were isolated as previously described [7]. In brief, skin or scar samples were incubated overnight in Dispase II (Roche, Basel, Switzerland). Subsequently, the epidermis was removed, and the dermis was incubated in Liberase (Merck Millipore, Burlington, MA, USA) for two hours at 37 °C. Afterwards, the tissue was filtered and rinsed with PBS, and the cells were plated in a T175 cell culture flask and cultured until they reached 90% confluency.

Western Blots

Western blotting was performed as previously described [7]. In brief, after cell lysis in 1× Laemmli buffer, the lysates were separated on SDS-PAGE gels (Bio-Rad Laboratories, Inc., Hercules, CA, USA), and proteins were transferred to nitrocellulose membranes and blocked with non-fat milk. After overnight incubation at 4 °C with the primary antibody (the antibodies used are listed in Figure S1B), the membranes were incubated with a horseradish peroxidase-conjugated secondary antibody and imaged.
Immunofluorescence, H&E, and EvG Staining

Immunofluorescence staining on formalin-fixed, paraffin-embedded (FFPE) sections of human and mouse skin and scar tissues was performed as previously described [7], using the antibodies listed in Figure S1B. Hematoxylin and eosin (H&E) staining and Elastica van Gieson (EvG) staining were performed at the Department of Pathology of the Medical University of Vienna according to standardized clinical staining protocols.

TGFβ1-Induced Myofibroblast Differentiation

TGFβ1 stimulation of primary FBs was performed as previously reported [7]. Isolated primary FBs were plated in 6-well plates after the first passage and grown until they reached 100% confluency. FBs were then stimulated with 10 ng/mL TGFβ1 (HEK-293-derived; Peprotech, Rocky Hill, NJ, USA) and with medium or PBMCsec for 24 h. The supernatants were removed, and medium or PBMCsec was resupplied for another 24 h. The supernatants were collected and stored at −80 °C, and cells were lysed in 1× Laemmli buffer (Bio-Rad Laboratories, Inc.) for further analysis.

Elastase Assay

To measure elastase activity, a commercial kit (EnzChek® Elastase Assay Kit; E-12056; Thermo Fisher) was used according to the manufacturer's instructions. Elastase was applied at 250 mU/mL and incubated with NaCl 0.9% ("Ctrl"), medium, or PBMCsec at 1:1 with assay buffer. Fluorescence intensity was measured with a BMG Fluostar Optima plate reader (BMG Labtech, Ortenberg, Germany) at 505/515 nm (excitation/emission). Raw values were blank-corrected and expressed as a percentage of the averaged 4 h values of the Ctrl samples. Samples were measured 10 min, 1 h, 2 h, 3 h, and 4 h after elastase application. Statistical analysis was performed with a mixed-effects model for the time factor, with Tukey's multiple comparisons test.

ELISA

The supernatants of TGFβ1-stimulated FBs after treatment with PBMCsec or controls were collected, centrifuged, and stored at −20 °C for further use. The protein levels of human elastin were measured with an ELISA kit (LS-F4567; LSBio, Seattle, WA, USA) according to the manufacturer's manual. Absorbance was detected with a FluoStar Optima microplate reader (BMG Labtech).

PBMCsec Improves Scar Formation in Mice after Topical Treatment during Wound Healing and Intradermal Injection of Preformed Scars

As our previous study on wound healing in pig burn wounds revealed a trend towards better tissue elasticity and less stiffness in early pig burn scars [27], we aimed to investigate the effect of PBMCsec on scar formation and on already existing scars in more detail at the single-cell level. To achieve this, we created full-thickness excision wounds on the back of 6-8-week-old female Balb/c mice and immediately treated them with the topical application of PBMCsec for 8 weeks (Figure 1A). In a separate set of experiments, we allowed the scars to develop for 4 weeks after wounding without further intervention and treated the formed scars with intradermal injections for 2 weeks. Scars were either analyzed right after the two weeks of treatment or after two additional weeks without further treatment to determine whether treatment-associated changes were permanent (Figure 1B). As previously demonstrated with the secretome of non-irradiated PBMCs [25,26] or PBMCsec in diabetic mice [25,26], we found enhanced wound healing in wild-type mice after the topical application of an emulsion containing PBMCsec (Figure 1C).
PBMCsec reduced the wound size (to 40 ± 14% of the original wound size) significantly more than NaCl (72 ± 16%) and the control medium alone (60 ± 19%) (Figure 1D). Compared with the intradermal injection of controls, scars appeared softer and reduced in size after the injection of PBMCsec (Figure 1E). Histologically, scars showed a looser structure and reduced fiber density after topical PBMCsec treatment, as evidenced by hematoxylin/eosin (Figure 1F) and Elastica van Gieson (EvG) staining (Figure 1G). Of note, scars treated with intradermal injection exhibited a high number of infiltrating leukocytes (Figure 1H), presumably due to repeated tissue irritation with injections. However, the matrix was looser, and the orientation of collagen fibers showed more vertical structures after the injection of PBMCsec (Figure 1I). These results indicate that PBMCsec improves not only wound healing but also scar formation and the quality of already existing scars in mice.

PBMCsec Induces Significant Changes in the Transcriptome after Topical and Intradermal Application

Next, we performed scRNAseq on scar tissue from the different experimental settings. After quality control (Figure S2A-C,E-G), we defined clusters based on well-established marker genes [7,48] from the scRNAseq of topically treated scars (Figure S2D; Figure 2A). Notably, one fibroblast cluster, FB 4, was expanded after topical treatment with PBMCsec compared with the controls (Figure 2A,B), suggesting an important role in the anti-fibrotic action of PBMCsec. Furthermore, the relative numbers of DCs and TCs were increased with the control medium but slightly reduced with PBMCsec (Figure 2B). We then calculated the differentially expressed genes (DEGs) of all cell populations in PBMCsec-treated scars compared with medium- and NaCl-treated scars. Interestingly, significantly more genes were downregulated than upregulated after the topical application of PBMCsec (Figure 2C), and the highest numbers of regulated genes were found in FBs (red bars), macrophages (pale-green bars), and KCs (yellow bars) (Figure 2C) [35]. To provide an overview of the overall regulation in all cell types, we show the top 50 DEGs per cluster group in Figure S3A-I. The upregulation of numerous genes, previously described to be increased in scar tissue [7,49,50], was significantly inhibited after PBMCsec application.

[Figure 2 legend fragment: (F) numbers of significantly upregulated (positive y-axis) and downregulated (negative y-axis) genes ("nDEG") per cluster in "inject" mice, split into 6w and 8w; (G) gene ontology (GO) term calculation of genes downregulated by PBMCsec compared with medium in "topical" FBs; (H) GO term calculation of genes downregulated by PBMCsec vs. medium in 6w "inject" FBs; DEGs were calculated per cluster comparing 8- and 6-week-old scars using a two-sided Wilcoxon signed-rank test.]

As the highest number of regulated genes was observed in FBs and FBs are the main cell type involved in fibrotic processes, we further performed a gene ontology analysis of genes downregulated by PBMCsec application in FBs in both experimental settings (Figure 2G,H). Our analysis revealed that genes downregulated by PBMCsec mainly showed a strong association with the response to growth factors, integrin activation, monocyte chemotaxis, and extracellular matrix organization, suggesting that the activation of these processes was, at least partially, reduced with topical application (Figure 2G).
GO term calculation of downregulated genes in FBs after the injection of PBMCsec revealed changes in ECM and collagen organization, the response to growth factor stimulus, and Wnt signaling (Figure 2H). Taken together, these bioinformatic data suggest an anti-fibrotic, anti-inflammatory effect of PBMCsec on scar formation, primarily reducing excessive matrix deposition.

PBMCsec Significantly Alters the Matrisome

Since FBs contributed the most to the transcriptome alterations induced by PBMCsec and the GO analysis indicated that genes associated with the ECM were highly affected, we further assessed the genes of the matrisome in more detail. Differentially regulated genes in all FBs after topical application (Figure 3A-D) and intradermal injection (Figure 3E-H) were analyzed using the curated matrisome gene set enrichment analysis (GSEA) gene lists [51]. For better visualization, the whole matrisome was split into its main components, i.e., collagens, proteoglycans, glycoproteins, and ECM regulators. Interestingly, most of the matrisome-related genes were strongly downregulated by PBMCsec after topical and intradermal application (Figure 3). Similarly, most of the proteoglycans, glycoproteins, and ECM regulators showed reduced expression after PBMCsec treatment. However, some of the glycoproteins and ECM regulators, including Fn1, Igfbp4/5, Ecm1, Postn, and Mfap5, were even enhanced after PBMCsec treatment (Figure 3C,D), suggesting the targeted regulation of these factors. Importantly, we also identified a variety of proteases, including Mmp19 (matrix metalloprotease 19), Pcsk5/6 (subtilisin/kexin-like proteases PC5/6), and Adamts1 (a disintegrin-like and metallopeptidase with thrombospondin type 1 motif), regulated by PBMCsec. Furthermore, the urokinase-type plasminogen activator (Plau) and the tissue-type plasminogen activator (Plat), as well as the serine proteases Htra1, Htra3, and Aebp1, were elevated after the topical application and intradermal injection of PBMCsec. However, a variety of protease inhibitors, including Timp1 and -3 (metallopeptidase inhibitors 1 and 3), Slpi (secretory leukocyte protease inhibitor), and the potent urokinase inhibitors Serpine1, Serpinb2, and Serpinb5, were also increased (Figure 3C,D). These findings confirm our previous work highlighting the role of proteases and their inhibitors in skin fibrosis [7] and indicate that PBMCsec is able to interfere with the protease system that contributes to scar formation.

Scars Treated with PBMCsec Ex Vivo Show Strong Similarities to Mouse Models

As we showed an anti-fibrotic effect of PBMCsec during scar formation in mice, we next investigated its effect on human skin and ex vivo cultures of scar tissue. Therefore, we treated biopsies of human skin and human hypertrophic scars with medium or PBMCsec and cultivated them for 24 h (Figure 4A). After quality control and cluster identification (Figure S5A-D), the clusters aligned homogeneously across donors and conditions (Figures 4B and S5E). As described in our previous work, the ratio of FBs was increased in scars compared with skin [7], and several FB clusters (here, clusters FB5 and FB7) were specifically found in scars (Figure 4B,C). Remarkably, the percentages of FBs, DCs, and T cells were reduced in scars after PBMCsec treatment (Figure 4C). Next, we calculated DEGs separately for skin (Figure S6) and scars (Figure S7) and found a much higher number of DEGs in scars than in normal skin, indicating a strong effect of PBMCsec on fibrotic tissue (Figure 4D).
In line with our mouse datasets, most regulated genes were found in the FB clusters, and slightly more genes were downregulated than upregulated, particularly in skin tissue ( Figure 4D). Numerous genes that we previously described for their regulation in hypertrophic scars [7] were also favorably regulated by PBMCsec ( Figures S6 and S7). Next, we performed the GO term analysis of the DEGs in FBs treated with PBMCsec compared with medium. In line with the mouse data, downregulated terms ( Figure 4F) included collagen fibril and ECM organization, cytokine signaling pathway, negative regulation of signal transduction, regulation of extrinsic apoptotic signaling pathway, and type I interferon signaling pathway. Intriguingly, among the upregulated terms ( Figure 4E), negative regulation of neuron differentiation and generation of neurons were present. As we previously demonstrated that Schwann cells promote ECM formation in keloids and affect the M2 polarization of macrophages [52], this finding might hint at a mechanism of PBMCsec also affecting this crosstalk. Next, we assessed the genes of the matrisome in the human dataset ( Figure 4H). Similarly to the data obtained for mouse scars, collagens COL1A1, COL3A1, and COL6A1/2/3 were also strongly downregulated, more in scars than in skin, and proteases MMP1/MMP3/10 as well as protease inhibitors SERPINE1/G1/F1/B2, SLPI, and TIMP3 were upregulated ( Figure 4D). Of note, PBMCsec increased the expression of PI3, an elastase-specific protease inhibitor in human scar tissue, indicating a regulatory effect not only on collagens but also on elastic ECM components. Together, our analysis of human ex vivo skin and scars corroborated the findings of the in vivo mouse experiments, indicating an ECM-balancing, anti-fibrotic effect. PBMCsec Abolishes Myofibroblast Differentiation In Vitro After a comprehensive analysis of the effects of PBMCsec in mouse and human models at the single-cell level, we investigated the underlying mechanisms of the observed anti-fibrotic activity in vitro. Using a well-established in vitro fibrosis model [7,53], we stimulated primary human skin FBs with TGFβ1 and investigated the effect of PBMCsec on myofibroblast (myoFB) formation [54]. Upon the stimulation of FBs with TGFβ1, FBs showed robust differentiation to αSMA-expressing myoFBs in all control treatments (NaCl and medium) ( Figure 5A). In contrast, the addition of PBMCsec completely abolished myoFB differentiation and αSMA expression ( Figure 5A,B). As our scRNAseq revealed that of all major ECM components, Eln/ELN was the most consistently downregulated one in the matrisome of both mice and humans, we further assessed the effect of PBMCsec on the expression of elastin in vitro in FBs. Strikingly, elastin protein and mRNA expression were strongly downregulated by PBMCsec ( Figure 5A,C), and the secretion of ELN in the supernatant was significantly inhibited ( Figure 5D). Next, we investigated whether PBMCsec contains TGFβ inhibitors. Therefore, we used an HEK-cell-based reporter assay to assess the activity of canonical TGFβ1 signaling. While PBMCsec showed little-to-no TGFβ1 activity, the addition of PBMCsec to active TGFβ1 did not inhibit canonical TGFβ1 activity ( Figure S8A). These data indicate that PBMCsec does not inhibit myoFB differentiation by inhibiting Smad2/3-mediated TGFβ1 activity, suggesting a more downstream inhibitory or non-canonical action. 
To confirm the observed TGFβ effects in vivo, we injected TGFβ1 into murine skin (modified after Thielitz et al. [53]) for 5 consecutive days (Figure S8B). Although no morphological changes were visible in hematoxylin-eosin staining (Figure S8C), the immunostaining of Collagen I and III showed patches of increased matrix deposition in all samples (arrows in Figure S8D,E), which were not present in mice also treated with PBMCsec. Remarkably, we also observed accumulations of αSMA-expressing cells in the TGFβ1-injected deep murine dermis (squares), but not in PBMCsec-treated mice (Figure S8F). Next, we aimed to further investigate changes in ECM composition, particularly elastin, in a human model. Thus, we injected TGFβ intradermally into human skin explants with and without NaCl, medium, or PBMCsec (Figure 5E). Morphologically, no changes were observed in H&E staining (Figure 5F); however, when we stained for the overall ECM configuration using Elastica van Gieson staining (Figure 5G) and with immunofluorescence for elastin (Figure 5H), we noticed specific subepidermal alterations in elastic fibers. In untreated skin, elastin showed vertical fibers reaching into the dermal papillae with parallel, horizontal fibers in the deeper dermis. These vertical, papillary fibers disappeared after TGFβ1 treatment but were preserved when PBMCsec was added (Figure 5G,H). These data suggest that PBMCsec is able to reduce the breakdown of elastic fibers, which occurs after TGFβ stimulation.

Combined Analysis of Murine and Human scRNAseq Datasets Reveals Elastin and TXNIP as Joint Key Players of Beneficial PBMCsec Effects

To better understand the shared ECM-balancing and anti-fibrotic mechanisms of action of PBMCsec, we performed the subclustering of the FBs of all scRNAseq datasets (Figure S9D) and performed a combined analysis (Figure 6A). As myoFBs, i.e., Acta2/ACTA2-positive FBs, disappear in mature scars [54], these cells were not detected in most of our datasets. Therefore, we were not able to investigate the effects of PBMCsec on myoFB differentiation in our scar models in detail (Figure S9A-C). However, we detected a significant reduction in ACTA2 in ex vivo PBMCsec-treated human scars (Figure S9C), indicating that even in mature scars, PBMCsec can reduce myoFB content. When overlaying DEGs from FBs from all three experiments, no genes were mutually upregulated (Figure 6B). Interestingly, Eln/ELN and Txnip/TXNIP were mutually downregulated in all experimental settings (Figure 6C). Elastin and TXNIP were solidly reduced in all three scRNAseq datasets, at both time points after injection, and in human scars (Figure 6D,E).

[Figure legend fragment: for violin plots, a two-sided Wilcoxon signed-rank test was used in R; ns p > 0.05, * p < 0.05, ** p < 0.01, and *** p < 0.001.]

As we have shown that PBMCsec does not interfere with canonical TGFβ1 activity, we next wanted to know how TGFβ signaling is inhibited by PBMCsec.
TGFβ is one of the most pleiotropic signaling molecules, and its interaction via the regulation of its release and activation by elastin was previously described [55]. TGFβ is secreted as inactive and bound to latent TGFβ binding proteins (LTBP1-4), together forming the large latent complex (LLC) [56]. The activation of TGFβ occurs via a tightly controlled process involving the cleavage of LTBPs or protease-independent activation via integrins [56,57]. We, therefore, wondered whether PBMCsec also regulates molecules indirectly involved in TGFβ activation. Surprisingly, we found that Ltbp4/LTBP4 was decreased by PBMCsec in both mouse and human experimental settings ( Figure 6F). Ltbp4/LTBP4 is involved in both elastogenesis and the regulation of TGFβ signaling [57,58], and an increase in Ltbp4 is associated with fibrosis in scleroderma via TGF-β/SMAD signaling [59]. Additionally, we found that the expression of integrin subunits beta 1 and beta 5 (Itgb/ITGB 1/5) was also decreased upon PBMCsec treatment ( Figure 6G,H). As both participate in the activation of TGFβ [56], these data indicate that their downregulation might indirectly contribute to the reduction in TGFβ-mediated fibrotic effects. Finally, we investigated whether PBMCsec contains endogenous elastase inhibitors that inhibit elastin breakdown and the release of TGFβ [60], further enhancing the anti-TGFβ feedback loop induced by PBMCsec. However, the elastase activity assay showed only a weak reduction in elastase activity after the addition of PBMCsec ( Figure 6H). We, therefore, propose a multi-effect model for the attenuation of fibrosis with PBMCsec ( Figure 6J): PBMCsec directly inhibits TGFβ1-mediated myoFB differentiation, but not via canonical signaling. PBMCsec attenuates the expression of numerous matrix genes and significantly reduces elastin secretion. PBMCsec prevents elastin breakdown, shows mild elastase inhibition, and interferes with TGFβ-induced gene expression ( Figure 6J). Discussion For patients, scars, particularly hypertrophic scars, not only represent an aesthetic problem but often lead to significantly reduced quality of life due to associated limitations of movement, itching, and pain [8]. As the treatment of hypertrophic scars remains difficult, the development of new therapeutic options is of particular interest. Here, we present a multi-model approach to assessing the effects of a secretome-based drug (PBMCsec) on scar formation and treatment in mice and humans. The strong tissue-regenerative activity of PBMCsec has already been demonstrated not only in cutaneous wounds [25][26][27] but also in various other organs, such as focal brain ischemia [31], spinal cord injury [32], and infarcted myocardium [33]. Interestingly, in all organs mentioned above, PBMCsec significantly reduced the size of the damaged areas and reduced the developing fibrotic tissue, suggesting its potential use in the treatment of cutaneous scars [27,33], In this study, we compared the effect of PBMCsec on scar formation in mice in vivo and in human ex vivo explant cultures. In mice, we performed the intradermal injection of the secretome into mature scars and applied it topically during wound healing and scar formation. Only a few studies have investigated the effects of paracrine factors on cutaneous scarring using cell secretomes from different stem cell types, including umbilical cord stem cells, adipose tissue-derived stem cells, or mesenchymal stem cells [61][62][63]. Arjunan et al. and Liu et al. 
showed that conditioned medium from umbilical cord Wharton's jelly stem cells or adipose tissue-derived stem cells reduced the activation and growth of keloidal fibroblasts in in vitro and in vivo keloid models [62]. In addition, Hu et al. suggested a combined treatment of conditioned medium from MSC and botulinum toxin for the treatment of hypertrophic scars [62]. However, in-depth analyses of the underlying mechanisms are still lacking. Thus, our study is the first to use scRNAseq to unravel mechanisms important for improved scar formation after the application of a cell secretome. Generally, scRNAseq generates large datasets with tens of thousands of cells, which helps to smooth out donor and technical variances. Therefore, low donor numbers, as used in our study, are widely acceptable [64][65][66]. In our mouse experiments, both application routes, topical and intradermal application, showed promising effects on scar formation and treatment. Of note, significantly more genes were regulated after the topical application of PBMCsec, suggesting higher efficacy after wound application than after injection. However, the improved wound healing process per se after PBMCsec application might already be decisive for better scar quality. Therefore, a direct comparison of the two application routes is difficult and requires further experiments where PBMCsec is topically applied to already existing scars. Furthermore, other potential treatment options, such as application after laser treatment [67], microneedling [68,69], or in combination with nanocarriers [68] should be tested in future experiments. Most importantly, and in line with the data on mouse scar formation, we also identified a significant anti-fibrotic effect of PBMCsec on human mature hypertrophic scars in explant cultures. In fact, the treatment of scars with PBMCsec in mice and humans showed high similarities. In both species, we found the strongest transcriptome alterations in FB clusters, specifically in genes of the so-called matrisome, which includes collagens, proteoglycans, glycoproteins, and ECM regulators [22][23][24]. The matrisome, which was recently defined for large-scale in silico analyses, provides a comprehensive overview of the components of the ECM [51,70,71]. Although several characteristics of ECM alterations in (hypertrophic) scars have already been described [72], our study provides the first large dataset analyzing changes in the entire matrisome in mice and humans during wound healing and scar formation. These highly valuable datasets could be the basis for many future studies on the pathophysiology of wound healing and scar formation, as well as on the effects of secretome-based scar treatment. In the present study, we further focused on elastin, which was similarly downregulated by PBMCsec under all conditions and in all species investigated. Elastin fibril sequences interact with microfibrils and bind to cell surface receptors [73]. Elastin is extremely durable and has a half-life of~70 years [73,74]. While intact elastin is inert and insoluble, it can be degraded by a plethora of elastases [74], including MMPs, aspartic proteases, serine proteases, and cysteine proteases [74]. In our ex vivo assays, we found strong degradation of elastic fibers in human skin induced by TGFβ, which was completely inhibited by PBMCsec, suggesting an elastase-inhibiting effect of PBMCsec. 
Intriguingly, this effect of TGFβ on elastic fibers appears to be counterintuitive, and we did not find any other study describing this phenomenon. The interaction of TGFβ and elastin is complex. TGFβ is generally known to induce elastogenesis [47]), stabilize elastin mRNA [47,48]), and increase elastin secretion ( Figure 5), which is most likely due to the post-transcriptional control of elastin [47]. This is in line with our in vitro findings, as we could show the strong upregulation of elastin production in fibroblasts treated with TGFβ. Interestingly, this upregulation was also significantly inhibited by PBMCsec at the mRNA and protein level. So far, we cannot offer an explanation for this phenomenon. It is tempting to speculate that the proteolytic breakdown of elastin triggers the de novo synthesis of elastin. Furthermore, whether the TGFβ-induced overproduction of elastin also leads to the assembly of new functional elastic fibers is still not fully understood. Therefore, the mechanisms by which PBMCsec inhibits elastin breakdown need further investigations. Interestingly, our in vitro elastase assay showed only weak anti-elastase activity of PBMCsec, suggesting that either the specific enzyme inhibited by PBMCsec is not detected by the in vitro assay or PBMCsec leads to the induction of endogenous protease inhibitors. In line with the second hypothesis, Copic et al. recently showed that PBMCsec is indeed able to induce the production of SERPINB2, a serine protease inhibitor, in human mononuclear cells [75]. Furthermore, with scRNAseq, we showed that some elastase inhibitors, such as PI3 (peptidase inhibitor 3) and SLPI (secretory leukocyte protease inhibitor), were significantly upregulated by PBMCsec in FBs in scars ( Figure 4H). Despite having been well investigated for their beneficial effects in cystic fibrosis [76], these elastase inhibitors have been hardly assessed for their role in cutaneous scar formation so far. Further, more sophisticated experiments are needed to fully address the role of these enzyme inhibitors in scar formation. Aside from elastin, the only other gene consistently regulated by PBMCsec in all three scRNAseq experimental approaches was TXNIP (Thioredoxin interacting protein). TXNIP is critically involved in the regulation of reactive oxygen species (ROS) and cellular oxidative stress [77] and was shown to contribute to disturbed wound healing under ischemic conditions [78]. With regard to scar formation, TXNIP was shown to be elevated in a murine model of pulmonary fibrosis, and the inhibition of TXNIP in this model led to the reduction in ROS and myoFB differentiation [79]. The exact role of TXNIP in skin pathologies and in scars, however, has been scarcely investigated [80]. Our finding that the downregulation of TXNIP was conserved across all our experimental approaches suggests that PBMCsec-induced TXNIP downregulation might be an important mechanism contributing to the anti-fibrotic action of PBMCsec. However, further studies are needed to fully decipher the mechanism of TXNIP-regulation as well as its impact on cutaneous scar formation. Interestingly, PBMCsec also prevented FB activation and myoFB differentiation. In line with our results, previous studies showed that treatment of FBs with conditioned medium of mesenchymal or pluripotent stem cells was able to reduce myoFB differentiation [81,82]. In contrast to these studies, we were not able to identify a direct inhibitory action of PBMCsec on canonical TGFβ/Smad signaling [82]. 
However, TGFβ has been shown to also induce fibrosis via non-canonical (non-SMAD) signaling pathways [83], and blocking noncanonical signaling prevents pro-fibrotic phenotypes [84]. Possible non-canonical pathways might include glycogen synthase kinase-3β (GSK-3β) [85], a pathway we previously found to be regulated upon non-SMAD TGFβ-mediated abolishment of myoFB differentiation [7]. Hitherto, only few secreted molecules inhibiting non-canonical TGF-signaling have been described. Del-1 (Developmentally-Regulated Endothelial Cell Locus 1 Protein) was shown to inhibit TGFβ and attenuate fibrosis by suppressing the α v integrin-mediated activation of TGFβ [86]. In addition, several proteins, such as fibroblast growth factor (FGF), epidermal growth factor (IGF), interferon gamma, and IL-10, all of which are present in PBMCsec, are known to inhibit myoFB differentiation [6]. To identify the exact pathway of TGFβ inhibition induced by PBMCsec, a detailed proteomic approach and the assessment of multiple pathways will be necessary in the future. As previously discussed [7], there are some limitations to the current study that need to be considered. There are significant differences between the wound healing mechanisms of mice and humans. While mice mainly rely on the contraction of the subcutaneous panniculus carnosus, human wound healing is characterized by the deposition of extracellular matrix (ECM) followed by re-epithelialization [86,87]. However, recent research has shown that both processes contribute to a similar extent in mice [88]. Therefore, mouse wound models may be considered a valid model for human wound healing. However, it is important to note that the current mouse models of scarring do not fully replicate the pathological fibrotic state observed in human hypertrophic scars. Although mouse models for hypertrophic scars have been developed, such as subcutaneous bleomycin injection [89] and tight-skin mice [90], the comparability of the transcriptome of these models with human hypertrophic scars is not yet fully understood. In conclusion, we provide an extensive study with multiple experimental approaches and ample scRNAseq data. Comprehensive analyses suggest a solid anti-fibrotic, ECM reducing, and myoFB-inhibiting effect of PBMCsec. We identified the prevention of elastin breakdown as a putative major underlying mechanism of PBMCsec-mediated scar attenuation. We thus propose future clinical assessment of PBMCsec to attenuate skin scarring during wound healing and to treat already existing mature scars [37]. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The scRNAseq data generated in this study have been deposited in the NCBI GEO database under accession numbers GSE156326 and GSE202544. The raw sequencing data are protected and are not available due to data privacy laws. If raw sequencing data are absolutely necessary for the replication or extension of our research, they will be made available upon request to the corresponding author in a 2-week timeframe. All other relevant data supporting the key findings of this study are available within the article and its Supplementary Information files or from the corresponding author upon reasonable request.
2022-12-03T14:12:29.931Z
2022-12-01T00:00:00.000
{ "year": 2023, "sha1": "0b37fa7727ee7de6f6232aa1c19f83b3cbe0b10a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4923/15/4/1065/pdf?version=1679738707", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77b2a48fc060583b26ae6410141169f8468f0983", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
226984840
pes2o/s2orc
v3-fos-license
DLNR-SIQA: Deep Learning-Based No-Reference Stitched Image Quality Assessment

Due to recent advancements in virtual reality (VR) and augmented reality (AR), the demand for high-quality immersive contents is a primary concern for production companies and consumers. Similarly, the recent record-breaking performance of deep learning in various domains of artificial intelligence has drawn the attention of researchers to different fields of computer vision. To ensure the quality of immersive media contents using these advanced deep learning technologies, several learning-based Stitched Image Quality Assessment methods have been proposed with reasonable performance. However, these methods are unable to localize, segment, and extract the stitching errors in panoramic images. Further, these methods used computationally complex procedures for the quality assessment of panoramic images. With these motivations, in this paper, we propose a novel three-fold Deep Learning-based No-Reference Stitched Image Quality Assessment (DLNR-SIQA) approach to evaluate the quality of immersive contents. In the first fold, we fine-tuned the state-of-the-art Mask R-CNN (Region-based Convolutional Neural Network) on cropped images with various manually annotated stitching errors from two publicly available datasets. In the second fold, we segment and localize the various stitching errors present in the immersive contents. Finally, based on the distorted regions present in the immersive contents, we measure the overall quality of the stitched images. Unlike existing methods that only measure the quality of the images using deep features, our proposed method can efficiently segment and localize stitching errors and estimate the image quality by investigating the segmented regions. We also carried out extensive qualitative and quantitative comparisons with full-reference image quality assessment (FR-IQA) and no-reference image quality assessment (NR-IQA) methods on two publicly available datasets, where the proposed system outperformed the existing state-of-the-art techniques.

Introduction

The recent rapid development of the field of virtual reality (VR) [1] has gained immense attention from researchers around the globe who have contributed to the VR community with new ideas and algorithms. These advancements in VR technologies have significantly advanced simulation and interaction techniques for a variety of tasks, including realistic battlefield simulations for military training [2], virtual assistance in production sectors [3], and the enhancement of immersive and interactive user experience via advanced user interfaces. However, the performance of these advancements depends heavily on the quality of the immersive contents that enable users to view VR contents by moving freely inside the virtual world. These immersive contents are usually obtained by stitching multiple images captured through different cameras with varying viewpoints and overlapping gaps, and they therefore suffer from various stitching errors [4,5]. One of the key advantages of the immersive content experience is the wide field of view (FoV) perception, created with the help of panoramic images, where a single wide-angle stitched image is produced from multiple smaller-viewpoint images captured via various cameras [6,7]. The image stitching pipeline involves two main steps: geometric alignment and photometric correction.
The geometric alignment step computes the homography between adjacent images and performs image alignment based on the computed homography, while the photometric correction step is responsible for the color correction near the stitching region. Stitching errors caused by geometric alignment are primarily due to the inaccurate measurement of the homographic transformation parameters, which results in commonly observed stitching artifacts, including parallax, blending, and blur errors, as shown in Figure 1, where the error-specific regions are highlighted with red bounding boxes. In order to avoid such erroneous panoramic contents, the perceptual quality of the generated panoramic image must be assessed, and error-free images must be selected for high-quality immersive content generation. However, the quality assessment of panoramic contents based on these stitching errors is a very challenging task, especially when a single panoramic image contains numerous stitching errors. Each stitching error has its own impact on the quality of the panoramic/stitched image. For instance, parallax distortion disturbs pixel coordination, blending distortion introduces color variance near the stitching boundaries, and blur distortion reduces the visibility of panoramic contents. To better estimate the perceptual quality of the stitched image, these stitching errors must be localized and analyzed based on their geometrical and photometrical properties. The geometric errors mostly occur due to inaccurate estimation of the homography between two images, while the photometric errors are usually caused by dissimilar lighting variations between two adjacent images. Generally, the area of image quality assessment (IQA) has been actively researched in the last two decades, where a variety of methods have been presented to assess image quality. The early IQA approaches were focused on the quality of 2D images with different visual artifacts, including Gaussian blur (BLUR) [8], JPEG compression (JPEG) [9], JPEG2000 compression (JP2k) [10], white noise (WN) [11], and fast fading (FF) [12]. These quality reduction artifacts have been assessed with both image fidelity metrics and learnable IQA methods. As for image fidelity metric approaches, the structural similarity index (SSIM) [13], the feature similarity index (FSIM) [14], the peak signal-to-noise ratio (PSNR) [15], and the mean square error (MSE) [16] are used to measure the similarity between an original image and a distorted image. Besides these conventional image fidelity metrics, several learnable IQA models have been proposed [17][18][19][20] to predict image quality. For instance, Yan et al. [17] presented a multi-task CNN (Convolutional Neural Network) model to estimate the quality of an input image without any reference image. In their proposed model, they first computed natural scene statistics (NSS) and then predicted the image quality. Similarly, Liu et al. [19] proposed a deep-driven IQA method that focused on spatial dependency in the perceptual quality of an observed distorted image. Recently, Kim et al. [20] presented a receptive field generation-oriented IQA approach that performs image quality estimation in two steps. In the first step, receptive fields are generated from the given distorted image.
Next, the generated receptive fields and visual sensitivity maps are utilized to weight the visual quality of the observed image. Despite providing promising performance in terms of quality estimation, these methods are still limited to 2D IQA tasks and are unable to capture the stitching artifacts in panoramic images, since stitching artifacts are more complex and eye-catching than conventional artifacts in 2D images and greatly reduce the overall quality of a stitched image. To specifically assess the visual quality of panoramic images, numerous stitched image quality assessment (SIQA) methods have been presented in the past decade. Within the stitching literature, a number of researchers have focused on the quality assessment of stitched images either by using conventional handcrafted-feature-based methods [21,22] or by making subjective comparisons [23][24][25]. Broadly, the area of stitched image quality assessment (SIQA) differs from classical IQA in two respects. Firstly, panoramic stitched images mostly suffer from geometric errors such as shape breakage and object displacement, which classical IQA techniques are unable to assess. Secondly, unlike classical image distortions, stitching errors are local distortions, including color seams near the stitching boundary, blur, and parallax error. The subjective SIQA methods [23][24][25][26][27] involve user studies where users are provided with a set of images and are asked to assign a quality score to each image. The participants analyze the given panoramic image in detail in a head-mounted display (HMD) and assign a quality score to each image based on the visual quality of the panoramic contents. Although subjective SIQA methods are very accurate in terms of quality prediction, these methods are expensive, time consuming, and difficult to use in practical applications. In addition, these methods have poor consistency because user opinion about image quality varies from person to person. On the other hand, objective SIQA methods [22,[28][29][30] automatically estimate and predict the perceptual quality of given images using computer vision algorithms. These objective SIQA approaches take stitched images as input and extract pixel-level information near the stitched regions. The extracted features can be used to predict the quality of stitched images. The objective SIQA methods are further classified into two classes: FR-SIQA (Full-Reference SIQA) and NR-SIQA (No-Reference SIQA) methods. The FR-SIQA methods usually take two input images: (1) a distorted stitched image and (2) a reference image, where the distortion-free reference image provides additional detail for evaluating the perceptual quality of the distorted stitched image. In contrast, NR-SIQA methods predict the quality of stitched images without any reference image. Instead of computing the similarity between a distorted stitched image and a distortion-free reference image, NR-SIQA methods exploit different image properties, namely chrominance, structural consistency, histogram statistics of the stitched image, and the visibility of panoramic contents. The following subsections present a detailed literature review of state-of-the-art methods in the FR-SIQA and NR-SIQA domains, respectively.
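As a brief illustration of the conventional full-reference fidelity comparison discussed above (MSE, PSNR, and SSIM computed between a reference image and a distorted stitched image), a minimal Python sketch is given below; the file names are hypothetical placeholders and this is not code from any of the works cited.

```python
# Illustrative computation of the full-reference fidelity metrics named in
# the text (MSE, PSNR, SSIM); both images are assumed to have the same size.
from skimage import io
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

reference = io.imread("reference_pano.png")   # hypothetical file names
distorted = io.imread("stitched_pano.png")

mse = mean_squared_error(reference, distorted)
psnr = peak_signal_noise_ratio(reference, distorted, data_range=255)
ssim = structural_similarity(reference, distorted, channel_axis=-1,
                             data_range=255)
print(f"MSE={mse:.2f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```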
Full-Reference Stitched Image Quality Assessment

The early objective SIQA work was based on FR-SIQA methods, which estimated the perceptual quality of the given stitched images using image fidelity metrics in the presence of distortion-free reference images. For example, Yang et al. [31] proposed a content-aware SIQA method that captured the ghosting and structure inconsistency errors in panoramic images. Their proposed technique estimated the perceptual quality of the given stitched image in two steps. First, they estimated the local variance of the optical flow field for reference images and distorted stitched images. In the second step, they computed the intensity and chrominance gradients of both pairs of images in highly structured patches. Finally, the outputs of both error estimation modules (ghosting and structure inconsistency) are combined, and the weighted perceptual quality score is predicted. To form a unified SIQA metric, they combined these measures using an optimally weighted linear combination. Zhou et al. [32] presented a two-couple feature point matching-based approach for the quality estimation of urban scenery stitched images. They used image fidelity metrics, including SSIM and high-frequency information SSIM (HFI-SSIM), to estimate the difference between distorted stitched images and reference images. Similarly, Li et al. [21] proposed an omnidirectional image quality assessment framework that estimates the perceptual quality of omnidirectional contents. While estimating the quality of the stitched image, they used the 0° and 180° regions as targets and the 90° and 270° regions as cross-references. The target stitched regions are then assessed by exploiting the relationship between target and reference stitched regions using perceptual hash, sparse reconstruction, and histogram statistics. Yan et al. [22] proposed a perceptual quality estimation metric for stereoscopic stitched images that captured common stitching errors, including color distortion, structure inconsistency, ghost distortion, and disparity distortion. For quality estimation in the presence of these distortions, they used information loss, point distance, color difference coefficient, matched line inclination degree, and disparity variance. Although these FR-SIQA methods are fast and accurate, it is usually difficult and sometimes impossible to have panoramic reference images in advance. Due to the requirement of huge amounts of reference image data, these methods are of limited use for the quality assessment of panoramic images and are unable to assess the quality of a panoramic image without a reference image.

No-Reference Stitched Image Quality Assessment

Recently, several NR-SIQA methods [33][34][35][36][37] have been proposed to automate the SIQA process. These methods estimate the perceptual quality of a given stitched image without using any reference stimulus information. For example, in [33], the authors introduced a convolutional sparse coding (CSC) technique to learn the pattern of stitching-relevant distortions in a target image. They used different sets of convolution filters to localize the distortion regions and later quantified the compound effect of these localized distortions using trained kernels. Madhusudana et al. [34] presented a steerable pyramid decomposition framework that estimated the perceptual quality of stitched images. Their proposed method used a Gaussian mixture model and bivariate statistics to capture the ghosting, blur, and structure inconsistency in panoramic images.
However, the performance of their system is limited for color image distortions. To evaluate the visual quality of omnidirectional images, Li et al. [35] proposed an attention-driven omnidirectional IQA framework. Their work is focused on the perceptual quality of stitching regions and attention regions, where they used both local and global metrics to inspect those regions for stitching artifacts, color distortion, and the resolution of stitched regions. Sun et al. [36] presented a learning-based framework for no-reference 360° IQA using a multi-channel CNN. Their proposed method consists of two individual modules, a multi-channel CNN architecture followed by a regressor, where the CNN architecture extracts discriminative features from the intermediate layer and the image quality regressor processes the extracted features and predicts the quality score. Xu et al. [37] presented a learning-based approach called Viewport-oriented Graph Convolutional Neural Network (VGCN) to estimate the perceptual quality of omnidirectional images. Inspired by the human vision system (HVS), first a spatial viewport graph was created to select viewports with higher probabilities. Next, they used a graph convolutional network to perform reasoning on their proposed viewport selection graph. Finally, they obtained the global quality of omnidirectional images using the selected viewports and the viewing experience of the user. These NR-SIQA methods are more realistic than FR-SIQA approaches and can predict the perceptual quality of panoramic contents. However, these methods are not consistent across stitching error types: some are focused on geometric distortions, while other studies examined photometric errors. In addition, the methods of [33,34,37] used computationally expensive procedures to capture stitching-specific distortions and are unable to localize specific distortions. The localization of stitching-relevant distortions can greatly improve SIQA performance and allows the weighted magnitude of each distortion to be computed. To address these issues in the existing SIQA methods, we introduce a learning-based NR-SIQA framework that first segments stitching distortions (i.e., parallax, blending, and blur) and then extracts the specific distorted regions from the panoramic image. The proposed framework estimates the perceptual quality of stitched images using the extracted distorted regions. To this end, the main contributions of this paper can be summarized as follows:

• Visually assessing the quality of 360° images is a very challenging problem, where the existing SIQA approaches use deep features and a regressor model to find only the final score of the immersive images. To address this problem, we propose a novel three-fold DLNR-SIQA framework to localize stitching errors and recognize the type of errors present in the 360° images.

• To localize and find the type of stitching errors present in the panoramic 360° images, we fine-tuned a Mask R-CNN [38] network on a publicly available Google Street View dataset. In the dataset, various types of stitching errors are manually annotated, and the Mask R-CNN is retrained on the annotated data to localize and classify the stitching distortions.

• We develop a post-surgery technique that efficiently extracts specific distorted regions from the panoramic contents.
The extracted information is then further analyzed to assess the essential characteristics of each distorted region, for example, the number of distorted pixels, which helps the image quality estimation module to measure the optimal perceptual quality. Further, we conduct extensive experiments on two benchmark SIQA datasets, where the obtained quantitative and qualitative results demonstrate the effectiveness of the proposed DLNR-SIQA framework against the existing SIQA methods. The rest of this article is arranged as follows. Section 2 explains the major components of the proposed framework. A detailed experimental evaluation and comparative analysis of our framework is given in Section 3. Finally, this article is concluded in Section 4 with possible future directions.

Proposed Framework

To the best of our knowledge, there is no single SIQA method that has examined the characteristics of individual stitching errors. With these motivations, we propose a learning-based NR-SIQA framework in this paper that first analyzes the individual stitching errors and then obtains a weighted quality score by fusing the ratio of all errors. For better understanding, the proposed framework is divided into three main phases: (1) fine-tuning Mask R-CNN, (2) localization of distorted regions, and (3) image quality estimation. The proposed framework, along with its technical components, is illustrated in Figure 2.

Figure 2. A detailed overview of our proposed DLNR-SIQA framework for stitching distortion localization and image quality estimation, which involves three main steps: training, distortion region localization, and image quality estimation. Step 1: the training procedure of the Mask R-CNN is demonstrated. Step 2 involves the segmentation of stitching distortions, where an input panoramic image is first converted into a set of patches and individual patches are forwarded to the fine-tuned Mask R-CNN. The output of the distortion region localization phase is a segmented distorted panoramic image, which is then forwarded to Step 3, where each segmented region is investigated individually and the perceptual quality is estimated by relating the total distorted area to the total area of the panoramic image.

Fine-Tuning Mask R-CNN for Stitching Distortion Segmentation

Lately, numerous CNN-assisted approaches have been proposed for a variety of applications, including activity recognition [39,40], video summarization [41,42], autonomous vehicles [43,44], and disaster management applications [45,46]. Considering the generalization and strength of CNNs in various research areas, in this paper, we propose a Mask R-CNN-based solution to segment the distorted regions in panoramic stitched images. A detailed overview of the Mask R-CNN architecture is given in Section 2.1.1, while the model training and loss function are explained in Section 2.1.2.

Overview of Mask R-CNN Architecture

Mask R-CNN was originally introduced as a generic framework for object localization and object instance segmentation in natural images [38]. The standard Mask R-CNN has been derived from the Faster R-CNN [47] architecture by adding a new branch, called a mask branch, in parallel with the bounding box prediction and classification branch at the tail of the network. The extended Mask R-CNN has the ability to detect, segment, and generate high-quality masks for each segmented region. Due to its easy adaptation, Mask R-CNN is used for a variety of computer vision tasks and has obtained reasonable results.
The Mask R-CNN architecture consists of three major components: a backbone feature pyramid network (FPN), a region proposal network, and ROI selection followed by bounding box recognition and mask prediction modules. The selection of an efficient backbone network for the feature extraction phase is a challenging step, where the complexity of the network is greatly related to the behavior of the training data. We are targeting stitching distortions in panoramic stitched images, and the structures of these distortions have irregular boundaries that require a robust feature representation network. Having a deep hierarchical nature with multi-scale characteristics, a residual neural network (ResNet) [48] is the best candidate for the backbone feature extractor. Our proposed method adopts both ResNet-50 and ResNet-101 in individual training stages and evaluates the performance of Mask R-CNN with both architectures for training and testing, respectively. The backbone CNN architecture takes a distorted stitched patch as an input and extracts patch-level discriminative features at different scales.
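As an illustrative, non-authoritative sketch of how such a ResNet-FPN backbone and its task-specific heads can be adapted to the three stitching-distortion classes named earlier (parallax, blending, and blur, plus background), the torchvision API could be used as follows; this is only a sketch under those assumptions and not the implementation used in this work.

```python
# Illustrative PyTorch/torchvision setup of a Mask R-CNN with a ResNet-50
# FPN backbone, with its heads replaced for three distortion classes plus
# background (an assumed class mapping, for illustration only).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 4  # background + parallax + blending + blur (assumption)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification/regression head for the new class count.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head accordingly.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

# The model can then be trained on annotated 256x256 distortion patches with
# the standard torchvision detection loop (joint class/box/mask loss).
```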
The extracted feature maps have shaded representations of the distorted regions, which are then forwarded to the Region Proposal Network (RPN) module. The RPN module scans the input feature maps with a sliding window to capture the ROIs containing stitching distortion. In the initial stages, the RPN roughly generates a cluster of anchors (regions covered by sliding windows) with different aspect ratios and sizes. The roughly estimated anchors are then inspected by the RPN regressor, where the best candidate anchors with the highest foreground scores are selected. After the region proposal process, the selected anchors are propagated to the ROI align layer, which adjusts the alignment and spatial dimensionality of all selected anchors. Finally, the processed anchors are forwarded to two different submodules: (1) a bounding box recognition (prediction and classification) module and (2) a mask generation module. The bounding box recognition module processes the input features using fully connected layers and forwards the processed features to the regression and classification heads. The regression head predicts the final coordinates of the bounding box for each ROI, while the classification head classifies the target category inside the ROI area. On the other hand, instead of fully connected layers, the mask generation module contains a CNN called the mask branch. The mask branch generates a binary mask from the ROI-aligned feature maps. The overall flow of a typical Mask R-CNN is shown in Figure 2 (Training module).

Model Training and Loss Function
To train the network, we used the existing open-source implementation of Mask R-CNN by Matterport, Inc. [49]. The original network was trained on the benchmark common objects in context (COCO) dataset [50], widely used for object detection, object instance segmentation, and super-pixel stuff segmentation. To fine-tune the Mask R-CNN on our dataset, we selected distorted stitched images from the Google Street View dataset [51] and the LS2N IPI (Image Perception Interaction) Stitched Patched dataset [33]. We collected a total of 1370 distorted patches from both datasets and divided them into training and validation sets with a split ratio of 70% and 30%, respectively. To meet the input dimensionality requirement of the network, all the images were cropped to an m × n × c image size, where m = 256, n = 256 and c = 3. Before training, we manually annotated both the training and validation data, where we selected the exact coordinates of the stitching distortions using an online annotation tool called the VGG (Visual Geometry Group) Image Annotator (VIA). Our proposed framework was trained with two different backbone CNN architectures, ResNet-50 and ResNet-101. During training, Mask R-CNN uses a joint loss function for distortion classification, bounding box regression, and mask prediction. Mathematically, the joint loss function can be expressed as follows:

L = L_class + L_bbox + L_mask (1)

Here, L_class is the classification loss, L_bbox is the bounding box regression loss, and L_mask indicates the mask prediction loss. The classification loss can be computed by:

L_class = −(1/η_class) Σ_i [ p_i* log(p_i) + (1 − p_i*) log(1 − p_i) ] (2)

Here, η_class indicates the number of classes, and p_i is the predicted probability of the ith ROI being positive (foreground) or negative (background). The term p_i* is the ground truth probability of the ith ROI: the ground truth value for a positive ROI (foreground) is 1, while for a negative ROI (background) the ground truth value is 0.
The computation of the bounding box regression loss can be expressed as follows:

L_bbox = (1/η_nop) Σ_i p_i* · R(t_i − t_i') (3)

where η_nop indicates the total number of pixels in the observed feature map, and R is the smooth L1 loss function commonly used for bounding box regression because of its lower sensitivity to outlier regions. Mathematically, the R function can be expressed as follows:

R(x) = 0.5 x², if |x| < 1; |x| − 0.5, otherwise

Here, t_i holds the difference between the four coordinates (horizontal coordinate, vertical coordinate, width, and height) of the predicted anchor/bounding box and the ground truth bounding box, whereas t_i' represents the difference between the ground truth bounding box and the positive bounding boxes. Furthermore, the mask prediction loss can be computed by:

L_mask = −(1/m²) Σ_(i,j) [ y_(i,j) log y^k_(i,j) + (1 − y_(i,j)) log(1 − y^k_(i,j)) ] (4)

Here, m² is the size of the m × m distorted region, y_(i,j) is the ground truth label of the pixel at the (i,j) location in the distorted region, and y^k_(i,j) is the predicted label of the pixel at the (i,j) location for the kth class. For instance, y^0_(i,j) = 1 indicates the misclassification of a background pixel as the foreground class, while y^1_(i,j) = 1 represents the correct classification of a foreground pixel. Similarly, y^0_(i,j) = 0 indicates the correct classification of a background pixel, while y^1_(i,j) = 0 represents the misclassification of a foreground pixel.
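To make these loss terms concrete, the following NumPy sketch re-implements the three components described above (binary cross-entropy over ROI labels, smooth L1 over box offsets for positive ROIs, and per-pixel binary cross-entropy over the predicted mask). It is an illustration written by us, not the Matterport training code.

```python
# Illustrative NumPy versions of the Mask R-CNN loss terms described above.
import numpy as np

def classification_loss(p_pred, p_true, eps=1e-7):
    # Binary cross-entropy between predicted ROI foreground probabilities and labels.
    p_pred = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(p_true * np.log(p_pred) + (1 - p_true) * np.log(1 - p_pred))

def smooth_l1(x):
    # R(x): 0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise (less sensitive to outliers).
    absx = np.abs(x)
    return np.where(absx < 1.0, 0.5 * x ** 2, absx - 0.5)

def bbox_loss(t_pred, t_true, positive):
    # Smooth L1 over the four box offsets, counted only for positive (foreground) ROIs.
    per_roi = smooth_l1(t_pred - t_true).sum(axis=1)
    return float(np.sum(positive * per_roi) / max(positive.sum(), 1))

def mask_loss(y_pred, y_true, eps=1e-7):
    # Average per-pixel binary cross-entropy over an m x m mask of the target class.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Toy example: four ROIs and a 28x28 mask.
rng = np.random.default_rng(0)
p_true = np.array([1.0, 0.0, 1.0, 0.0])
p_pred = rng.random(4)
t_pred, t_true = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
m_true = (rng.random((28, 28)) > 0.5).astype(float)
m_pred = rng.random((28, 28))
joint = classification_loss(p_pred, p_true) + bbox_loss(t_pred, t_true, p_true) + mask_loss(m_pred, m_true)
print(joint)
```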
Distorted Region Segmentation and Mask Generation
In this phase, we deployed the fine-tuned Mask R-CNN for segmenting distortions in stitched images. Panoramic stitched images have a wider FOV compared with normal 2D images and cannot be input to the proposed network at their original resolution. Therefore, before forwarding to the network, we fragmented the high-resolution panoramic image into 128 patches with a dimensionality of m × n × c, where m = 256, n = 256 and c = 3. The fine-tuned Mask R-CNN takes a panoramic stitched image as a batch of patches, where each patch is processed as an individual image. During distortion segmentation, the trained network traverses each patch for stitching distortion and captures the locations of distorted regions. The captured locations of distorted regions are then refined by the convolutional layers of the mask branch, which generates a binary mask for each captured distorted region. Finally, all processed patches are merged together to form a final segmented image, where each distorted region is specified by a separate binary mask.

Image Quality Estimation
The image quality estimation module is responsible for the perceptual quality estimation of the segmented stitched image. The proposed mechanism of image quality estimation involves three steps: region fragmentation, extraction of the distorted regions from the original image using the fragmented regions, and estimation of the average distorted area in the stitched image. Each step of the proposed image quality estimation mechanism is explained in Algorithm 1. The first step fragments the binary mask map of a received segmented image into multiple mask maps; fragmentation is performed so that each fragmented mask map contains the mask of an individual distorted region. The fragmentation process allows the proposed system to investigate each distorted region individually in a separate mask map, thereby making it easier for the next module to process the fragmented mask maps in a more efficient way. The second step extracts the distorted regions from the original stitched image using the fragmented mask maps. During the region extraction phase, we first estimate the contour of each distorted region using the corresponding mask. The computed contours are then used to extract the distorted regions from the original image. In the last step, the extracted regions are forwarded to the average distorted area estimation module, which calculates the area of each individual distorted region.

Algorithm 1: Quality Estimation of Stitched Image
Input: S_i = segmented image
Output: quality score Q_s
Step 1: Read the segmented image and perform region fragmentation using the binary masks: Fragment_i = image_fregmentation(S_i)
Step 2: Extract the distorted regions using the fragmented regions: Region_i = region_extraction(Fragment_i)
Step 3: Compute the pixel-wise ratio of the distortion-free image area.

The area of each extracted distorted region is computed one after another and added together. Finally, the target image quality score is obtained by dividing the total distorted area by the total area of the stitched image. Mathematically, the average distorted area estimation module can be expressed as:

Q_s = ( Σ_l Σ_i Σ_j R_l(i, j) ) / (W × H) (5)

Here, R_l is the lth region, i and j represent the ith row and jth column of a specific region, and W and H are the corresponding width and height of the patch.

Experimental Results and Discussion
In this section, we present a detailed experimental evaluation of the proposed framework, both quantitatively and qualitatively. From the quantitative perspective, we used different segmentation performance evaluation metrics, including Precision (P), Recall (R), Dice (DSC), Jaccard Index (JI), Mean Pixel Accuracy (mPA), Mean Average Precision (mAP) and Mean Absolute Error (mAE). For qualitative evaluation, the obtained segmentation masks, distortion-specific regions and final segmented images are visually inspected. For experimental evaluation, we used two test sets: the patches test set (test set A) and the panoramic images test set (test set B), from the Google Street View Dataset [51] and the SUN360 Dataset [52], respectively. Test set A consists of 300 distorted stitched patches of size 256 × 256 × 3, while test set B comprises 160 panoramic images of size 4096 × 2048 × 3. During the segmentation process, each panoramic image is first divided into 128 patches, where we conducted a series of experiments on different patch sizes and chose the optimal size for patch extraction. The statistical details of both test sets are listed in Table 1, whereas representative samples of the patches test set and the panoramic images test set are depicted in Figures 3 and 4, respectively. Furthermore, we evaluated the performance of the fine-tuned Mask R-CNN with two different backbone architectures, i.e., ResNet-50 and ResNet-101.

Experimental Details
The proposed DLNR-SIQA framework was implemented using Python 3, TensorFlow and Keras. The training and experimental evaluation of our proposed framework were performed on a PC with the following hardware specifications: Nvidia GTX 1060 GPU (6 GB), 3.3 GHz processor, and 8 GB onboard memory. The proposed training strategy adopted two main modifications of the original implementation of Mask R-CNN [49]. (1) Rather than training the complete network from the very first layer, we froze the remaining layers and trained only the network head, starting from the weights already learned on the COCO dataset. (2) We modified the hyper-parameters for fine-tuning the Mask R-CNN on our custom stitched images dataset. The fine-tuned Mask R-CNN was trained for 50 epochs using an Adam optimizer with 100 training steps per epoch, a batch size of eight, a learning rate of 0.0001, and a momentum of 0.9.
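A sketch of how these two modifications and the reported training settings might be expressed with the Matterport implementation is shown below. This is our illustrative code, not the authors' scripts: the config and dataset names are placeholders, and the Matterport trainer uses SGD by default, so the Adam choice mentioned above is not reflected here.

```python
# Illustrative fine-tuning sketch based on the Matterport Mask R-CNN package (mrcnn);
# the config/dataset names are ours, and paths are placeholders.
from mrcnn.config import Config
from mrcnn import model as modellib

class StitchDistortionConfig(Config):
    NAME = "stitch_distortion"
    NUM_CLASSES = 1 + 1          # background + distorted region
    IMAGES_PER_GPU = 8           # batch size of eight, as reported above
    STEPS_PER_EPOCH = 100        # 100 training steps per epoch
    LEARNING_RATE = 0.0001
    LEARNING_MOMENTUM = 0.9
    IMAGE_MIN_DIM = 256
    IMAGE_MAX_DIM = 256

config = StitchDistortionConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="./logs")

# Start from COCO weights, skipping the head layers whose shapes depend on NUM_CLASSES.
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# train_dataset / val_dataset: mrcnn.utils.Dataset subclasses prepared from the
# VIA-annotated 256x256 patches (preparation not shown here).
model.train(train_dataset, val_dataset,
            learning_rate=config.LEARNING_RATE,
            epochs=50,
            layers="heads")   # train only the network heads; the backbone stays frozen
```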
Quantitative Evaluation
In this section, we present the quantitative evaluation of our proposed framework on two different types of images, i.e., stitched patches from test set A and panoramic images from test set B. The proposed quantitative evaluation protocol contains a set of metrics that are commonly used for estimating object instance segmentation performance, i.e., P, R, DSC, JI, mPA, mAP and mAE. The first two evaluation metrics, P and R, are used to evaluate the per-pixel binary mask prediction performance of our proposed DLNR-SIQA. Mathematically, P and R can be expressed by:

P = TP / (TP + FP)
R = TP / (TP + FN)

Here, TP represents the group of pixels that are foreground pixels and are also predicted as foreground pixels, FP represents the group of pixels that are background pixels but are predicted as foreground, and FN represents the group of pixels that are foreground pixels but are predicted as background, as shown in Figure 5. To estimate the similarity between the predicted segmentation mask and the ground truth mask, DSC and JI are used as evaluation metrics. Mathematically, DSC and JI can be expressed by:

DSC = 2 |GSM ∩ PSM| / ( |GSM| + |PSM| )
JI = |GSM ∩ PSM| / |GSM ∪ PSM|

Here, GSM and PSM are the ground truth and predicted segmentation masks, respectively. The values of both DSC and JI vary from 0 to 1, where high values indicate better segmentation performance and low values indicate worse segmentation performance. To estimate the percentage of correctly classified pixels per segmentation mask, we used a well-known segmentation evaluation metric called mPA. Mathematically, mPA can be expressed by:

mPA = (1/c) Σ_i ( P_ii / Σ_j P_ij )

Here, c is the number of classes including the background, P_ii is the total number of pixels that are correctly classified, and P_ij indicates misclassified pixels. Furthermore, we examined the performance of our method using the mAP and mAE metrics, which are commonly used for object detection and segmentation performance evaluation. Average precision (AP) represents the area under the precision-recall curve, and mAP is obtained by computing the mean of AP over the total number of classes/categories:

mAP = (1/n) Σ_i AP_i

Here, AP is the average precision, and n is the total number of classes. On the other hand, mAE calculates the absolute difference between the pixels of the predicted segmentation mask and the corresponding ground truth segmentation mask. Mathematically, mAE can be expressed by:

mAE = (1/n) Σ_i |y_i − x_i|

Here, y_i is the ith pixel of the predicted segmentation mask, x_i is the ith pixel of the ground truth segmentation mask, and n indicates the total number of predictions made by the network.
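All of these mask-level metrics can be computed directly from a pair of binary masks; the following NumPy sketch is our own illustration of that computation and is not the evaluation code used in the paper.

```python
# Illustrative computation of the mask-level metrics defined above.
import numpy as np

def mask_metrics(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # foreground predicted as foreground
    fp = np.logical_and(pred, ~gt).sum()         # background predicted as foreground
    fn = np.logical_and(~pred, gt).sum()         # foreground predicted as background
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dsc = 2 * tp / (pred.sum() + gt.sum() + eps)            # Dice coefficient
    ji = tp / (np.logical_or(pred, gt).sum() + eps)         # Jaccard index
    mae = np.mean(np.abs(pred.astype(float) - gt.astype(float)))
    return {"P": precision, "R": recall, "DSC": dsc, "JI": ji, "mAE": mae}

# Example with random 256x256 masks (the patch size used in the paper).
rng = np.random.default_rng(1)
pred_mask = rng.random((256, 256)) > 0.5
gt_mask = rng.random((256, 256)) > 0.5
print(mask_metrics(pred_mask, gt_mask))
```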
The results obtained from the quantitative evaluation for the stitched patches test set and the 360° image test set are depicted in Figure 6.

Qualitative Evaluation
Besides the quantitative evaluation, we further evaluated the qualitative performance of our proposed framework by visually inspecting the obtained segmentation masks and the final segmented distorted images. To assess the generalization of our stitching distortion segmentation framework, we validated the proposed framework with two different types of stitched images, i.e., stitched patches and panoramic 360° images. For stitched patches, we selected distorted stitched patches from the Google Street View dataset [51]. The proposed framework processes the input patches in several stages (including feature extraction, ROI selection, ROI alignment, box prediction-classification, and mask generation) and returns two outputs for each input, i.e., a binary mask and the final distortion-segmented image. The visual results obtained for stitched patches and full panoramic images are shown in Figures 7 and 8, where the first column represents the input images, the second column represents the generated mask maps, the third column represents the distortion-specific images, and the last column represents the final segmented images.
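Before turning to the extraction results, the per-region extraction and pixel-ratio scoring summarized in Algorithm 1 and Equation (5) can be sketched with OpenCV and NumPy as follows; this is an illustrative re-implementation written by us, and the function names are our own.

```python
# Illustrative sketch of distorted-region extraction and pixel-ratio quality scoring.
import cv2
import numpy as np

def extract_distorted_regions(image, binary_mask):
    """Split the binary mask into per-region masks and cut those regions out of the image."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for contour in contours:
        region_mask = np.zeros(binary_mask.shape, dtype=np.uint8)
        cv2.drawContours(region_mask, [contour], -1, color=1, thickness=-1)  # filled region
        regions.append(cv2.bitwise_and(image, image, mask=region_mask))
    return regions

def distorted_pixel_ratio(binary_mask):
    """Ratio of distorted pixels to total pixels; its complement is the distortion-free ratio."""
    return float(np.count_nonzero(binary_mask)) / binary_mask.size

# Example on a synthetic panorama-sized image with one marked region.
image = np.zeros((2048, 4096, 3), dtype=np.uint8)
mask = np.zeros((2048, 4096), dtype=np.uint8)
mask[100:300, 500:900] = 1
regions = extract_distorted_regions(image, mask)
print(len(regions), distorted_pixel_ratio(mask))
```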
Distorted Region Extraction and Quality Estimation
After the segmentation of stitching distortions in the panoramic images, we extracted the segmented distortion-specific regions from the panoramic images using their corresponding masks. For each distorted region, we used the binary mask pixel values and selected the segmented area pixels from the original RGB image, as shown in Figure 9. The extracted distorted regions were then used for the quality estimation of the panoramic images. The perceptual quality of a given panoramic image was estimated using our own quality estimation scheme, where we assessed the quality of panoramic images by computing the number of distorted pixels, the number of distortion-free pixels, the ratio of distortion, and the ratio of the distortion-free panoramic image. For this purpose, we first calculated the number of pixels of each distorted region. Next, the calculated pixels of all distorted regions were combined and divided by the total number of pixels of the original image, as given in Equation (5). The estimated perceptual quality of the stitched patches and 360° panoramic images is given in Table 2, where the second and third columns list the numbers of distorted and distortion-free pixels, while the fourth and fifth columns list the percentages of distorted and distortion-free image area. Using simple, pixel-level assessments of panoramic images, the proposed method provides an accurate estimation of perceptual quality, exploiting the disturbance of pixels only in distortion-specific regions rather than traversing the entire panoramic image.

Comparison of Our Proposed Method with State-of-the-Art SIQA Methods
In order to validate the effectiveness and generalization of the proposed DLNR-SIQA framework, we conducted a comparative analysis with existing deep learning-based FR-SIQA and NR-SIQA approaches [31,33,34]. The comparison was performed on two publicly available stitched image datasets: the SIQA [31,33] and the ISIQA (Indian Institute of Science Stitched Image QA) [34] datasets. The proposed framework is compared with the existing SIQA methods using three standard metrics: SRCC (Spearman's Rank Correlation Coefficient), PLCC (Pearson's Linear Correlation Coefficient), and RMSE (Root Mean Square Error). SRCC estimates prediction similarity, while PLCC and RMSE estimate prediction accuracy. Higher SRCC and PLCC values indicate better performance, whereas a lower RMSE reflects better performance. Since the proposed framework is trained on images with three types of stitching distortion (parallax, blending, and blur distortion), we selected only those images that contained the aforementioned stitching distortions. Moreover, for the performance assessment of our proposed method along with the other comparative methods, and to obtain a better correlation between MOS values and the objective scores predicted by the models, we followed the strategy of [37] by utilizing the five-parameter logistic function:

y = β1 ( 1/2 − 1/(1 + exp(β2 (x − β3))) ) + β4 x + β5

where the variable x indicates the prediction made by the objective models and the variable y represents the corresponding MOS score. Further, the variables β1 to β5 are the controllable parameters used to optimize the logistic function. To emphasize the effect of the logistic function, we evaluated the performance of our method with and without the use of the logistic function.
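Fitting such a mapping is a standard curve-fitting step; the sketch below shows how the five parameters might be estimated with SciPy on toy data. It is our illustration and assumes the common IQA form of the five-parameter logistic function given above.

```python
# Illustrative fit of the five-parameter logistic mapping between objective scores and MOS.
import numpy as np
from scipy.optimize import curve_fit

def logistic_5p(x, b1, b2, b3, b4, b5):
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

# Toy data standing in for (objective prediction, MOS) pairs.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = 4.0 * x + 0.5 + rng.normal(0.0, 0.1, size=x.size)

params, _ = curve_fit(logistic_5p, x, y, p0=[1.0, 1.0, 0.5, 1.0, 0.0], maxfev=10000)
y_mapped = logistic_5p(x, *params)   # mapped scores used when computing PLCC and RMSE
print(np.round(params, 3))
```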
The results obtained from the experimental study on both the SIQA and ISIQA datasets are listed in Table 3. From the results, it can be observed that, without the logistic optimization function, the proposed method outperformed [31] in terms of SRCC, PLCC, and RMSE on the SIQA dataset, while [33] obtained better scores on that dataset. However, with the use of the logistic optimization function, the proposed method outperformed the existing SIQA methods in terms of SRCC, PLCC, and RMSE.

Significance of Patch Size
During the experimental evaluation, we observed that parallax, blending, and blur errors are difficult to capture at the patch level near smooth regions, such as a white background, and are easily captured in highly textured regions. In order to obtain the optimal patch size, we conducted a series of experiments and evaluated the performance of our method across different patch sizes. The results obtained using different patch sizes are listed in Table 4. From the statistics presented in Table 4, it can be observed that using small patch sizes reduces the subsequent quality estimation performance due to inaccurate localization of low-texture regions at patch boundaries. In contrast, a very large patch size also negatively affected the overall performance of our system due to insufficient localization of stitching errors. Thus, we achieved a better tradeoff by choosing a suitable patch size for stitching-induced error segmentation and overall quality estimation of the panoramic image.

Dominancy Analysis of Stitching Errors
To analyze the effect of specific stitching errors on the quality reduction of panoramic images, we conducted an experimental study and investigated the dominancy of three different types of stitching errors: parallax error, blending error, and blur error. For experimental purposes, we collected a total of 60 images (20 images per stitching error) and estimated the natural scene statistics of the selected test images using No-Reference IQA methods, including BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) [53], DIIVINE (Distortion Identification-based Image Verity and Integrity Evaluation) [54], NIQE (Natural Image Quality Evaluator) [55], and BIQI (Blind Image Quality Indices) [56]. The main motivation behind the selection of these four methods was that they do not compute distortion-specific features, such as blur distortion or blocking artifacts, but use the scene statistics of locally normalized luminance coefficients of the image. The quality of the selected set of images was estimated using these four No-Reference IQA methods, and the average quality score of each method per stitching error was computed. Besides, we also estimated the average quality score per stitching error using our proposed DLNR-SIQA method and compared the obtained scores with the other No-Reference IQA methods. The dominancy analysis of the three different types of stitching errors is depicted in Figure 10, where it can be observed that the quality score for each type of error ranges from 0 to 100. It is worth noticing that blur-distorted images have the highest average quality score across all methods, which shows the lowest dominancy of blur error/distortion on the quality of panoramic images. On the other hand, parallax images have the lowest average quality score across all methods, showing the highest dominancy of parallax error/distortion on the perceptual quality of panoramic images.
The blend-distorted images have an average quality score between those of parallax and blur distortion, reflecting the intermediate dominancy of the blending distortion on the quality of distorted images. Thus, the experimental study verified that parallax distortion has the highest dominancy, while blur distortion has the lowest dominancy, on the quality of panoramic contents.

Limitations of Our Proposed System
Besides its effective performance for various types of stitching errors, the proposed method has certain limitations. Foremost, the proposed method is based on the Mask R-CNN network for segmenting the errors present in panoramic images, where the size of these immersive contents ranges from 2K to 16K. As a result, the time complexity of the system is very high, which limits the performance of the proposed method in real time. Further, in addition to the common stitching errors, i.e., blur, parallax, and blending, there are other types of panoramic errors, including the ghosting effect, contrast variance, vignetting, and a curved horizon, and the proposed method has limitations when dealing with such errors in panoramic images.

Conclusions and Future Work
Deep learning has recently achieved record-breaking performance in various fields of computer vision, including object detection and segmentation, abnormal event detection, and activity recognition. Besides these trending fields of computer vision, error analysis in images has recently been studied by researchers, and numerous deep learning techniques have been proposed to automatically validate the quality of traditional images. However, these methods are limited to evaluating the quality of traditional images and cannot be applied to panoramic images to evaluate the quality of their immersive contents. With these motivations, in this paper we proposed a novel DLNR-SIQA framework to segment and extract three common types of stitching-distorted regions (blend, parallax, and blur) present in panoramic images.
In addition, we manually annotated the three types of errors using the Google Street View dataset and fine-tuned the Mask R-CNN for the segmentation and localization of the distorted regions. Finally, the areas of the distorted regions were measured per pixel to estimate the overall final quality of the panoramic image. To validate the performance of the proposed method, we used a set of well-known image segmentation performance evaluation metrics, including P, R, DSC, JI, mPA, mAP, and mAE, on which our proposed method showed dominance over the state-of-the-art methods. To further verify the generalization of our method, we also compared it with existing SIQA methods using the SRCC, PLCC, and RMSE measures. The obtained results revealed the effectiveness of our DLNR-SIQA framework, indicating that it is a strong candidate for both visual inspection and quality assessment of panoramic images. Further, the proposed system can be used as part of VR systems to segment and extract stitching-distorted regions and measure the quality score of the immersive contents. Currently, our proposed framework is only focused on certain types of stitching distortion in panoramic images. In the future, we will extend this work to investigate stitching-induced distortions in 360° and VR videos. Further, this work can be intelligently merged with real-time stitching error detection and tracking.
Impact of involving the community in entomological surveillance of Triatoma infestans (Klug, 1834) (Hemiptera, Triatominae) vectorial control Background Vectorial transmission is the principal path of infection by Trypanosoma cruzi, the parasite that causes Chagas disease. In Argentina, Triatoma infestans is the principal vector; therefore, vector control is the main strategy for the prevention of this illness. The Provincial Program of Chagas La Rioja (PPCHLR) carries out entomological evaluation of domiciliary units (DUs) and spraying of those where T. infestans is found. The lack of government funds has led to low visitation frequency by the PPCHLR, especially in areas with a low infestation rate, which are not prioritized. Therefore, seeking possible alternatives to complement control activities is necessary. Involving householders in entomological evaluation could be a control alternative. The major objective was to determine the cost of entomological evaluation with and without community participation. Methods For entomological evaluation without community participation, PPCHLR data collected in February 2017 over 359 DUs of the Castro Barros Department (CBD) were used. For entomological evaluation with community participation, 434 DUs of the same department were selected in November 2017. Each householder was trained in collecting insects, which were kept in labeled plastic bags, recovered after 2 weeks, and analyzed in the laboratory for the presence of T. cruzi. Using householders' collection data, a spatial scan statistic was used to detect clusters of different T. infestans infestations. Entomological evaluation costs with and without community participation related to the numbers of DUs visited, DUs evaluated, and DUs sprayed were calculated and compared between methodologies. In addition, the number of DUs evaluated out of the DUs visited was compared. Results According to the results, the triatomines did not show evidence of T. cruzi infection. Spatial analysis detected heterogeneity of T. infestans infestation in the area. Costs related to the DUs visited, evaluated, and sprayed were lower with community participation (p < 0.05). In addition, more DUs were evaluated in relation to those visited and a greater surface area was covered with community participation. Conclusion Participation of the community in the infestation survey is an efficient complement to vertical control, allowing the spraying to be focused on infested houses and thus reducing the PPCHLR's costs and intervention times. Background Chagas disease continues to be an important public health problem in Latin America where an estimated 6 to 7 million people have been infected with the Trypanosoma cruzi (Chagas, 1909) (Kinetoplastida, Trypanosomatidae) parasite, the causative agent of this disease [1]. In Argentina, it is assumed that 1 to 3 million people could have this disease, although there are currently no official data on the number of people infected or at risk of T. cruzi infection [2].
This parasite can be transmitted in different ways, but mainly through vector transmission, that is, by contact with feces of infected triatomines [1], so the vectorial control of this infestation is the central strategy for the prevention of the illness. In Argentina, Triatoma infestans (Klug, 1834) (Hemiptera, Triatominae) is the triatomine species with the greatest epidemiological importance, given its ability to inhabit inside and on the periphery of houses. In this country, Chagas disease vector control is focused on T. infestans infestation. La Rioja province is endemic for Chagas disease and considered of medium risk for the transmission of this disease by T. infestans [3]. The Provincial Program of Chagas La Rioja (PPCHLR) works on entomological evaluation and insecticide spraying in positive houses to eliminate T. infestans infestation. When vector control actions are carried out in a sustained and committed way over time, triatomines' presence in houses is reduced and consequently the risk of vector transmission decreases. However, in areas where the infestation is reduced, a paradox occurs as these areas lose surveillance priority and are visited less frequently, and their chemical treatments are postponed. This misconception produces a huge setback in achieving the main objective, which is vectorial transmission interruption. Another factor is that the householder's claims are not considered, so the houses where the PPCHLR does not find T. infestans do not receive treatment. For entomological evaluation of houses, the PPCHLR staff moves from the capital city to the different departments and returns to them depending on political decisions about economic resource utilization. Therefore, the PPCHLR focuses surveillance on departments with high infestation rates. The Castro Barros Department (CBD) has had a low frequency of vector control interventions, that is, every 3 or 4 years. This situation is a consequence of the lower T. infestans infestation rate in relation to other areas that have been a priority for the PPCHLR, such as the San Martin Department [4,5]. It is known that longer intervention intervals increase the risk of recovery of T. infestans populations [6]. Simultaneously, the community demands vector control activities because of the frequent presence of T. infestans, as it is impossible to consolidate control and surveillance actions with vertical strategy methods in extended rural areas [7]. A theoretical vertical vector control model would be annual interventions by specialized technicians who evaluate and spray houses [8]. However, the logistical capacity does not exist in La Rioja Province. Given the current situation, the advantages and disadvantages of maintaining only vertical PPCHLR interventions in low infestation areas need to be re-evaluated. Therefore, in this area, community participation in entomological surveillance would be an essential tool given this complex scenario, but the cost of this type of control activity needs to be evaluated. Several authors have reported the advantages of more active community participation by having householders collect triatomines in their homes in response to reports on the presence and control of T. infestans populations (e.g., [9][10][11]). The use of this method allows extending the search time and obtaining infestation data in the houses, which is often not possible because of the limited time allocated to them [11].
Householder collection of triatomines may be more sensitive than active searches in some areas [11][12][13][14], especially when the infestation is of low density [15]. In addition, triatomines were reported more by householders than by active searches [11]. Several authors have mentioned that community participation should be considered instead of active searches to detect T. infestans infestation foci, although recognizing that the effect is more pronounced in dwellings than in their surrounding structures [13][14][15][16]. Community-based vector control is the most cost-effective alternative in rural areas with limited resources [14,17]. In a previous study, our work team concluded that real field data on the costs of different control methods were needed to complete the entomological surveillance analysis [13]. This study originated within a larger project in response to the community's request to our research team, which resides in CBD. The main objectives of this work were to determine the cost of vectorial control activities with and without community participation and to analyze the spatial distribution of T. infestans infestation in the study area. Although part of the study aimed to reduce costs by involving the community in entomological surveillance, the main objective was to quantify the costs of each methodology with real field values to determine the magnitude of this difference in detail. It is important to clarify that the work does not compare the methodologies' sensitivities or infestation detection differences among the field samples. Study area CBD is located to the northeast of La Rioja Province, Argentina. Its departmental head is the Aminga locality, 95 km from the capital city. It is located in the biogeographic region of Monte Desert. The population density is around three inhabitants/km². It is a rural population, concentrated in ten localities that function as an oasis due to the availability of surface water. The total population of CBD is 4268 inhabitants [18]. Domestic infestation from 2009 to 2013 ranged from 1.18% to 9.87%. In 2013, the latest departmental intervention was carried out by the PPCHLR without community participation (PPCHLR unpublished data). Each intradomicile (ID) with its peridomestic structures (PDs) was defined as a domiciliary unit (DU). A DU was recorded as "infested or positive" when at least one T. infestans individual was found in the DU. Entomological evaluation without community participation In February 2017, 359 DUs localized in Aminga (one of ten localities of the department) were visited for entomological evaluation, carried out by 20 PPCHLR technicians. A dislocating agent (tetramethrin 2%) was used in the search, which was interrupted when an insect was found or until an hour of capture effort had been completed (hour/person method). The infestation status and treatment of each georeferenced DU were registered. This type of control was defined as vertical intervention. An evaluated DU was defined as a DU in which the householder was present and approved entomological evaluation, while a DU visited was defined as all DUs, including those in which the householder was not present at the time of entomological evaluation carried out by PPCHLR technicians. Entomological evaluation by community participation In December 2017, 434 DUs from 9 localities in CBD (Fig. 1) were visited, and T. infestans infestation was evaluated by the householders [13]. The samples were selected in three steps.
First, the total number of DUs in each locality was counted. Second, the minimum DU number was estimated to guarantee coverage of at least 20% of the total DU number in each locality. Finally, DU samples from each locality were included considering two factors: the best representation of all points in the locality and the householder's disposition and interest in participating in this study (Table 1). Inhabitants of the selected DUs received a detailed explanation of the study and were invited to participate. Inhabitants that accepted the invitation were trained in triatomine identification, places to search, and careful collection methods to avoid the risk of accidental infection. Each participating family received plastic bags labeled with the DU identification code for different ecotopes. A registry was made with the participating families and the characteristics of their DU. The householders' collection period was 2 weeks in November and December 2017 (from delivery to plastic bag collection). Simultaneously, health agents from local hospitals worked alongside the householders. The collection bags were transferred to the laboratory where the species, gender, and developmental stage [19] were determined, and T. cruzi detection was performed using rectal material. The number of T. infestans according to developmental stage and gender was quantified for each locality. For T. cruzi analysis, the fresh fecal samples were examined in a drop of physiological solution. A visited DU was defined as one in which the householder had been given bags for insect collection. An evaluated DU was defined as one in which the householder gave us bags with or without material. A closed DU was defined as one in which the householder was not present at the time of bag collection. The difference between DUs visited and DUs evaluated was that some residents received a collection bag but were not in their home at the sample recovery time. Costs with and without community participation To determine the cost of vectorial control activities with and without community participation, the accounting information for the CBD campaigns provided by the PPCHLR was used (February and December 2017). The cost without community participation was constituted only by the 'expenses' (fuel and travel wages) incurred by the PPCHLR. When householders participated, the cost comprised the sum of the expenses incurred by the PPCHLR and the inputs used (brochures, gloves, and fuel) for traveling to the study area to train householders or to collect the bugs. The costs in relation to the numbers of DUs visited, DUs evaluated, and DUs sprayed were calculated for each methodology. The insecticide and spraying machine costs were not considered since they were the same for both methodologies. In both cases, the supplies to carry out spraying were provided by the National Chagas Program and did not involve an additional cost for the PPCHLR. Climatic Variables To verify the climatic conditions in the study area and based on equipment availability, three data loggers (HOBO U10/003 Onset Computer Corp, Bourne, MA, USA) were respectively placed in the Pinchas, Anillaco, and Santa Vera Cruz localities (Fig. 1). Temperature (°C) and relative humidity (%) were recorded at 15-min intervals between 7:00 p.m. and 10:30 p.m., corresponding to the moment of peak active dispersion of T. infestans [20][21][22]. Data analysis The percentage of infested DUs was calculated over the total evaluated DUs by locality.
For cost analysis, data were compared between methodologies using the chi-square test of the Infostat program [23]. A spatial scan statistic with a Poisson model was used to detect clusters (geographically aggregated groups of localities with higher or lower infestation compared with the regional average). Locality was the analysis unit. The analysis was performed using SaTScan v. 9.4.4 [24]. Climatic variables were compared with a non-parametric Kruskal-Wallis test using the Infostat program [23]. The present study was focused on environmental variables because in a previous work the cleaning degree, peridomestic area, and dwelling typology were not associated with T. infestans infestation [25].
Results
In December 2017, 81.6% of DUs were evaluated (354/434) by householders because 18.4% of DUs were closed. The general infestation by T. infestans in the study area was 13.8%, varying between 0 and 50% among localities. Householders collected 79 specimens of T. infestans (Additional file 1: Figure S1), 52 individuals in IDs and 27 in PDs, none with presence of T. cruzi infection. The number of T. infestans collected in the localities could not be compared because the collection time per householder was not standardized. Moreover, more DUs were evaluated in relation to those visited (χ² = 23.43, p < 0.0001) and a larger surface area was covered (163.9 vs. 0.8 km²) with community participation. The spatial analysis allowed detection of differences in infestation with respect to the area average. Three clusters were identified in the area (Fig. 2). The first cluster, called the North Zone, presented an infestation rate of 0.02%, which is less than the average in the area (relative risk = 0.1; p = 0.02), and covered three localities, San Pedro, Santa Vera Cruz, and Anjullón, with 61 DUs and a radius of 6.33 km centered at −28.66° S, −66.92° W. The second group of localities comprised the Center Zone cluster, with an infestation rate of 0.07%, also lower than expected (relative risk = 0.37; p = 0.04), with two localities, Aminga and Anillaco, with 157 DUs and a 4.68 km radius centered at −28.85° S, −66.93° W. The third cluster, called the South Zone, grouped the southern localities (including Agua Blanca and Pinchas) and presented an infestation higher than the area average. As possible factors that could influence zonal infestation differences, temperature and relative humidity were compared among clusters. The South Zone had higher temperatures than the Center Zone, which in turn had higher temperatures than the North Zone (H = 96.73, df = 2, p < 0.0001). Concerning relative humidity, the South Zone had lower humidity than the Center Zone, which in turn showed lower humidity than the North Zone (H = 59.51, df = 2, p < 0.0001). Table 4 shows the median temperature and relative humidity. Discussion In this work, the impact of incorporating community participation in areas with low domestic infestation, which in general is neither a focus of study nor a priority when applying vector control actions, is analyzed. The low frequency of a vertical program cannot meet demand in areas where the infestation risk is known to be low [5,7], causing domestic vector persistence to continue and allowing population recovery between spraying cycles [6,26]. Incorporating participatory approaches against vector-borne diseases has been shown to be important for control program sustainability [9,13,17,[26][27][28][29][30]. Periodic inspection allows early detection of new foci of reinfestation in the intradomiciles [11,14,31]. However, a bio-ecosocial approach alone does not always reduce the infestation [32].
In addition, one of the main criticisms of the incorporation of community participation in health programs refers to the process and place given to the community in the decision-making process [8]. Different people bring different assessments to a situation and these must be taken into account [33]. In this study, community intervention was the focus in the surveillance phase to guarantee early triatomine detection. Furthermore, an active and positive attitude was promoted in the local population, and the householders were able to voice their doubts about the transmission and prevention of Chagas disease. In this work, using field data collected in the same year and without modeling for indirectly estimated variables, two intervention types were compared, showing that costs related to DUs visited, evaluated, and sprayed were lowered with community participation. In addition, more DUs were evaluated and a larger surface area was covered with community participation. Many works have shown a cost decrease when a community collaborates in surveillance [11,17,30,[34][35][36][37], although with completely different approaches that do not allow a direct comparison with our data. Some studies have focused on vectorial control costs; for example, in Mexico, the cost to evaluate a domicile entomologically to detect T. dimidiata (Latreille, 1811) was US$70 for an infested house by carrying out an active search and only US$10 when householders were involved [11]. Also, in Santiago del Estero (Argentina), a very complete analysis was carried out considering community intervention, and the cost-effectiveness was estimated in the attack phase where householders sprayed their own houses [17]. These latter results are not comparable to our data since our focus was only on entomological surveillance and spraying was only carried out by specialized personnel. In La Rioja Province, it is assumed that a house should be sprayed when PPCHLR technicians corroborate the presence of T. infestans. PPCHLR searches are carried out during the day; however, the householders can carry out searches during both the day and night. In this case, the probability of finding dispersants is higher because T. infestans' peak activity occurs between 7 and 10 p.m. [20]. Our results showed that most of the insects collected by householders were found in IDs (52/79), of which 13.5% (7/52) were found on external walls or lights or in the mosquito netting, so they were assumed to be dispersants from other sources. Therefore, it is important to establish an appropriate response to each T. infestans collected by a householder. Female T. infestans in particular represent an epidemiological risk as colonizers of houses, justifying a control intervention. Each fertilized female can lay 100-600 eggs in her lifetime [38]. Dispersant females carry numerous eggs within their oviducts to ensure successful colonization of a new habitat [21], so it is important not to postpone control actions. In the case of triatomine dispersant collection, the possibilities of invasion can be reduced by physical protection (such as mosquito netting) [8]. For the control circuit to function correctly and to avoid "false-positive" reports, we proposed that householders inform the municipal agents about the presence of T. infestans in their houses. This requires that each department count on a municipal referent who verifies the presence of this species. If houses are T.
infestans positive, personnel designated for this purpose should spray them and the surroundings. Although in this particular context our CRILAR medical entomology team participates in a social commitment, it is expected that this activity should be carried out routinely by health staff in the area or the Chagas municipal referent, implying that there would be no extra costs. In this way, technicians' work would be optimized, focusing on spraying positive houses already surveyed by sanitary agents, while reducing travel, wage, and fuel costs for the transfer of PPCHLR personnel to the field. Even in a hypothetical deficient detection situation, it is an advantage if houses reported positive by neighbors are sprayed. These economic resources would be designated to increase the treatment frequency by the PPCHLR in areas of higher infestation. Understanding the variables associated with infestation in the area will help design entomological surveillance implementation [8]. Due to the localities involved, the coverage, capture type, and sampling date were different between methodologies, and infestation rates could not be compared. However, the analyses of infestations using the same methodology, within the same study area on the same date, that is, infestation data obtained with community participation in CBD in December 2017, were comparable among areas and allowed detecting zones with different risks of T. infestans infestation. Heterogeneity in infestation probability is known in the areas of Gran Chaco [4,5,13,34,39]. In addition, T. infestans domestic infestation estimated with community participation allowed detecting a spatially heterogeneous infestation in CBD. Within this department, the southern zone presented the highest risk of infestation. Heterogeneity in the infestation risk could be associated with climatic conditions because the southern zone presented higher temperature and lower humidity compared to the other areas. These climatic conditions could allow optimal growth of the species, as was observed by other authors [34,40,41]. Although the climatic variable ranges in the different zones were within the optimal values, the zone with the highest temperature and lowest humidity provided greater development of T. infestans populations. The optimal levels for most triatomines are temperatures of 26-29°C and ≤ 70% relative humidity. When temperatures are higher in this range, insects need greater humidity to prevent dehydration. If the climatic conditions are not wet enough, the danger of dehydration can only be countered by increasing the number of bloodmeals, producing a life-cycle reduction with a population increase [42]. Another factor that could explain zonal differences was the presence of PDs because these provided refuge and feeding sources for triatomines [43]. In the southern zone (Agua Blanca and Pinchas), the presence of T. infestans in PDs in the evaluated DUs (6/22) was observed, but not in the northern and central zones ( Table 2). These results showed that some factors promote the presence of T. infestans, particularly in the southern zone of CBD. An orderly and efficient entomological surveillance system is necessary in rural areas far from the capital with different degrees of urbanism and PD complexity; otherwise, the feasibility of maintaining successful chemical control diminishes. For example, Los Llanos is a rural area with scattered and abundant houses and PD complexes (more than one corral, chicken coop, or warehouse) [13]. 
Comparatively, in CBD, the houses are aggregated and close, and PDs are lower and less complex. This work showed that involving the community in entomological surveillance reduced costs, covered a greater surface area and proportion of DUs evaluated, and encouraged early T. infestans detection. It is the first step in stimulating control interventions. However, for this strategy to be effective, municipalities should carry out sustained surveillance work and chemical control interventions to prevent T. infestans populations from recovering after an application interval. Therefore, these actions must continue to be encouraged, and the authorities must be committed to providing quick and effective responses to householder demands. Conclusion In this study, we provided important and well-founded data on the costs of entomological surveillance when carried out with community participation to complement actions of vectorial control programs between periods of vertical intervention. Community participation is recommended in low infestation areas where a vertical control strategy and adequate control frequency are difficult. This strategy is efficient in increasing collection coverage, allowing spraying to be focused on infested houses, and thus reducing costs and intervention times by control programs, integrating easily with other health programs. Additional file 1: Figure S1. Number of Triatoma infestans collected by developmental stage and gender in the localities evaluated with community participation.
Statistical Optimization of Oral Vancomycin-Eudragit RS Nanoparticles Using Response Surface Methodology. A Box-Behnken design with three replicates was used for preparation and evaluation of Eudragit vancomycin (VCM) nanoparticles prepared by double emulsion. The purpose of this work was to optimize VCM nanoparticles to improve their physicochemical properties. Nanoparticles were formed using the W1/O/W2 double-emulsion solvent evaporation method with Eudragit RS as a retardant material. A full factorial design was employed to study the effect of the independent variables, RPM (X1), amount of emulsifier (X2), stirring rate (X3), volume of organic phase (X4) and volume of aqueous phase (X5), on the dependent variables production yield, encapsulation efficiency and particle size. The optimum condition for VCM nanoparticle preparation was a 1:2 drug to polymer ratio, 0.2% w/w emulsifier, 25 mL (volume of organic phase), 25 mL (volume of aqueous phase), 3 min (time of stirring) and 26000 RPM. RPM and emulsifier concentration were the effective factors on the drug loading (R2 = 90.82). The highest entrapment efficiency was obtained when the ratio of drug to polymer was 1:3. The zeta (ζ) potential of the nanoparticles was fairly positive at the molecular level. The in vitro release study showed two phases: an initial burst for 0.5 h followed by a very slow release pattern over a period of 24 h. The release of VCM was influenced by the drug to polymer ratio and particle size and was found to be diffusion controlled. The best-fit release kinetics were achieved with the Peppas model. In conclusion, the VCM nanoparticle preparations showed an optimized formulation, which can be useful for oral administration.
Introduction
Vancomycin (VCM) is a glycopeptide antibiotic that inhibits bacterial cell wall synthesis at an earlier stage than the beta-lactam antibiotics. Since the oral absorption of VCM is minimal, it is usually given IV (1). VCM is used for the treatment of infections caused by methicillin-resistant staphylococci. It has a high molecular weight and is water-soluble and poorly absorbable from the gastrointestinal tract (2). The oral absorption of highly polar and macromolecular drugs is frequently limited by poor intestinal wall permeability. Some physicochemical properties that have been associated with poor membrane permeability are low octanol/aqueous partitioning, the presence of strongly charged functional groups, high molecular weight, a substantial number of hydrogen-bonding functional groups and high polar surface area (3,4). Many therapeutic compounds such as antibiotics and peptide and protein drugs require the use of some kind of absorption enhancer to obtain reasonable plasma concentrations. By loading antibiotics into nanoparticles, one can expect improved delivery to infected cells. Nanoparticles are the carriers developed for these logistic targeting strategies and are colloidal in nature, biodegradable and similar in behavior to intracellular pathogens. These colloidal carriers, when administered intravenously, are rapidly taken up by the cells of the mononuclear phagocyte system, the very cells which may constitute a sanctuary for intracellular bacteria (5,6). Therefore, the entrapment of antibiotics within nanoparticles has been proposed for the treatment of intracellular infections (5). The encapsulation of VCM in liposomes and microspheres has been described in previous works (1-3). It has been proposed that VCM-PLGA-loaded microspheres may show a better bioavailability than the free drug (3). Eudragit RS 100 is a polymer commonly used for the preparation of controlled-release oral pharmaceutical dosage forms. Eudragit RS100 contains different amounts of quaternary ammonium groups ranging from 4.5-6.8% and is a neutral copolymer of poly(chlorotrimethylammonioethyl methacrylate). As Eudragit RS 100 is insoluble at physiological pH values, it has been used as a good polymer for the preparation of pH-independent sustained-release formulations of drugs (7). Various non-biodegradable polymers with good biocompatibility such as Eudragit and ethyl cellulose have been used in the preparation of microspheres. Polymethylmethacrylate microspheres were extensively used as bone cement materials in antibiotic releasing agents for bone infection and bone tumors (7). The double-emulsion solvent extraction/evaporation technique is the most commonly used method to encapsulate hydrophilic drugs, especially protein and glycoprotein drugs, into polymeric microspheres (8). Indeed, the presence of a polymeric wall provides protection from the gastrointestinal environment and may favor a prolonged contact with the epithelium that may be sufficient to increase the bioavailability of certain drugs. Response surface methodology is a useful tool in the development and optimization of controlled release nanoparticles (9). Different steps involved in response surface methodology include experimental design, regression analysis, constraint optimization and validation. Response surface methodology (RSM) is a widely practiced approach in the development and optimization of drug delivery devices. Based on the principle of design of experiments (DoEs), the methodology encompasses the use of various types of experimental designs, generation of polynomial equations, and mapping of the response over the experimental domain to determine the optimum formulation(s) (10). The technique requires minimum experimentation and time, thus proving to be far more effective and cost-effective than the conventional methods of formulating dosage forms.
In the present investigation, the effect of factors (rpm, volume of organic phase, aqueous phase, time of stirring and concentration emulsifier) that can influence the drug loading, loading efficiency, particle size and production yield of VCM nanoparticles from Eudragit RS was investigated. Experimental design The experimental design was a modified Box-Behnken design for five variables. This hydrogen-bonding functional groups and high polar surface area (3,4). Many therapeutic compounds such as antibiotics and peptide and protein drugs require the use of some kind of absorption enhancer to obtain reasonable plasma concentrations. By loading antibiotics into the nanoparticles, one can expect improved delivery to infected cells. Nanoparticles are the carriers developed for these logistic targeting strategies and are colloidal in nature, biodegradable and similar in behavior to intracellular pathogens. These colloidal carriers, when administered intravenously, are rapidly taken up by the cells of the mononuclear phagocyte system, the very cells which may constitute a sanctuary for intracellular bacteria (5, 6). Therefore, the entrapment of antibiotics within nanoparticles has been proposed for the treatment of intracellular infections (5). The encapsulation of VCM in liposomes and microspheres has been described in previous works (1-3). It has been proposed that VCM-PLGA-loaded microspheres may show a better bioavailability than the free drug (3). Eudragit RS 100 is a polymer commonly used for the preparation of controlled-release oral pharmaceutical dosage forms. Eudragit RS100 contains different amounts of quaternary ammonium groups ranging from 4.5-6.8% and is a neutral copolymer of poly (chlorotrimethylammonioethyl methacrylate). As Eudragit RS 100 is insoluble at physiological pH values, it has been used as a good polymer for the preparation of pH-independent sustained-release formulations of drugs (7). Various non-biodegradable polymers with good biocompatibility such as Eudragit and ethyl cellulose have been used in the preparation of microspheres. Polymethylmethacrylate microspheres were extensively used as bone cement materials in antibiotic releasing agents for bone infection and bone tumors (7). Doubleemulsion solvent extraction/evaporation technique is the most commonly used method to encapsulate hydrophilic drugs, especially protein and glycoprotein drugs, into polymeric microspheres (8). Indeed, the presence of a polymeric wall provides a protection from the gastrointestinal environment and may favor a prolonged contact with the epithelium that may be sufficient to increase the bioavailability of design was suitable for exploring quadratic response surfaces and constructing secondorder polynomial models. Four independent formulation variables analyzed during the study including the amounts of emulsifier (X 1 ), volume of organic solvent (X 2 ), and the amount of dispersing medium (X 3 ), time of stirring (X4) and rate of stirring (X5). The investigated dependent variables were the drug content (DC, Y 1 ), loading efficiency (LE, Y 2 ), particle size (PS, Y 3 ), and production yield (PY, Y 4 ). The complete design consisted of 27 experimental points, which are included three replications. The 81 experiments were carried out in random order. Data were analyzed to fit the polynomial equation to Y (9). 
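Because the design runs are analyzed by multiple linear regression on a second-order polynomial, a small sketch of such a fit is shown here. This is not the authors' code: the factor coding, the toy response and the run layout below are assumptions for illustration only, using plain NumPy least squares.

```python
# Minimal sketch (not the authors' code): fitting the second-order polynomial
# Y = b0 + sum(bi*Xi) + sum(bii*Xi^2) + sum(bij*Xi*Xj) to one response by
# ordinary least squares. Factor levels are assumed to be coded to -1/0/+1.
import itertools
import numpy as np

def quadratic_design_matrix(X):
    """Expand an (n_runs x k) matrix of coded factors into the full second-order
    model: intercept, linear, squared and two-way interaction columns."""
    n, k = X.shape
    cols = [np.ones(n)]                       # b0 (intercept)
    cols += [X[:, i] for i in range(k)]       # linear terms bi*Xi
    cols += [X[:, i] ** 2 for i in range(k)]  # quadratic terms bii*Xi^2
    cols += [X[:, i] * X[:, j]                # interaction terms bij*Xi*Xj
             for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares fit of the quadratic model; returns coefficients and R^2."""
    M = quadratic_design_matrix(X)
    b, *_ = np.linalg.lstsq(M, y, rcond=None)
    y_hat = M @ b
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return b, 1.0 - ss_res / ss_tot

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical coded runs (81 x 5 factors) standing in for the 27-point
    # design run in triplicate; real run order and levels come from the study.
    X = rng.choice([-1.0, 0.0, 1.0], size=(81, 5))
    y = 70 + 5 * X[:, 0] - 3 * X[:, 1] ** 2 + rng.normal(0, 1, 81)  # toy response
    coef, r2 = fit_response_surface(X, y)
    print(f"R^2 = {r2:.3f}")
```

The same design matrix serves for the full and reduced models discussed later: dropping the columns of non-significant coefficients and refitting gives the reduced model.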
Preparation of nanoparticles

VCM-loaded Eudragit RS100 nanoparticles were prepared by the W1/O/W2 solvent evaporation method using different ratios of drug to polymer (1:1, 1:2 and 1:3). Briefly, 5 mL of aqueous internal phase (containing 100 mg VCM) was emulsified for 15 sec in 20 mL of methylene chloride (containing 100, 200 or 300 mg Eudragit RS100) using a homogenizer (22,000 rpm). This primary emulsion was poured into 25 mL of a 0.2% PVA aqueous solution while stirring with a homogenizer for 3 min, immersed in an ice water bath, to create the water-in-oil-in-water emulsion. Three to four mL of nanoparticle suspension was obtained after solvent evaporation under reduced pressure (Evaporator, Heidolph, USA). Nanoparticles were separated from the bulk suspension by centrifugation (Hettich Universal 320R, USA) at 22,000 g for 20 min. The supernatant was kept for drug assay as described later, and the sedimented nanoparticles were collected, washed with three portions of 30 mL water and redispersed in 5 mL of purified water before freeze-drying. Blank nanoparticles (without drug) were prepared under the same conditions (11, 12).

Micromeritic properties

A laser light scattering particle size analyzer (SALD-2101, Shimadzu, Japan) was used to determine the particle size of the drug, polymer and nanoparticulate formulations. Samples were suspended in distilled water (nanoparticles and polymer) or acetone (drug) in a 1 cm cuvette and stirred continuously during the particle size analysis.

Zeta potential measurement

Zeta (ζ) potential measurements of diluted samples were made with a ZetaSizer (Malvern Instruments Ltd., Malvern, UK). Zeta potential values obtained from the ZetaSizer were average values from twenty measurements made on the same sample. Initial measurements on several samples of the same kind showed that this number is sufficient to give a representative average value. VCM nanoparticles were diluted with deionized water before the measurement.

Loading efficiency and production yield (%) determination

The drug concentration in the polymeric particles was determined spectrophotometrically (UV-160, Shimadzu, Japan) at 280.2 nm by measuring the amount of non-entrapped VCM in the external aqueous solution (indirect method) before freeze-drying. In the case of nanoparticles, the external aqueous solution was obtained after centrifugation of the colloidal suspension for 20 min at 22,000 g. The loading efficiency (%) was calculated according to the following equation:

Loading efficiency (%) = (actual drug content in nanoparticles / theoretical drug content) × 100

The production yield of the nanoparticles was determined by accurately calculating the initial weight of the raw materials and the final weight of the polymeric particles obtained. All of the experiments were performed in triplicate (Table 1).

Table 1. Effect of drug:polymer ratio on drug loading efficiency, production yield, particle size, zeta potential and polydispersity index of vancomycin nanoparticles.

In-vitro release study

VCM dissolution patterns from freeze-dried nanoparticles were obtained under sink conditions. Dissolution studies were carried out using a rotating dialysis bag method. A set amount of nanoparticles (equivalent to 20 mg of drug) was added to 200 mL of dissolution medium (phosphate buffered saline, pH = 7.4), preheated and maintained at 37 ± 1°C in a water bath, then stirred at 100 rpm. Then, 3 mL of solution was withdrawn at appropriate intervals (0.5, 1, 2, 3, 4, 5, 6, 8, 12 and 24 h).
The withdrawn filtrate was replaced by 3 mL of fresh buffer. The amount of VCM in the release medium was determined by UV at 279.8 nm (12,13). In order to have a better comparison between the different formulations, the dissolution efficiency (DE), t50% (the time for dissolution of a 50% fraction of drug) and the difference factor f1 (used to compare multipoint dissolution profiles) were calculated, and the results are listed in Table 2. DE is defined as the area under the dissolution curve up to a certain time (t), expressed as a percentage of the area of the rectangle arising from 100% dissolution in the same time. The areas under the curve (AUC) were calculated for each dissolution profile by the trapezoidal rule (14). DE can be calculated by the following:

DE (%) = [∫0→t y dt / (y100 × t)] × 100

Here, y is the drug percentage dissolved at time t and y100 denotes 100% dissolution. All dissolution efficiencies were obtained with t equal to 1440 min. The in-vitro release profiles of the different nanoparticle formulations were compared with the physical mixture formulation using the difference factor (f1), as defined by:

f1 = [Σ|Rt − Tt| / ΣRt] × 100 (summed over the n time points)

Here, n is the number of time points at which the % dissolved was determined, Rt is the % dissolved of one formulation at a given time point and Tt is the % dissolved of the formulation to be compared at the same time point. The difference factor lies between 0 and 15 when the test and reference profiles are similar, and rises above 15 as the dissimilarity increases. Data obtained from the in-vitro release studies were fitted to various kinetic equations to find out the mechanism of drug release from the Eudragit RS100 nanoparticles. The kinetic models used were the zero-order model, Q = k0 t; the first-order model, ln(Q0 − Q) = ln Q0 − k1 t; and the Higuchi equation based on Fickian diffusion, Q = kH S t^1/2. Here, Q is the amount of drug released in time t, Q0 is the initial amount of drug in the nanoparticles, S is the surface area of the nanoparticle and k0, k1 and kH are the rate constants of the zero-order, first-order and Higuchi equations, respectively. In addition to these basic release models, the release data were fitted to the Peppas and Korsmeyer equation (power law):

Mt/M∞ = k t^n

Here, Mt is the amount of drug released at time t and M∞ is the amount released at time t = ∞; thus Mt/M∞ is the fraction of drug released at time t, k is the kinetic constant, and n is the diffusion exponent which can be used to characterize the mechanism of drug release (14, 15).

Optimization of the VCM nanoparticles

Response surface methodology (RSM) is a very useful statistical technique for the optimization of VCM formulations. In this design, 5 factors were evaluated, each at three levels, and experimental trials were performed at the 27 factor-level combinations included in the design. The amount of emulsifier (X1), volume of organic solvent (X2) and amount of dispersing medium (X3) were selected as independent variables. The drug content (DC), loading efficiency (LE), particle size (PS), and percentage production yield (PY) were the dependent variables (Table 3). Various batches of the selected formulation (F2) were made in which the stirring rate was the only parameter varied, between 22000, 24000 and 26000 rpm. In addition, while keeping the other parameters constant, the time of homogenizer stirring was changed (1.5, 3 and 4.5 min). After drying, each weighed batch of nanoparticles was subjected to drug content, loading efficiency, particle size and drug release experiments. The influence of the process variables on nanoparticle formation, micromeritics and drug release characteristics was investigated; a small computational sketch of the dissolution metrics defined above is given below.
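The following sketch shows how the dissolution metrics defined above can be computed. It is not the authors' analysis code: the release values are made-up placeholders, and the restriction of the Korsmeyer-Peppas fit to fractions below 0.6 is a common convention assumed here rather than stated in the paper.

```python
# Minimal sketch: dissolution efficiency via the trapezoidal rule, the
# difference factor f1, and a log-log fit of the Korsmeyer-Peppas exponent n.
import numpy as np

def dissolution_efficiency(t_min, released_pct):
    """DE(%) = area under the %released-vs-time curve / (100 * t_final) * 100."""
    auc = np.trapz(released_pct, t_min)
    return auc / (100.0 * t_min[-1]) * 100.0

def difference_factor_f1(reference_pct, test_pct):
    """f1 = sum(|R_t - T_t|) / sum(R_t) * 100 over matched time points."""
    reference_pct, test_pct = np.asarray(reference_pct), np.asarray(test_pct)
    return np.abs(reference_pct - test_pct).sum() / reference_pct.sum() * 100.0

def korsmeyer_peppas_fit(t_h, fraction_released):
    """Fit log(Mt/Minf) = log(k) + n*log(t); only Mt/Minf <= 0.6 is used (assumed convention)."""
    t_h, f = np.asarray(t_h, float), np.asarray(fraction_released, float)
    mask = (f > 0) & (f <= 0.6)
    slope, intercept = np.polyfit(np.log10(t_h[mask]), np.log10(f[mask]), 1)
    return slope, 10.0 ** intercept   # n (diffusion exponent), k (kinetic constant)

if __name__ == "__main__":
    t = np.array([0.5, 1, 2, 3, 4, 5, 6, 8, 12, 24])           # sampling times, h
    nano = np.array([22, 28, 35, 42, 48, 54, 58, 66, 75, 86])   # hypothetical % released
    mix = np.array([55, 70, 82, 90, 94, 96, 97, 98, 99, 99])    # hypothetical physical mixture
    print("DE =", round(dissolution_efficiency(t * 60, nano), 1), "%")
    print("f1 =", round(difference_factor_f1(mix, nano), 1))
    n, k = korsmeyer_peppas_fit(t, nano / 100)
    print("n  =", round(n, 2), " k =", round(k, 3))
```

An f1 value well above 15 for the hypothetical pair above would, as in the text, indicate that the nanoparticle profile differs from that of the physical mixture.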
The process variables studied included the emulsifier concentration (0.1, 0.2 and 0.4%), the volume of organic solvent (15, 20 and 25 mL) and the volume of dispersing medium (15, 25 and 35 mL).

Regression analysis

The targeted response parameters were statistically analyzed by applying one-way ANOVA at the 0.05 level. Individual response parameters were evaluated using the F-test, and quadratic models of the form given below were generated for each response parameter using multiple linear regression analysis (17).

Y = b0 + b1X1 + b2X2 + b3X3 + b4X4 + b5X5 + b11X1² + b22X2² + b33X3² + b44X4² + b55X5² + b12X1X2 + b13X1X3 + b14X1X4 + b15X1X5 + b23X2X3 + b24X2X4 + b25X2X5 + b34X3X4 + b45X4X5

In this equation, Y is the predicted response; X1, X2, X3, X4 and X5 are the independent variables; b0 is the intercept; b1, b2, b3, b4 and b5 are linear effects; b11, b22, b33, b44 and b55 are quadratic effects; and b12, b13, b14, b15, b23, b24, b25, b34 and b45 are interaction terms. The main effects (X1, X2, X3, X4 and X5) represent the average result of changing one factor at a time from its low to high value. The interaction terms (X1X2, X1X3, X1X4, X1X5, etc.) show how the response changes when two factors are changed simultaneously. The polynomial terms (X1², X2², X3², X4² and X5²) are included to investigate nonlinearity. Three-dimensional (3D) response surface plots were drawn to illustrate the main and interactive effects of the independent variables on production yield, drug content, loading efficiency and particle size. The optimum values of the selected variables were obtained from the software and also from the response surface plots. Numerical optimization using the desirability approach was employed to locate the optimal settings of the formulation variables to obtain the desired response (17). An optimized formulation was developed by setting constraints on the dependent and independent variables. The formulation developed was evaluated for the responses, and the experimental values obtained were compared with those predicted by the mathematical models generated.

Results and Discussion

A W/O/W multiple-emulsion solvent evaporation/extraction method is mostly used for the encapsulation of water-soluble drugs and was therefore the method of choice for the water-soluble VCM. Droplets of the polymer in organic solution were added to an aqueous PVA solution (as stabilizer). At the end, uniform-sized beads were collected (18). In the nanoparticles prepared by the evaporation method, the amount of drug entrapped was lower than the theoretical value. This indicates that some free drug crystals were lost in the process of encapsulation. With the increase of the drug to polymer ratio, the amount of free drug lost decreased (Table 1), so that at the 1:3 drug to polymer ratio the amount of drug entrapment was 23.69%, which was very close to the theoretical value (25%). The encapsulation efficiency of the drug depends on the solubility of the drug in the solvent and the continuous phase (19). Important prerequisites for high encapsulation efficiencies by the W/O/W method are: (1) the insolubility of the drug in the external phase, and (2) the fine dispersion of the aqueous drug solution in the organic polymer solution to form a W/O emulsion (20). VCM is insoluble in the methylene chloride used to dissolve the polymer and thus cannot partition from the internal into the external phase. The entrapment efficiency of polypeptides is increased by the use of viscosity builders (21).
Despite the hydrosolubility of VCM, favoring the leakage of the drug into the external aqueous phase, the entrapment efficiencies were rather high (22). It is assumed that VCM is localized at the interfaces (either internal water in oil or external oil in water). Therefore, a significant amount of drug is supposed to be adsorbed at the outer surface. In addition, the removal of the organic solvent under reduced pressure favors its fast evaporation followed by the polymer precipitation and thus, reduces the migration of the drug to the external phase. Indeed, the faster the solvent evaporation, the higher the encapsulation efficiency (22). One possible explanation could concern the increase of the primary emulsion viscosity due to the different VCM concentrations studied which could reduce the leakage of the drug towards the external aqueous phase (23). Generally, increasing the polymer to drug ratio increased the production yield, when the ratio of polymerdrug increased from 1:1 to 1:3, the production yield was increased (p > 0.05). Size of microspheres was found to be increased with the increase in the concentration of polymer (Table 1). It can be attributed to the fact that with the higher diffusion rate of non-solvent to polymer solution, the smaller size of microcapsules is easily obtained (22, 23). A volume-based size distribution of drug, polymer, and drugloaded nanoparticles indicated a log-probability distribution. Mean particle size of F 3 was 499 ± 110 nm. The data describing the particle sizes of the nanoparticles are given in Table 1. As it can be seen, the particle size is increased with increasing the polymer amount. It has already been reported that particle size is proportional to the viscosity of dispersed phase (10, 23-25). In fact, viscosity of dispersed phase was increased from F 1 (1:1) to F 3 (1:3). When the viscosity of the dispersed phase of these formulations was investigated, it was found that particle sizes of nanoparticles were directly proportional to the apparent viscosity of dispersed phase. The results showed that the apparent viscosities of the different drugs polymer ratios (1 : 1, 1 : 2 and 1 : 3) were 13, 16 and 18.8 mPa.S respectively. When the dispersed phase with higher viscosity was poured into the dispersion medium, bigger droplets were formed with larger mean particle size. The zeta potential of three nanosphere formulations, VCM and Eudragit RS100 are shown in Table 1. Blank nanoparticles had positive charge (15.7 mV). Drug-loaded nanoparticle indicated more positive charge, which could be ascribed to the cationic nature of VCM. Zeta potential is the potential difference, the dispersion medium and the stationary layer of fluid attached to the dispersed particle. A value of potential zeta (positive) can be taken as the arbitrary value that separates low-charged surfaces from highly-charged surfaces. The significance of zeta potential is that its value can be related to the stability of colloidal dispersion. The zeta potential indicates the degree of repulsion between adjacent, similarly charged particles in dispersion. For molecules and particles that are small enough, a high zeta potential will confer the stability, i.e. the solution or dispersion will resist aggregation (23,27). The in-vitro release profiles of VCM from nanoparticles (Table 2) exhibited initial burst effect, which may be due to the presence of some drug particles on the surface of the nanoparticles. 
In most cases, a biphasic dissolution profile was observed at pH of 7.4 as follows: the initial rapid drug leakage generally ended very early and for the remaining time, nearly linear behavior was observed. The first portion of the dissolution curves is due to VCM dissolution, which starts immediately after the beginning of the test for the portion of drug on the surface of nanoparticles. After such a phase, two phenomena can be combined in enhancing in the diffusion of the remaining dispersed drug into the bulk phase as well as the formation of pores within the matrix due to the initial drug dissolution; particle wetting and swelling which enhances the permeability of the polymer to the drug ( Table 2). The results indicated that some factors such as drug-polymer ratio governed the drug release from these nanoparticles. Drug release rates were decreased with increasing the amounts of polymer in the formulation (Table 2). Higher level of VCM corresponding to lower level of the polymer in the formulation resulted in an increase in the drug release rate (F 1 ). As more drugs are released from the nanoparticles, more channels are probably produced, contributing to faster drug release rates. However, Table 2 shows that the burst effect is lower when the drug to polymer ratio is 1:3 (F 3 ) compared with other formulations. In the formulation F 3 , a decreased diffusivity due to the high polymer concentration could reduce the leakage of the drug towards the dissolution medium and decrease the burst effect (to compare with F 1 and F 2 ). VCM nanoparticles of each formulation displayed an immediate and important initial drug release in the first 2 h (22-40%), followed by an 84-87% during 24 h (Table 2). An immediate high release may be due to the small diameter of nanoparticles leading to a large exchange surface and probably to a more porous structure owing to the solvent evaporation method, favoring the release of the encapsulated drug (28). Indeed, it has been already demonstrated that the slow precipitation of nanoparticles after the solvent evaporation leads to more porous particles compared to the fast polymer precipitation obtained after the solvent extraction (23). The presence of Eudragit RS100 in the matrix of nanoparticles conferred a slower and more progressive release of VCM during the time of the experiment (11). Therefore, any mechanism which is able to restrict the diffusion of VCM towards water would be easily observed, due to the slow diffusion of water into the lipophilic Eudragit RS100 matrix (7,11,(23)(24)(25). F 1 , F 2 and F 3 nanoparticles showed lower dissolution efficiency, i.e. slower dissolution in comparison with respective physical mixture (p < 0.05), (Table 2). According to Table 2, the lowest DE was observed for F 3 (66.37%) and the dissolution efficiency of the physical mixture was 98.03% (p < 0.05). The value of t 50% varied between 2.24 (F 2 formulation) and 4.85 h (F 3 formulation). The results of similarity factor (f 2 ) showed that the release profile of nanoparticle formulations is different from that of physical mixture ( Table 2). The independent variables and their levels were selected based on the preliminary trials undertaken. The %DT, %LE, PS and %PY for the formulations (F 1 to F 27 ) showed a wide variation. The data clearly indicated that the %DT, %LE, PS and %PY values are strongly dependent on the selected independent variables. 
The fitted equations (full and reduced) that relate the responses PY and %F to the transformed factors are shown in Table 4. The polynomial equations can be used to draw conclusions after considering the magnitude of each coefficient and the mathematical sign it carries (i.e., positive or negative). Table 4 shows the results of the analysis of variance (ANOVA), which was performed to identify the insignificant factors. The high values of the correlation coefficients for %DC, %LE, %PY and PS indicate a good fit, i.e., good agreement between the dependent and independent variables. The equations may be used to obtain estimates of the response, as a small error variance was noticed in the replicates. The significance of the regression coefficients was tested by applying the F-test. The results of the statistical analysis are shown in Table 5. The coefficients b1, b2, and b11 were found to be significant at p < 0.05; hence they were retained in the reduced model. The reduced model was tested in portions to determine whether the coefficients b12 and b22 contribute significant information for the prediction of PY or not. The results of multiple linear regression analysis (response surface regression) reveal that, on increasing the amount of emulsifier and the volume of aqueous phase, an increase in DE is observed; the coefficients b1 and b3 bear a negative sign. The results of the statistical analysis are shown in Table 4. The reduced model was tested in portions to determine whether the coefficients b11, b22, and b12 contribute significant information for the prediction of %F or not. The results of testing the model in portions are depicted in Table 5. Hence, conclusions can be drawn considering the magnitude of the coefficient and the mathematical sign (positive or negative) it carries. According to Figure 1, particle size is dependent on major independent factors such as rpm and volume of organic phase. The results show that the linear and interaction components in the proposed model are not very significant (R2 = 87.25). The optimum condition for VCM nanoparticle preparation was a 1:2 (v/w) drug to polymer ratio, 0.2 (%w/w) amount of emulsifier, 25 mL (volume of organic phase), 25 mL (volume of aqueous phase), 3 min (time of stirring) and 26000 rpm. Emulsifier concentration and rpm were the most effective factors on drug loading (R2 = 90.82) (Table 5 and Figure 2). The results obtained from the predicted model were used to create contour plots for loading efficiency and production yield, shown in Figure 3. Emulsifier concentration and rpm affect the loading efficiency and production yield (R2 = 83.59). An increase in the concentration of VCM leads to an increase in drug loading and loading efficiency, since the coefficient b2 bears a positive sign. An increase in the time of stirring leads to an increase in the mean particle size, as the corresponding coefficient bears a positive sign, whereas the increase in rpm results in decreased PS values (Table 5). A small numerical sketch of this kind of desirability-based optimization is given below.
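The numerical optimization step relies on desirability functions, as noted in the regression-analysis section. The sketch below is not the authors' software: the fitted response surfaces, the two coded factors (emulsifier level and rpm) and the desirability targets are hypothetical placeholders, chosen only to illustrate how individual desirabilities are combined into an overall (geometric-mean) desirability and searched over the design space.

```python
# Minimal sketch: Derringer-type desirability optimization over two coded factors.
# All models, ranges and targets below are hypothetical placeholders.
import itertools
import numpy as np

# Hypothetical fitted response surfaces in coded units (x1 = emulsifier, x2 = rpm).
def loading_eff(x1, x2):
    return 75 + 4 * x1 + 6 * x2 - 3 * x1**2 - 2 * x2**2 + 1.5 * x1 * x2

def prod_yield(x1, x2):
    return 80 + 3 * x1 + 2 * x2 - 2 * x1**2 - 1 * x2**2

def particle_size(x1, x2):
    return 550 + 60 * x1 - 90 * x2 + 20 * x1**2

def d_maximize(y, lo, hi):
    """Desirability rising from 0 at 'lo' to 1 at 'hi' (response to be maximized)."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def d_minimize(y, lo, hi):
    """Desirability falling from 1 at 'lo' to 0 at 'hi' (response to be minimized)."""
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

def overall_desirability(x1, x2):
    d1 = d_maximize(loading_eff(x1, x2), 60, 90)      # want high loading efficiency
    d2 = d_maximize(prod_yield(x1, x2), 70, 95)       # want high production yield
    d3 = d_minimize(particle_size(x1, x2), 300, 800)  # want small particles
    return (d1 * d2 * d3) ** (1 / 3)                  # geometric mean of desirabilities

if __name__ == "__main__":
    grid = np.linspace(-1, 1, 41)                     # coded factor levels to search
    best = max(itertools.product(grid, grid), key=lambda p: overall_desirability(*p))
    print("best coded settings (x1, x2):", tuple(round(v, 2) for v in best))
    print("overall desirability:", round(overall_desirability(*best), 3))
```

The coded optimum returned by such a search would then be decoded back to actual settings (e.g., %w/w emulsifier and rpm) and verified experimentally, as described for the optimized formulation in the text.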
The in-vitro release profiles were fitted to various kinetic models in order to find out the mechanism of drug release (29, 30). The rate constants were calculated from the slopes of the respective plots. High correlation was observed for the Peppas model. The data were also fitted to the Korsmeyer-Peppas model in order to find the n-value, which describes the drug release mechanism. The n-value of the nanoparticles at the different drug to polymer ratios was in the range of 0.51-0.91 (Table 6), indicating that the mechanism of drug release was diffusion and erosion controlled.

Conclusion

VCM nanoparticles were prepared using the double-emulsion (W1/O/W2) solvent evaporation method. Drug to polymer ratio, stirring speed, time of stirring, emulsifier, dispersing medium and organic solvent influenced the characteristics of the nanoparticles. The entrapment efficiency was high for all formulations. It was observed that at higher drug concentration the mean particle size of the nanoparticles was high, but increasing the stirring speed and emulsifier content resulted in a smaller mean particle size. The release data showed high correlation with the Peppas model, and fitting to the Korsmeyer-Peppas equation was used to obtain the n-value describing the drug release mechanism. It was suggested that the mechanism of drug release from the nanoparticles was diffusion-controlled. Response surface methodology has been employed to produce VCM nanoparticles for oral drug delivery in Eudragit RS by double emulsion. The formulation variables studied exerted a significant influence on PY, DC, LE and PS. The obtained results indicate that response surface methodology can be employed successfully to quantify the effect of several formulation and processing variables and thereby minimize the number of experimental trials and reduce the formulation development cost.

Acknowledgment

The financial support from the Biotechnology Center, Tabriz University of Medical Sciences, is gratefully acknowledged.
2016-06-02T00:55:57.917Z
2012-12-08T00:00:00.000
{ "year": 2012, "sha1": "119abafd7db48f158fd750b9416286c97740f267", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f6b391d335cc4ec5e82a0fc26c7e5fca3e7782c3", "s2fieldsofstudy": [ "Chemistry", "Engineering", "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
261977763
pes2o/s2orc
v3-fos-license
Hypersexual Disorder: A Comprehensive Review of Conceptualization, Etiology, Assessment and Treatment Hypersexual disorder, also known as compulsive sexual behavior or sex addiction, is a complex and clinically signifi cant condition characterized by intense and recurrent sexual fantasies, urges, or behaviors that signifi cantly disrupt an individual’s daily life and overall well-being. Despite its importance, hypersexual disorder remains a controversial and debated topic, lacking standardized diagnostic criteria in major classifi cation systems. This review paper provides a comprehensive examination of hypersexual disorder, encompassing its defi nition, conceptualization, etiology, co-occurring conditions, eff ects on mental and physical health, assessment, treatment approaches, cultural and ethical considerations, and future research directions. By synthesizing information from existing literature and research, this review aims to deepen our understanding of hypersexual disorder and contribute to the development of evidence-based interventions. The review begins by exploring the evolution of the term “hypersexual disorder” and its current status in diagnostic classifi cations. It then delves into the potential etiological factors contributing to the development of hypersexual behaviors, including neurobiological, genetic, and psychosocial factors. Furthermore, the review discusses the common comorbidities associated with hypersexual disorder, emphasizing the importance of addressing co-occurring mental health conditions in treatment planning. The psychological and physiological eff ects of hypersexual behaviors on aff ected individuals are examined, underscoring the urgency of early intervention and comprehensive treatment. The assessment and diagnosis of hypersexual disorder are thoroughly examined, considering the challenges and methodologies involved in identifying and evaluating aff ected individuals. Cultural and ethical considerations are highlighted, stressing the signifi cance of providing culturally sensitive and ethical care to diverse populations. In the context of treatment, the review discusses various therapeutic approaches, including psychotherapy, medication, support groups, and harm-reduction strategies. The need for evidence-based treatments tailored to hypersexual disorder is underscored while recognizing the challenges of developing standardized protocols in this evolving fi eld. Finally, future research directions are outlined, focusing on the standardization of diagnostic criteria, prevalence studies, neurobiological investigations, and the integration of cultural competency in treatment approaches. In conclusion, this review paper aims to contribute to a comprehensive understanding of hypersexual disorder and its implications for aff ected individuals and society. By exploring the multifaceted aspects of the condition, this review seeks to provide insights into eff ective treatment approaches and inspire further research in the study of hypersexual disorder. Introduction Hypersexual disorder, also known as compulsive sexual behavior or sex addiction, has become an area of increasing interest and concern in the ields of psychology, psychiatry, and sexology.De ined by a pattern of intense and recurrent sexual fantasies, urges, or behaviors that signi icantly interfere with an individual's daily life and overall well-being, hypersexual disorder poses unique challenges for both affected individuals and healthcare professionals [1]. 
Despite its clinical signi icance, the hypersexual disorder remains a controversial and complex topic.The absence of standardized diagnostic criteria in major classi ication systems like the DSM-5 and ICD-11 has contributed to ongoing debates about its classi ication and treatment.As a result, individuals struggling with hypersexual behaviors may face challenges in obtaining appropriate recognition and support for their condition.https://doi.org/10.29328/journal.apps.1001044exploration of hypersexual disorder, examining its de inition, conceptualization, etiology, co-occurring conditions, effects on mental and physical health, assessment, treatment approaches, cultural and ethical considerations, and future research directions.By synthesizing information from existing literature and research, this review aims to contribute to a deeper understanding of hypersexual disorder and its implications for affected individuals and society. The irst section of this review will delve into the de inition and conceptualization of hypersexual disorder, exploring the evolution of the term and its current status in diagnostic classi ications.This will set the foundation for understanding the complexities of identifying and diagnosing hypersexual disorder as a mental health condition [2]. Subsequently, the review will explore the potential etiology and risk factors contributing to the development of hypersexual disorder.A comprehensive understanding of the underlying factors can aid in tailoring effective treatment approaches that address the root causes of the condition. The section on co-occurring conditions will investigate the common comorbidities associated with hypersexual disorders, including mood disorders, anxiety disorders, substance use disorders, and personality disorders.Identifying and addressing these comorbid conditions are crucial for providing holistic care and improving treatment outcomes [3]. Furthermore, the review will explore the effects of hypersexual disorder on an individual's mental and physical health.Understanding the psychological and physiological impacts of compulsive sexual behaviors will highlight the urgency of early intervention and comprehensive treatment. The subsequent sections will focus on the assessment and diagnosis of hypersexual disorder, discussing the challenges and methodologies involved in identifying and evaluating affected individuals.Additionally, cultural and ethical considerations in the context of hypersexual disorder will be examined, emphasizing the importance of providing culturally sensitive and ethical care to individuals from diverse backgrounds. Finally, the review will discuss current and future treatment approaches for hypersexual disorder, analyzing evidence-based interventions and potential directions for further research.By integrating these insights, this review aims to contribute to the development of effective, patientcentered treatment strategies for individuals with hypersexual disorders. Overall, this comprehensive review seeks to shed light on the multifaceted nature of hypersexual disorder, its impact on mental and physical health, and the challenges and opportunities in its assessment and treatment.Ultimately, it is hoped that this review will foster a greater understanding and empathy for those affected by hypersexual disorder and inspire further research and advancements in this evolving ield of study [4]. 
Defi nition and conceptualization Hypersexual disorder, also known as compulsive sexual behavior or sex addiction, refers to a persistent pattern of intense and recurrent sexual fantasies, urges, or behaviors that signi icantly interfere with an individual's daily life, relationships, and overall well-being.The concept of hypersexual disorder has evolved over time and has been a subject of ongoing debate and research in the ields of psychology, psychiatry, and sexology [5]. Key features of hypersexual disorder Intensity and recurrence: Hypersexual individuals experience an overwhelming and frequent desire for sexual activity or engagement in sexual fantasies.These urges may be distressing and dif icult to control. Impaired control: Individuals with hypersexual disorder struggle to control their sexual behaviors, often engaging in excessive sexual activities despite negative consequences. Interference with life functioning: Hypersexual behaviors disrupt various areas of an individual's life, including work, school, relationships, and social activities. Escalation: Over time, hypersexual behaviors may escalate, leading individuals to seek more extreme or risky sexual experiences to achieve the same level of satisfaction. Distress and shame: Hypersexual individuals often experience distress, guilt, or shame related to their sexual behaviors, which may contribute to a cycle of compulsive sexual behaviors. Conceptualization and controversies The inclusion of hypersexual disorder as a formal psychiatric diagnosis has been a subject of controversy.While some experts argue that it represents a valid mental health condition, others question its classi ication as a separate disorder, suggesting that it may be better understood as a symptom of other psychiatric conditions. The Diagnostic and Statistical Manual of Mental Disorders (DSM) and the International Classi ication of Diseases (ICD) have not of icially recognized hypersexual disorder as a standalone diagnosis in their respective classi ications.However, the DSM-5 included "hypersexual disorder" as a condition for further research, acknowledging its clinical signi icance and the need for further investigation. Critics argue that the concept of hypersexual disorder lacks clear diagnostic criteria and standardized assessment tools, making it challenging to differentiate from normative sexual behaviors or other conditions such as obsessive-compulsive disorder (OCD) or impulse control disorders [6].https://doi.org/10.29328/journal.apps.1001044 Nonetheless, many mental health professionals and researchers continue to study and treat individuals who present with problematic sexual behaviors and distress related to their sexual activities, using the concept of hypersexual disorder to guide their understanding and interventions. Treatment The treatment of hypersexual disorder typically involves a combination of therapeutic approaches, such as cognitivebehavioral therapy (CBT), psychodynamic therapy, group therapy, and, in some cases, medication.The goal of treatment is to help individuals gain control over their sexual behaviors, address underlying psychological factors, and improve overall well-being and functioning [7]. It is important to note that individuals with concerns about their sexual behaviors should seek guidance from quali ied mental health professionals experienced in treating issues related to hypersexual behaviors or compulsive sexual behavior. 
Etiology and risk factors The etiology of hypersexual disorder, like many mental health conditions, is complex and likely involves a combination of biological, psychological, and sociocultural factors.While research on hypersexual disorder is still relatively limited compared to other mental health conditions, several potential risk factors have been identi ied: Neurobiological factors: Certain neurobiological mechanisms may contribute to the development of hypersexual disorder.Research suggests that alterations in brain regions involved in reward processing and impulse control, such as the prefrontal cortex, amygdala, and striatum, may play a role in the dysregulation of sexual behaviors [8]. Genetics: Genetic factors may also play a role in the susceptibility to hypersexual behaviors.Studies have indicated that individuals with a family history of impulse control disorders or addiction may have a higher risk of developing hypersexual disorder. Childhood adversity and trauma: Experiences of childhood adversity, including physical or sexual abuse, neglect, or other forms of trauma, have been linked to the development of hypersexual behaviors later in life.These experiences may contribute to emotional dysregulation and coping strategies involving sexual behaviors. Comorbid mental health conditions: Hypersexual disorder often coexists with other mental health conditions, such as mood disorders (e.g., depression, bipolar disorder), anxiety disorders, substance use disorders, and personality disorders.These conditions may interact and exacerbate hypersexual behaviors. Attachment style and relationship issues: Individuals with insecure attachment styles, particularly those characterized by fear of abandonment or rejection, may use hypersexual behaviors as a way to cope with emotional distress and seek validation or connection through sexual encounters [9]. Sexual trajectories and learning: Early exposure to explicit or deviant sexual material, coupled with reinforcing sexual experiences, may shape an individual's sexual preferences and lead to the development of hypersexual behaviors. Internet and technology use: The advent of the internet and widespread access to explicit sexual content online may contribute to the development of problematic hypersexual behaviors, such as compulsive pornography use and cybersex addiction. Sociocultural factors: Cultural norms, values, and societal attitudes toward sexuality can in luence an individual's perceptions and behaviors related to sex.Societies that are more permissive or sexually restrictive may impact how hypersexual behaviors are expressed and perceived. Substance use: Substance use, especially substances that lower inhibitions or increase libido, may be associated with increased engagement in hypersexual behaviors. It is essential to note that not all individuals with hypersexual behaviors will meet the criteria for a diagnosis of hypersexual disorder.Some people may engage in heightened sexual activity without distress or impairment, and such behaviors may be considered within the realm of normal human sexual expression. Further research is needed to fully understand the complex interplay of these factors in the development and maintenance of hypersexual disorder.Effective treatment and interventions for hypersexual disorder often involve addressing underlying psychological issues, building coping skills, and developing healthier ways to manage emotions and relationships [10]. 
Diagnosis and assessment Diagnosis and assessment of hypersexual disorder involve a comprehensive evaluation of an individual's sexual behaviors, distress level, and impairment in functioning.Given the complexity and controversial nature of hypersexual disorder, there are no standardized diagnostic criteria in major classi ication systems such as the DSM-5 or ICD-11.However, several tools and guidelines are commonly used by mental health professionals to assess hypersexual behaviors and their impact on an individual's life.The assessment process typically includes the following components [11]. Clinical interview: A thorough clinical interview is essential to gather information about the individual's sexual behaviors, history, and current concerns.The clinician will https://doi.org/10.29328/journal.apps.1001044explore the frequency and nature of sexual activities, triggers, and any distress or impairment caused by these behaviors. Self-report questionnaires: Various self-report questionnaires are used to assess hypersexual behaviors, sexual compulsivity, and the associated distress.Some commonly used scales include the Sexual Compulsivity Scale (SCS), the Hypersexual Behavior Inventory (HBI), and the Compulsive Sexual Behavior Inventory (CSBI). Assessment of comorbidities: Assessing and diagnosing any co-occurring mental health conditions, such as mood disorders, anxiety disorders, or substance use disorders, is crucial, as these conditions may in luence or be in luenced by hypersexual behaviors. Psychological assessment: Psychological assessments may be conducted to explore underlying emotional, interpersonal, and personality factors that contribute to hypersexual behaviors.This assessment helps in tailoring the treatment approach to address speci ic needs. Screening for childhood adversity and trauma: Given the potential link between childhood trauma and hypersexual behaviors, screening for a history of childhood adversity or trauma is an essential part of the assessment process. Behavioral observation: Observation of the individual's behavior and interactions may provide valuable insights into the presence and severity of hypersexual behaviors. Differential diagnosis: Clinicians must differentiate hypersexual behaviors from normative sexual behaviors, cultural differences in sexual expression, or other conditions with similar symptoms, such as bipolar disorder with hypersexuality or impulse control disorders. Functional impairment: Evaluating the extent of impairment in various life domains, including work, social relationships, and personal life, helps determine the level of distress and functional impairment caused by hypersexual behaviors. Duration and persistence: The assessment should explore the duration and persistence of hypersexual behaviors to distinguish transient or situational behaviors from a potential hypersexual disorder. Collaboration with other professionals: In some cases, the assessment may involve collaboration with other healthcare professionals, such as sex therapists, addiction specialists, or medical practitioners, to obtain a comprehensive understanding of the individual's condition [12]. 
It is crucial to approach the assessment of hypersexual disorder with sensitivity, empathy, and a non-judgmental attitude, as individuals may experience shame and distress related to their sexual behaviors.The assessment process serves as the foundation for appropriate diagnosis and the development of a personalized treatment plan that addresses the individual's speci ic needs and goals [13,14]. Comorbidity and co-occurring conditions Comorbidity refers to the coexistence of multiple medical or mental health conditions in an individual.In the case of hypersexual disorder, it is common for individuals to experience co-occurring conditions that can in luence the development, expression, or consequences of hypersexual behaviors.Identifying and addressing these co-occurring conditions are crucial for providing comprehensive and effective treatment.Some of the common comorbidities and co-occurring conditions of hypersexual disorder include [15]. Mood disorders: Hypersexual individuals may also experience mood disorders such as depression or bipolar disorder.Depressive symptoms, such as low self-esteem and feelings of hopelessness, may drive individuals to seek validation or pleasure through hypersexual behaviors.Additionally, hypersexuality can be a symptom of a manic episode in individuals with bipolar disorder. Anxiety disorders: Anxiety disorders, including generalized anxiety disorder, social anxiety disorder, and obsessive-compulsive disorder (OCD), may coexist with hypersexual behaviors.Compulsive sexual behaviors can be a way to cope with anxiety or alleviate distress. Substance use disorders: Substance use and hypersexual behaviors can become intertwined, leading to a cycle of addiction.Individuals with substance use disorders may engage in hypersexual behaviors while under the in luence of drugs or alcohol, which can further impair decision-making and self-control. Personality disorders: Certain personality disorders, such as borderline personality disorder, narcissistic personality disorder, or histrionic personality disorder, may be associated with hypersexual behaviors.These individuals may use sex as a means of seeking attention, validation, or control. Impulse control disorders: Hypersexual disorder shares similarities with other impulse control disorders, such as compulsive gambling or binge eating disorder.Individuals with poor impulse control may engage in hypersexual behaviors without considering the potential consequences. Childhood trauma and Post-Traumatic Stress Disorder (PTSD): A history of childhood trauma, including sexual abuse or other forms of abuse, may contribute to the development of hypersexual behaviors.Additionally, some individuals with PTSD may engage in hypersexual behaviors as a way to cope with trauma-related distress. Relationship issues: Problems in intimate relationships, such as communication dif iculties, attachment issues, or https://doi.org/10.29328/journal.apps.1001044 in idelity, can be both a cause and consequence of hypersexual behaviors. Body image and eating disorders: Some individuals with hypersexual behaviors may also struggle with body image issues or eating disorders, particularly when sex is used as a means of coping with body image-related distress. Other sexual disorders: Hypersexual disorders can cooccur with other sexual disorders, such as sexual dysfunction or sexual aversion disorder, which may further complicate the individual's sexual and emotional well-being. 
It is essential to conduct a comprehensive assessment to identify and address any co-occurring conditions in individuals with hypersexual behaviors.A holistic treatment approach that considers the interplay between hypersexual disorder and comorbid conditions is crucial for achieving successful outcomes and improving overall well-being.Treatment strategies may involve a combination of psychotherapy, medication, support groups, and specialized interventions for speci ic co-occurring conditions [16][17][18]. Eff ects on mental and physical health Hypersexual disorder can have signi icant effects on an individual's mental and physical health.The compulsive and intense sexual behaviors associated with hypersexual disorder can lead to a range of adverse consequences, both psychologically and physiologically.Some of the effects on mental and physical health include: Eff ects on mental health Emotional distress: Engaging in compulsive sexual behaviors can lead to feelings of guilt, shame, and self-loathing.Individuals with hypersexual disorder may experience emotional distress due to a lack of control over their behaviors and dif iculty in managing their sexual urges. Anxiety and depression: Hypersexual individuals may be more prone to developing anxiety and depression due to the negative emotional consequences of their behaviors.These mental health conditions can exacerbate the cycle of hypersexual behaviors as individuals may turn to sex as a coping mechanism. Relationship problems: Hypersexual behaviors can strain intimate relationships.Partners may feel betrayed, hurt, or emotionally disconnected, leading to con licts and instability in the relationship. Isolation and stigma: The stigma surrounding hypersexual behaviors may lead affected individuals to withdraw from social interactions and isolate themselves to avoid judgment and negative reactions from others. Impaired self-esteem: Persistent engagement in compulsive sexual behaviors can erode an individual's self-esteem and self-worth, contributing to feelings of inadequacy and self-doubt. Cognitive distortions: Individuals with hypersexual disorder may develop cognitive distortions, such as rationalizing their behaviors or minimizing the negative consequences of their actions, to justify continued engagement in hypersexual activities. Eff ects on physical health Risky sexual practices: Hypersexual individuals may engage in risky sexual behaviors, such as unprotected sex and multiple sexual partners, increasing the risk of sexually transmitted infections (STIs) and unintended pregnancies.Substance use and abuse: Some individuals with hypersexual disorder may turn to substances (e.g., drugs or alcohol) as a means of coping with the emotional distress caused by their behaviors, leading to substance abuse problems. Physical injuries: Engaging in risky sexual activities may result in physical injuries, such as bruises or abrasions. Neglect of personal health: The preoccupation with sexual activities can lead to neglect of personal hygiene and health, affecting overall physical well-being. Impact on intimate relationships: Hypersexual behaviors can lead to strained intimate relationships, which may, in turn, negatively impact the emotional and physical health of both partners. 
It is essential to recognize and address the potential mental and physical health consequences of hypersexual disorder to provide appropriate treatment and support for affected individuals.Comprehensive treatment approaches should address the underlying psychological factors, promote healthy coping strategies, and provide resources for managing the physical health implications of hypersexual behaviors.Therapy, support groups, and other evidencebased interventions can help individuals manage hypersexual behaviors and improve their overall well-being [19,20]. Treatment approaches The treatment of hypersexual disorder typically involves a comprehensive and multidimensional approach that addresses the underlying psychological factors, promotes behavioral change, and improves overall well-being.It is important to tailor the treatment to the individual's speci ic https://doi.org/10.29328/journal.apps.1001044needs, taking into account the severity of hypersexual behaviors, the presence of co-occurring conditions, and any related psychosocial issues.Some common treatment approaches for hypersexual disorder include [21]. Psychotherapy: Psychotherapy, also known as talk therapy, is a fundamental component of the treatment for hypersexual disorder.Different therapeutic modalities may be used, including: b.Psychodynamic therapy: Psychodynamic therapy explores the underlying unconscious motivations and unresolved con licts contributing to hypersexual behaviors.This approach helps individuals gain insight into the emotional roots of their compulsive behaviors. c. Mindfulness-based techniques: Mindfulness practices, such as mindfulness meditation or Acceptance and Commitment Therapy (ACT), can help individuals become more aware of their thoughts and impulses without judgment and learn to respond to them in a healthier way. Medication: In some cases, medication may be prescribed to address underlying mental health conditions that co-occur with hypersexual disorder.Antidepressants, mood stabilizers, or anti-anxiety medications may be used to manage symptoms of depression, anxiety, or mood luctuations. Support groups: Participation in support groups, such as Sex Addicts Anonymous (SAA) or Sex and Love Addicts Anonymous (SLAA), can be bene icial in providing a sense of community and mutual support for individuals struggling with hypersexual behaviors.Group settings also offer a safe space to share experiences, coping strategies, and recovery insights. Relapse prevention strategies: Developing relapse prevention strategies is crucial in managing hypersexual behaviors.This may involve identifying triggers and highrisk situations, learning coping skills to manage cravings, and creating a support network for times of vulnerability. Couples or family therapy: In cases where hypersexual behaviors have signi icantly impacted intimate relationships or family dynamics, couples or family therapy can be valuable in addressing communication issues, rebuilding trust, and fostering healthier relationships. Harm reduction approach: For individuals with severe and persistent hypersexual behaviors, a harm reduction approach may be employed.This involves reducing the negative consequences of the behaviors without necessarily eliminating them entirely.The goal is to minimize harm to the individual and others while working towards healthier sexual behaviors. 
Addressing co-occurring conditions: Given the high likelihood of comorbid mental health conditions, addressing and treating co-occurring disorders is essential for a successful treatment outcome. Education and psychoeducation: Educating individuals and their support systems about hypersexual disorder and related issues can help reduce stigma and provide a better understanding of the condition.Psychoeducation can empower individuals to take an active role in their treatment and recovery. Lifestyle changes: Encouraging positive lifestyle changes, such as regular exercise, healthy sleep habits, and stress reduction techniques, can contribute to overall well-being and support the recovery process. It is essential to work with quali ied mental health professionals experienced in treating hypersexual disorders to determine the most suitable treatment approach for each individual.The treatment plan should be personalized, lexible, and continuously reviewed to ensure the best possible outcomes.Recovery from hypersexual disorder is a gradual process that requires dedication, support, and a commitment to self-improvement [22][23][24]. Challenges in treatment Treatment for hypersexual disorder can be challenging due to various factors that may hinder successful outcomes.Some of the common challenges in treating hypersexual disorder include [25][26][27]: Stigma and shame: The stigma surrounding hypersexual behaviors and the perception of sex addiction as a moral failing rather than a mental health condition can discourage individuals from seeking help.Feelings of shame and embarrassment may prevent them from disclosing their behaviors to healthcare professionals or participating in treatment programs. Underreporting and denial: Individuals with hypersexual disorder may underreport the extent of their behaviors or deny having a problem, making it challenging for clinicians to accurately assess the severity of the condition and develop an appropriate treatment plan. Lack of recognized diagnostic criteria: The absence of formal recognition of hypersexual disorder as a stand-alone diagnosis in major classi ication systems like the DSM-5 can create challenges in de ining clear diagnostic criteria and standardizing treatment approaches.https://doi.org/10.29328/journal.apps.1001044 Comorbidity with other conditions: Hypersexual disorder often coexists with other mental health conditions, such as mood disorders, anxiety disorders, or substance use disorders.Treating the multiple comorbidities simultaneously can complicate the treatment process and require a comprehensive approach. Dif iculty in controlling urges: The intense and compulsive nature of hypersexual behaviors can make it challenging for individuals to control their sexual urges, leading to relapses and setbacks during the treatment process. Trigger management: Identifying and managing triggers that lead to hypersexual behaviors can be dif icult, as triggers can be diverse and highly individualized.Learning to cope with triggers effectively is essential in preventing relapse. Lack of evidence-based treatments: As the understanding of hypersexual disorder is still evolving, there is a limited number of evidence-based treatments speci ically designed for this condition.This lack of standardized treatment protocols can make it challenging for clinicians to determine the most effective interventions. 
Resistance to treatment: Some individuals with hypersexual disorder may resist treatment due to a fear of change, a desire to maintain their current lifestyle, or feelings of hopelessness about their ability to change.

Treatment reluctance from partners: Partners of individuals with hypersexual disorder may be resistant to participating in couples or family therapy, which can hinder efforts to rebuild trust and improve relationship dynamics.

Relapse and recurrence: Hypersexual behaviors can be difficult to overcome fully, and relapses may occur during the treatment process. Relapses can be discouraging for both the individual and the treatment team.

Addressing these challenges requires a patient-centered and empathetic approach from mental health professionals. Individualized treatment plans, open communication, and ongoing support are essential components in overcoming the obstacles and fostering successful treatment outcomes for individuals with hypersexual disorders. Additionally, ongoing research into the nature of hypersexual disorder and its effective treatment options will be instrumental in addressing these challenges and improving therapeutic approaches [28].

Cultural and ethical considerations

Cultural and ethical considerations play a significant role in the assessment, diagnosis, and treatment of hypersexual disorder. The cultural context in which hypersexual behaviors are understood and the ethical implications related to diagnosis and treatment should be carefully considered. Some important cultural and ethical considerations include [29]:

Cultural norms and values: Cultural norms regarding sexuality and sexual behaviors vary widely across different societies. Some cultures may be more permissive about sexual expression, while others may be more conservative or repressive. Clinicians must be culturally sensitive and avoid imposing their own cultural values when assessing and treating individuals with hypersexual disorders.

Stigma and shame: In certain cultures, discussing sexual behaviors or seeking help for sexual issues may be highly stigmatized. The shame associated with hypersexual behaviors may prevent individuals from seeking treatment or disclosing their concerns to healthcare professionals.

Diagnosing hypersexual disorder: The absence of standardized diagnostic criteria for hypersexual disorder in major classification systems like the DSM-5 raises ethical considerations in diagnosing the condition. Clinicians must carefully evaluate the presence and impact of hypersexual behaviors on an individual's life while considering potential comorbidities or underlying issues.

Informed consent: When providing treatment for hypersexual disorder, obtaining informed consent from the individual is crucial. This includes explaining the treatment process, potential risks and benefits, and respecting the individual's right to make decisions about their care.

Confidentiality and privacy: Given the sensitive nature of hypersexual behaviors, maintaining confidentiality and privacy is paramount. Clinicians should ensure that information shared during assessments and treatment remains confidential and is only disclosed with the individual's consent or as required by law.

Partner and family involvement: Involving partners or family members in the treatment process may be necessary for addressing relationship issues and rebuilding trust. However, clinicians must respect the individual's autonomy and seek consent before involving others in the treatment.
Avoiding pathologization: Cultural and ethical considerations extend to avoiding pathologizing normative sexual behaviors or expressions. It is essential to differentiate between hypersexual disorder and culturally accepted sexual practices.

Cultural competency and sensitivity: Healthcare professionals should strive to enhance their cultural competence and sensitivity in understanding and addressing the needs of individuals from diverse cultural backgrounds. This includes being aware of cultural biases and avoiding generalizations.

Collaborative approach: Collaborating with cultural experts, therapists with specialized knowledge, or community leaders can be beneficial in navigating cultural considerations and providing effective treatment.

Respect for autonomy: Respecting the autonomy and self-determination of individuals with hypersexual disorders is crucial. Treatment plans should be collaborative and developed with the individual's active involvement and consent.

Considering cultural and ethical factors is essential in providing ethical and effective care for individuals with hypersexual disorder. Healthcare professionals should aim to create a safe and non-judgmental environment that respects individual differences and promotes understanding and acceptance of diverse sexual expressions and identities [30][31][32].

Future directions/recommendations

Future directions in the study and management of hypersexual disorder encompass several key areas that can enhance our understanding of this complex condition and improve treatment approaches. Some potential future directions include [33]:

Standardization of diagnostic criteria: Continued research is needed to establish clear and standardized diagnostic criteria for hypersexual disorder in major classification systems like the DSM and ICD. This would facilitate accurate diagnosis and consistency in the identification of affected individuals.

Prevalence and epidemiological studies: Conducting large-scale prevalence and epidemiological studies will help determine the global prevalence of hypersexual disorder and its impact on different populations, shedding light on the public health implications of the condition.

Longitudinal studies: Long-term studies tracking individuals with hypersexual disorder over extended periods can offer insights into the natural course of the condition, factors influencing its trajectory, and treatment outcomes.

Neurobiological research: Investigating the neurobiological underpinnings of hypersexual disorder can deepen our understanding of its mechanisms and may lead to the development of targeted pharmacological interventions.

Development of evidence-based treatments: The creation and evaluation of evidence-based treatments specifically tailored to hypersexual disorders are essential. Research should explore the effectiveness of different therapeutic modalities, including cognitive-behavioral therapy, psychodynamic therapy, and group interventions.

Co-occurring conditions: Further research is needed to elucidate the relationship between hypersexual disorder and co-occurring mental health conditions, such as mood disorders, anxiety disorders, and substance use disorders. Understanding these interactions can inform more comprehensive treatment strategies.

Integrating technology: Leveraging technology, such as mobile applications and virtual therapy platforms, may enhance accessibility and engagement in treatment for individuals with hypersexual disorders.
Addressing childhood trauma: Investigating the role of childhood trauma and early adverse experiences in the development of hypersexual behaviors can lead to targeted prevention and intervention efforts.

Cross-cultural studies: Conducting cross-cultural studies can help identify cultural variations in the presentation and impact of hypersexual disorder. It will enable clinicians to provide culturally competent care and consider culturally specific factors in assessment and treatment.

Treatment outcome measures: Developing reliable and validated outcome measures to assess treatment efficacy and long-term outcomes will assist in evaluating the effectiveness of interventions and refining treatment approaches.

Integration of clinical and research efforts: Establishing collaborations between researchers and clinicians can facilitate the integration of research findings into clinical practice, promoting evidence-based care for individuals with hypersexual disorders.

Public education and awareness: Raising public awareness about hypersexual disorder can help reduce stigma and increase early recognition and access to appropriate treatment.

By pursuing these future directions, researchers, clinicians, and policymakers can collectively contribute to the advancement of knowledge, identification, and treatment of hypersexual disorders, ultimately improving the quality of life for affected individuals and their loved ones [34][35][36][37][38][39].

Conclusion

This comprehensive review has provided a thorough exploration of hypersexual disorder, covering various aspects ranging from its definition and conceptualization to its etiology, co-occurring conditions, and effects on mental and physical health. The review has highlighted the importance of addressing cultural and ethical considerations in the assessment and treatment of hypersexual disorder to provide patient-centered and culturally sensitive care.

The examination of assessment and diagnosis challenges emphasizes the need for standardized criteria and validated assessment tools to improve diagnostic accuracy. Identifying and addressing co-occurring mental health conditions is critical in developing holistic treatment plans that target underlying issues contributing to hypersexual behaviors.

The effects of hypersexual disorder on an individual's mental and physical health underscore the urgency of early intervention and the need for evidence-based treatments to improve overall well-being. Integrating technology and fostering collaboration between researchers and clinicians can enhance treatment accessibility and effectiveness.

As research into hypersexual disorder continues to evolve, future directions in the field include establishing standardized diagnostic criteria, conducting prevalence studies, investigating neurobiological mechanisms, and refining culturally competent treatment approaches. Increased public awareness and education can help reduce stigma and improve support for individuals with hypersexual disorders.
In conclusion, this review highlights the importance of a holistic and multidimensional approach to understanding and addressing hypersexual disorders. By synthesizing current knowledge and identifying potential future directions, this review aims to contribute to advancements in diagnosis, treatment, and support for individuals struggling with hypersexual behaviors. By fostering further research and integrating evidence-based interventions, we can work towards enhancing the quality of life for individuals with hypersexual disorders and providing them with the compassionate and effective care they deserve.
Preoperative cancer antigen-125 levels as a predictor of recurrence in early-stage endometrial cancer

SUMMARY

OBJECTIVE: Endometrial cancer is the most common gynecological cancer in developed countries, with a majority of cases being low-grade endometrioid endometrial cancer. Identifying risk factors for disease recurrence and poor prognosis is critical. This study aimed to assess the correlation between preoperative cancer antigen-125 levels and disease recurrence in early-stage endometrioid endometrial cancer patients.

METHODS: The study was a retrospective analysis of 217 patients diagnosed with endometrioid endometrial cancer who underwent surgical treatment at a university-affiliated tertiary hospital between 2016 and 2022. Patients were divided into two groups based on their preoperative cancer antigen-125 levels and compared with respect to clinicopathological findings and disease recurrence. Disease-free survival rates were calculated, and logistic regression analysis was performed to determine independent factors affecting disease-free survival.

RESULTS: The mean age of patients was 61.59±0.75 years, and the mean follow-up time was 36.95±1.18 months. The mean cancer antigen-125 level was 27.80±37.81 IU/mL. The recurrence rate was significantly higher in the group with elevated cancer antigen-125 levels (p=0.025). Disease-free survival was lower in patients with elevated cancer antigen-125 compared with those with normal levels (p=0.005). Logistic regression analysis revealed that elevated cancer antigen-125 levels were associated with disease recurrence (OR: 3.43, 95%CI 1.13–10.37, p=0.029).

CONCLUSION: The findings of this study suggest that preoperative cancer antigen-125 levels can be used as a predictor of disease recurrence in early-stage endometrioid endometrial cancer patients. Cancer antigen-125 levels may be a useful tool for risk stratification and patient management in endometrial cancer.

INTRODUCTION

Endometrial cancer (EC) is the most common gynecological cancer in developed countries 1 . The most common type of EC is endometrioid EC (EEC), which accounts for 75-80% of cases. Most EECs are low grade (grade 1-2), diagnosed at an early stage, and have a good prognosis 2 , but up to 7% of patients may still be at risk of disease-related mortality 3 . Considering that the majority of patients are in the low-grade EEC group, the number of disease-related deaths can be considered quite high. Additionally, recurrence in early-stage EC can be as high as 15-20% 4 . Therefore, it is crucial to identify patients at risk of poor prognosis before surgery.

Patients' age, tumor size, tumor grade, histological type, and lymphovascular space involvement (LVSI) have been identified as risk factors for poor prognosis in ECs 5,6 . Preoperative cancer antigen-125 (CA-125) levels have also been linked to disease recurrence 7 . While an elevation in serum CA-125 levels has been found to be correlated with advanced-stage EECs, its role in early-stage EECs is still a topic of debate [8][9][10] .

In this study, our goal was to assess the correlation between preoperative CA-125 levels and disease recurrence in early-stage EEC patients.
METHODS

Patients diagnosed with EC who underwent surgical treatment at a university-affiliated tertiary hospital between January 2016 and March 2022 were analyzed retrospectively after obtaining approval from the local ethics committee (2011-KAEK-25 2022-11/06). This study was conducted in accordance with the Declaration of Helsinki. The sociodemographic characteristics, preoperative CA-125 levels, surgery reports, histopathology results, and postoperative follow-up data of the patients were reviewed from electronic/archival files. A total of 217 patients were examined. Patients with non-endometrioid type adenocarcinomas (n=11), high-stage endometrioid cancers (n=13), no preoperative CA-125 levels (n=32), other concurrent cancers, pelvic endometriosis or adenomyosis or adnexal mass (n=9), follow-up examinations at another center (n=18), previous chemo-radiotherapy (n=9), or incomplete data (n=27) were excluded from the study. The final study population (n=167) included International Federation of Gynecology and Obstetrics (FIGO) stage 1-2 EEC patients diagnosed and operated on for the first time at our hospital.

Patients were divided into two groups based on their preoperative CA-125 levels: those with normal levels (<35 IU/mL) and those with elevated levels (≥35 IU/mL), and were compared against clinicopathological findings and disease recurrence.

All the patients underwent surgical staging according to the FIGO classification 11 , which included total hysterectomy and bilateral salpingo-oophorectomy. Selective systemic pelvic-paraaortic lymphadenectomy was performed based on intraoperative frozen section findings using Mayo Clinic criteria 12 . All the specimens were evaluated by gynecologic pathologists in our institution. The final histopathology reports included information on histological grade and type, myometrial invasion (MI), cervical invasion, LVSI, and lymph node metastasis status. The administration of adjuvant therapy was determined by a team of experts from multiple disciplines 13 . Recurrence was diagnosed by clinicians using physical examination and imaging reports. Disease-free survival (DFS) was defined as the time from surgery to the first recurrence of the disease.

Statistical analysis was conducted using SPSS version 23 (SPSS Inc., Chicago, IL, USA). The Shapiro-Wilk test was used to determine the normality of the variables. Non-parametric continuous data were compared using the Mann-Whitney U test, and categorical data were analyzed using the chi-square test. The Kaplan-Meier survival analysis was used to calculate DFS in patients based on their preoperative CA-125 levels. Logistic regression analysis was performed to identify independent factors associated with disease recurrence. A p<0.05 was considered statistically significant.

RESULTS

The mean age of patients was 61.59±0.75 years, with a range of 36-86 years. The mean BMI was 35.97±0.31 kg/m². The mean follow-up time was 36.95±1.18 months, with a range of 12-66 months. Complete surgical staging, including pelvic-paraaortic lymphadenectomy, was performed on 88.0% (n=147) of patients, while the remaining 12.0% (n=20) did not undergo lymphadenectomy. The mean number of lymph nodes removed was 49.37±0.94, with a range of 27-95 nodes. The mean preoperative CA-125 level was 27.80±37.81 IU/mL, with a range of 0.5-291.
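As an illustration of the statistical workflow described in the Statistical analysis paragraph above, the following minimal Python sketch runs the same family of tests on hypothetical data; the data frame, column names and the use of the scipy, lifelines and statsmodels libraries are assumptions made for the example and are not part of the original study, which used SPSS.

```python
# Minimal sketch of the analysis pipeline described above (hypothetical data, not study data).
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, chi2_contingency
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
import statsmodels.api as sm

# Hypothetical per-patient records: follow-up (months), recurrence (1 = yes),
# preoperative CA-125 group (1 = >= 35 IU/mL), LVSI status and age.
df = pd.DataFrame({
    "months":     [60, 48, 36, 24, 50, 18, 55, 40, 30, 45, 20, 15],
    "recurrence": [0,  0,  0,  1,  0,  1,  0,  0,  1,  0,  1,  1],
    "ca125_high": [0,  0,  0,  0,  0,  0,  1,  1,  1,  1,  1,  1],
    "lvsi":       [0,  0,  0,  0,  1,  1,  0,  0,  0,  1,  1,  1],
    "age":        [55, 62, 58, 66, 60, 70, 57, 64, 68, 63, 72, 69],
})
high, low = df[df.ca125_high == 1], df[df.ca125_high == 0]

# Mann-Whitney U test for a non-parametric continuous variable (here: age) between groups.
_, p_age = mannwhitneyu(high["age"], low["age"])

# Chi-square test for a categorical outcome (recurrence) between the CA-125 groups.
chi2, p_rec, _, _ = chi2_contingency(pd.crosstab(df["ca125_high"], df["recurrence"]))

# Kaplan-Meier estimates of disease-free survival and a log-rank comparison of the curves.
km_high = KaplanMeierFitter()
km_high.fit(high["months"], event_observed=high["recurrence"], label="CA-125 >= 35 IU/mL")
km_low = KaplanMeierFitter()
km_low.fit(low["months"], event_observed=low["recurrence"], label="CA-125 < 35 IU/mL")
lr = logrank_test(low["months"], high["months"],
                  event_observed_A=low["recurrence"], event_observed_B=high["recurrence"])

# Logistic regression for recurrence; exponentiated coefficients give odds ratios with CIs.
# (The study's model also included age, BMI, tumor grade and tumor size.)
X = sm.add_constant(df[["lvsi", "ca125_high"]])
fit = sm.Logit(df["recurrence"], X).fit(disp=0)
odds_ratios, ci = np.exp(fit.params), np.exp(fit.conf_int())
print(p_age, p_rec, lr.p_value, odds_ratios["ca125_high"], ci.loc["ca125_high"].values)
```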
In this study, 167 patients were evaluated, of which 125 (74.9%) had normal preoperative CA-125 values and 42 (25.1%) had elevated CA-125 levels. The demographic and clinical characteristics of the groups are presented in Table 1. The groups were comparable in terms of age, BMI, follow-up periods, and menopausal status (Table 1). No significant differences were observed between the groups with regard to tumor grade, MI, LVSI, and tumor stage (Table 1). Disease recurrence was significantly higher in the elevated CA-125 group compared with the normal CA-125 group (21.4 vs. 8.0%, p=0.025) (Table 1).

According to Kaplan-Meier analysis, DFS was significantly lower in the elevated CA-125 group compared with the normal CA-125 group (p=0.005) (Figure 1).

A logistic regression analysis was performed to identify factors associated with disease recurrence. The model included patient age, BMI, tumor grade, tumor size, LVSI, and elevated CA-125 levels. The model was significantly associated with disease recurrence (p<0.001, R²=0.22). LVSI positivity and elevated CA-125 levels were found to be significant independent prognosticators for disease recurrence (OR: 8.64, 95%CI 2.18-34.19, p=0.002 and OR: 3.43, 95%CI 1.13-10.37, p=0.029, respectively).

DISCUSSION

In this study, we found that patients with a preoperative CA-125 level ≥35 IU/mL had significantly higher rates of disease recurrence in EECs. The DFS in stage 1-2 EECs was found to be linked to preoperative CA-125 levels. Additionally, we determined that both an elevated preoperative CA-125 level and LVSI positivity were significant independent risk factors for disease recurrence in early-stage EECs.

The CA-125 antigen is a large transmembrane glycoprotein found in the cells of the pericardium, pleura, peritoneum, fallopian tube, and endometrial and endocervical tissues 14 . It is mostly used to monitor epithelial ovarian cancer 15 . Although there is evidence suggesting a possible association between CA-125 and histological grade, stage, lymph node metastases, MI, and cervical involvement in EC, the clinical utility of CA-125 as a marker for EC has yet to be established 13 .

Most studies of CA-125 and EC included patients with advanced-stage disease [8][9][10] . Therefore, CA-125, as an epithelial surface antigen, can be expected to be elevated in patients with advanced-stage disease and may be associated with disease recurrence. Limited studies on low-risk and early-stage EC patients have produced conflicting results, leading to a lack of clarity in the findings.

In a multicenter retrospective study, Kim et al. found a significant association between elevated CA-125 levels and poor survival rates in patients with FIGO stage 1-2 EC 16 . In a prospective study of low-grade EC patients (n=240), disease recurrence was significantly higher in patients with elevated preoperative CA-125 levels compared with those with normal levels (19.4% vs. 7.9%, p=0.028) 7 . Logistic regression analysis identified age, tumor grade, LVSI positivity, and CA-125 levels as significant factors affecting DFS 7 . However, another study failed to find a relationship between preoperative serum CA-125 levels and disease recurrence in EC 17 . This study focused on early-stage EC patients with endometrioid histology and found an association between CA-125 levels and disease recurrence in this group of patients.

Studies investigating the role of CA-125 levels in predicting EC prognosis have reported varying thresholds [16][17][18] . Chen et al.
established a cut-off level of 40 IU/mL for predicting disease relapse in stage 1 EC 18 . In another study, the cut-off values for CA-125 were determined to be between 15.3 and 22.9 IU/L for factors such as MI, cervical invasion, lymph node metastasis, LVSI, and disease recurrence 19 . We performed receiver operating characteristic (ROC) analysis to determine the CA-125 threshold for predicting disease recurrence and found that levels above 20.05 IU/mL had a specificity of 76.9% and a sensitivity of 68.4% for detecting recurrence risk (AUC: 0.714, 95%CI 0.59-0.83, p=0.002). However, this cut-off value was non-significant for other factors such as tumor grade, tumor size, MI, cervical invasion, and LVSI. Thus, we used 35 IU/mL as the cut-off in our study.

In EC patients, LVSI positivity is known to be an independent risk factor for disease recurrence 5,20 . However, a study by Bendifallah et al. failed to show a statistically significant relationship between LVSI and disease recurrence in low-risk EC patients (n=213), where only 10.4% were positive for LVSI 21 . The authors attributed the lack of significance to the low incidence of LVSI in the low-risk patient subset 21 . In this study, LVSI positivity was present in 16% of subjects, and we found a significant association between LVSI positivity and disease recurrence in early-stage EC.

In addition to LVSI, tumor size and MI depth are also considered risk factors for poor prognosis in EC 13,22 . However, the optimal tumor size for determining the risk of recurrence in low-risk EC is still unclear [23][24][25] . In a retrospective survival analysis of 720 patients, Ureyen et al. did not find a statistically significant difference in disease-free survival rates between patients with tumor size ≥35 vs. <35 mm (96.6 vs. 100%; p=0.102) 23 . In contrast, a multicenter study of 302 low-risk EC patients reported a significant difference in recurrence rates between patients with tumor size ≥35 vs. <35 mm (1 vs. 8%, p=0.006) 24 . Yet another study that used a cut-off of 2 cm for tumor size found no difference in recurrence-free survival rates of stage 1 EC patients (HR 0.702, 95%CI 0.302-1.629, p=0.41) 25 . In our study, we did not find any association between tumor size >2 cm and disease recurrence or CA-125 levels. In the field of EC, MI is considered a crucial factor in determining a patient's risk profile 13 . In a prospective study, Kim et al. found a significant association between high levels of CA-125 and a higher rate of MI >50% 7 . Our results revealed that patients with elevated CA-125 tended to have higher rates of MI >50%, but the difference was not statistically significant.

Various molecular changes, including genetic mutations, can play a significant role in influencing the prognosis of EC. Specifically, the current staging system for EC places a specific emphasis on certain genetic mutations, highlighting their importance among the myriad molecular alterations that impact the prognosis of EC 26 . Ongoing research in this field is shedding light on potential risk factors. For instance, Giordana et al.
have suggested that polyps characterized by the hyperexpression of MKI67 and BCL2 may pose a potential risk for EC 27 . Additionally, in a study involving women with polycystic ovary syndrome (PCOS), it has been discussed that the increased risk of endometrial hyperplasia and malignancy in PCOS may be linked to decreased CASP3 (Caspase-3) activity in these patients 28 . Further exploration of these molecular signatures holds the potential to deepen our understanding of the underlying mechanisms and facilitate the development of targeted preventive strategies in the context of EC.

This study has limitations, including its retrospective design and single-center data, as well as the absence of follow-up CA-125 levels. Additionally, not performing lymphadenectomy in all patients could result in an underestimation of the stage of EC, which is another limitation. Despite these limitations, the relatively large number of patients, the inclusion of only early-stage disease, the exclusion of adnexal masses as they may cause elevation of CA-125, and the similar demographic data between groups are the strengths of this study.

CONCLUSION

Preoperative elevation of CA-125 levels may predict a poor prognosis and decreased DFS in patients with early-stage EC. Therefore, preoperative evaluation of CA-125 can be used as an additional tool, alongside MI or tumor size, to determine the risk in these patients. However, further prospective studies are needed to validate these findings.

ETHICAL APPROVAL

The study was approved by the Local Ethics Committee (2011-KAEK-25 2022-11/06). This study was conducted in accordance with the Declaration of Helsinki.

Figure 1. Kaplan-Meier survival analyses for disease-free survival in patients according to preoperative serum cancer antigen-125 levels. DFS: disease-free survival; m: months.

Table 1. Baseline characteristics of patients according to preoperative cancer antigen-125 level. Values are given as median (min-max) or number (%), unless otherwise specified; *values are given as mean±SD. Mann-Whitney U test or chi-square test was performed; p<0.05 was significant. Y: years; BMI: body mass index; m: months; G: grade; MI: myometrial invasion; LVSI: lymphovascular space involvement; FIGO: International Federation of Gynecology and Obstetrics.
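The Discussion above reports a ROC-derived CA-125 threshold of 20.05 IU/mL with an AUC of 0.714. As a purely illustrative sketch of how such a cut-off can be located, the following Python snippet applies Youden's J statistic to a ROC curve computed with scikit-learn; the data values, variable names and library choice are assumptions for the example and do not reproduce the study's SPSS-based analysis.

```python
# Illustrative derivation of a biomarker cut-off from a ROC curve (hypothetical data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical preoperative CA-125 values (IU/mL) and recurrence status (1 = recurrence).
ca125 = np.array([12.0, 18.5, 22.9, 35.4, 60.2, 15.1, 40.7, 27.8, 55.0, 9.6, 31.2, 24.3])
recurrence = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1])

fpr, tpr, thresholds = roc_curve(recurrence, ca125)
auc = roc_auc_score(recurrence, ca125)

# Youden's J = sensitivity + specificity - 1 = TPR - FPR; pick the threshold maximizing J.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}")
print(f"Cut-off = {thresholds[best]:.2f} IU/mL "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```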
Importance of stress-response genes to the survival of airborne Escherichia coli under different levels of relative humidity

Other than the needs for infection control to investigate the survival and inactivation of airborne bacterial pathogens, there has been a growing interest in exploring bacterial communities in the air and the effect of environmental variables on them. However, the innate biological mechanism influencing the bacterial viability is still unclear. In this study, a mutant-based approach, using Escherichia coli as a model, was used to prove the concept that common stress-response genes are important for airborne survival of bacteria. Mutants with a single gene knockout that are known to respond to general stress (rpoS) and oxidative stress (oxyR, soxR) were selected in the study. Low relative humidity (RH), 30–40%, was more detrimental to the bacteria than high RH, >90%. The log reduction of ∆rpoS was always higher than that of the parental strain at all RH levels, but ∆oxyR had a higher log reduction than the parental strain at intermediate RH only. ∆soxR had the same viability compared to the parental strain at all RH levels. The results hint that although different types and levels of stress are produced under different RH conditions, stress-response genes always play a role in the bacterial viability. This study is the first reporting the association between stress-response genes and viability of airborne bacteria.

Electronic supplementary material: The online version of this article (doi:10.1186/s13568-017-0376-3) contains supplementary material, which is available to authorized users.

Introduction

Other than the needs for infection control to investigate the survival and inactivation of airborne bacterial pathogens, there has been a growing interest in exploring bacterial communities in the air and the effect of environmental variables on them (Franzetti et al. 2011; Tang 2009; Mohr 2007; Sun and Ariya 2006). Various bioaerosol studies have been conducted to determine the survival rate of airborne bacteria under different conditions to explain the bacterial diversity, predict the risk of airborne disease transmission and identify appropriate infection-control strategies (Parienta et al. 2011; Thompson et al. 2011; Cox 1986). However, very little knowledge has been accumulated regarding the innate biological mechanism influencing the bacterial viability. Stress response is one of the major mechanisms to help bacteria to overcome harsh environmental conditions. The genetic response and regulation of bacteria subjected to different types of environmental stress, such as oxidative stress, dehydration and cold stress, have been explored in many media such as water, soil and food, but never in an airborne context (Cabiscol et al. 2010; Chung et al. 2006). Understanding the stress response mechanism could provide a new biotechnology and engineering target to predict and control infection risks and facilitate the application of bioaerosol techniques to other fields [e.g. cloud condensation and climate change (Sun and Ariya 2006)]. As reported in some previous studies (Krumins et al. 2014; Dimmick et al. 1975), metabolic activities have been detected in bacterial aerosols. We hypothesize that stress-response genes also play a role to help the survival of airborne bacteria as in other environmental media. The aim of this study is to examine some common stress-response genes in Escherichia coli to test this hypothesis. We selected several mutants, each with a single gene knockout.
These genes are known to respond to general stress (rpoS) and oxidative stress (oxyR, soxR), in ways that may be relevant to bioaerosol survival, according to our literature review (Parienta et al. 2011; Tang 2009; Mohr 2007; Cox and Baldwin 1967). By comparing the extent of the log reduction in bacterial survival between the mutant and the parental strain, we can identify the genes that are associated with airborne viability. Relative humidity (RH) is the most widely studied environmental factor affecting bioaerosol evaporation and survival (Parienta et al. 2011; Dunklin and Puck 1948) and was investigated in this study.

Bacterial strains

Escherichia coli was selected as a model bacterium due to its extensive use in bioaerosol and other stress response studies. E. coli BW25113 (the parental strain) and its isogenic deletion mutants were purchased from the Coli Genetic Stock Center (CGSC, Yale University, USA) (Baba et al. 2006) (Table 1).

Bacterial culture

Fresh cultures of E. coli and its mutants were grown in Luria-Bertani medium (Affymetrix Inc., USA) at 37 °C to stationary phase for 16 h with constant shaking at 150 rpm. Stationary phase was determined by using growth curves. Our own and previous studies showed that exponential phase bacteria died significantly during airborne suspension, so they could not be examined in this study. Next, the bacterial cells were harvested by centrifugation at 3000×g for 7 min, then washed and re-suspended in phosphate buffer saline (PBS, pH 7.4) and transferred to a six-jet Collison nebulizer (BGI Inc., USA) for nebulization at 20 psi. The reason for suspending and aerosolizing the bacteria in PBS rather than directly from the culture media is to minimize the variation and unknown composition of the culture media. It is a general procedure applied in bioaerosol studies.

Survival during airborne suspension

To test the bioaerosol survival, each bacterial suspension was nebulized for 3 min (N0) in the nebulizer at room temperature, 20 ± 2 °C, and the aerosols generated were suspended in a cylindrical chamber (diameter × height: 50.8 cm × 58 cm, volume: 87 L). The air temperature in the chamber was the same as the room temperature, and RH was adjusted by either spraying sterile water or purging dehumidifying air into the chamber to achieve low (30-40%), intermediate (40-60%) and high (>90%) levels of RH. The temperature and RH of the chamber were measured by a digital hygrometer. After 30 min of airborne suspension, the bacterial cells were sampled on a 0.22 μm mixed cellulose ester filter (Advantec, Japan) at a flow rate of 28 L/min for 3 min (N30). Immediately after sampling, the filter was placed into 5 mL of PBS and vigorously shaken in a vortex for 30 s to elute the deposited bacteria. Both culturable and DNA counts of the collected bacteria were analyzed to determine the final bacterial concentration. The culturability of the bacterial cells was determined by the plate-count method (spreading the sample on tryptone soy agar (TSA) and incubating at 37 °C for 24 h) and the DNA counts by quantitative polymerase chain reaction (qPCR). Bacterial DNA was extracted using a QIAamp DNA Mini Kit (Qiagen, Germany) following the manufacturer's protocol. The concentration of the extracted DNA samples was determined by qPCR using a QuantiNova™ SYBR® Green PCR Kit (Qiagen, Germany) with the forward primer 784 (5′-GTG TGA TAT CTA CCC GCT TCG C-3′) and the reverse primer 866 (5′-AGA ACG GTT TGT GGT TAA TCA GGA-3′).
These primers bind to the uidA gene, which is specific to E. coli and thus used in E. coli determination (Fram and Obst 2003). The thermocycling program of the AB StepOne RT-PCR System (AB, USA) consisted of an initial activation cycle at 95 °C for 2 min, followed by 40 cycles of denaturation at 95 °C for 5 s and combined annealing/extension at 60 °C for 10 s. The E. coli BW25113 culture was used to set a standard calibration curve. To account for the potential loss of bioaerosols during aerosolization and sampling, a normalized survival ratio (N) was calculated as shown in Eq. 1. Reagents and buffers used in the study were autoclaved to eliminate DNase contamination.

Table 1. E. coli deletion mutants used in the study and the functions of the deleted genes: rpoS, master regulator of the general stress response in E. coli; in addition, rpoS transcribes a significant fraction of genes related to sugar and polyamine metabolism in response to cellular stresses and in nucleic acid synthesis and modification (Eisenstark et al. 1996). JW3933-3 (CGSC 12,039), oxyR, "oxidative stress regulator", the transcriptional dual regulator for the expression of antioxidant genes in response to oxidative stress, in particular elevated levels of hydrogen peroxide (Kullik et al. 1995). JW4024-1 (CGSC 10,892), soxR, "superoxide response protein", negatively autoregulated and controlling the transcription of the regulon involved in defense against redox-cycling drugs (Demple 1996).

To assess the change in the airborne bacterial survival, the log reduction between the normalized culturable bacterial count before (N0) and after 30 min (N30) of aerosolization was calculated as shown in Eq. 2:

Log reduction = log10(N0) − log10(N30)   (2)

Effect of nebulization and filter sampling on the bacterial survival

The survival percentage of the bacteria was determined before and after nebulization by using the plate-counting method to prove that nebulization did not inactivate the bacteria. For filter sampling, two experiments were conducted to support that air filtration did not conceal the effect of airborne suspension on the viability of the bacteria. The details of the methods and the results are described in Additional file 1: Figures S1, S2 and S3.

Statistical analysis

The log reduction in bacterial survival under each RH condition was compared using one-way analysis of variance (ANOVA) with Duncan's post hoc test (SPSS v. 23) in order to determine whether a particular gene deletion makes a difference in the bacterial survival as compared to the parental strain aerosolized under the same condition. The difference between means with a p value lower than 0.05 (p < 0.05) was regarded as statistically significant. The same analysis was also conducted by grouping all mutants at different RH conditions together in one model (Additional file 1: Figures S4, S5 and S6). This analysis showed the overall mutant comparison.

Results

The process of aerosolizing E. coli from the liquid medium into the air is detrimental and causes a significant loss in bacterial viability. Although a high RH condition (Fig. 1) preserved the bacteria the most compared to the intermediate (Fig. 2) and low RH (Fig. 3), the log reduction in survival of the parental strain at high RH still reached about 0.5 log (Fig. 1), which is equivalent to less than 32% of the bacteria surviving the aerosolization process. At high RH, only ∆rpoS had a lower survival (1.5 log reduction) than the parental strain (0.5 log reduction), and both ∆oxyR and ∆soxR had the same survival as the parental strain.
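As a quick arithmetic check on the log-reduction figures quoted in this and the following paragraphs, the short Python sketch below converts a log10 reduction (Eq. 2) into the corresponding percentage of cells that remain culturable; the function names are illustrative and the listed values are simply those quoted in the text.

```python
import math

def log_reduction(n0, n30):
    """Eq. 2: log10 reduction between the normalized counts before (N0) and after 30 min (N30)."""
    return math.log10(n0) - math.log10(n30)

def percent_survival(log_red):
    """Percentage of cells still culturable after a given log10 reduction."""
    return 100.0 * 10.0 ** (-log_red)

# Values quoted in the Results: 0.5 log is roughly 32% survival (parental strain, high RH),
# 0.7 log roughly 20% (parental, intermediate RH), 1.3 log roughly 5% (parental, low RH),
# 1.5 log roughly 3% (delta-rpoS, high RH) and 2.5 log roughly 0.32% (delta-rpoS, low RH).
for lr in (0.5, 0.7, 1.2, 1.3, 1.5, 1.9, 2.5):
    print(f"{lr:.1f} log reduction -> {percent_survival(lr):.2f}% survival")
```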
At intermediate RH, at which the RH decreased from 90% to 40-60%, both ∆rpoS and ∆oxyR exhibited a higher log reduction in survival than the parental strain (Fig. 2). The log reduction of the parental strain and ∆rpoS was 0.7 log and 1.9 log, respectively. This means approximately 94% lower survival of ∆rpoS compared to the parental strain under this RH. ∆oxyR had a log reduction of 1.2 log, which corresponds to 68% lower survival of this mutant than the parental strain. This result also demonstrates that missing the rpoS gene is more harmful to the bacteria than missing the oxyR gene. Again, the log reduction of ∆soxR was similar to that of the parental strain at intermediate RH.

(Figure caption: Temperature: 20 ± 2 °C. Error bars represent the standard deviation of replicates (n = 3). The log reduction of different mutants was statistically analyzed by one-way ANOVA; grouping was conducted with Duncan's post hoc test, and the letters above the bars represent different groupings.)

When the RH level was further adjusted to less than 40%, there was a significant loss in the bacterial viability across all the tested bacteria (Fig. 3). The log reduction of the parental strain went up to above 1.3 log, i.e. only 5% of the bacteria from the liquid medium survived in the air. Interestingly, although the decrease in RH from an intermediate level (40-60%) to a low level (30-40%) was less than that from a high level (90%) to an intermediate level, the change in the viability of the parental strain was greater between intermediate and low RH than between high and intermediate RH; the log reduction was 0.5 log at high RH, 0.7 log at intermediate RH and 1.3 log at low RH. The survival of airborne ∆rpoS was barely detectable at low RH. The log reduction of ∆rpoS at low RH was the highest among all the bacteria across all the RH conditions (about 2.5 log, i.e. only 0.32% survival after aerosolization). Although the oxyR gene was shown to be important for bacterial survival at intermediate RH, it was no longer important at low RH, as ∆oxyR had the same viability as the parental strain. Finally, ∆soxR still showed no effect on the bacterial viability at low RH compared to the parental strain.

General stress: rpoS gene

Most of the bioaerosol studies that investigated the effect of RH on bacterial viability were conducted decades ago by measuring some morphological or physiological changes of the bacteria as the end points (Cox and Baldwin 1967; Hess 1965; Bateman et al. 1961; Dunklin and Puck 1948). Various inactivation mechanisms and molecular targets were suggested: a reduction in RH inactivates bacteria by increasing their water loss; dehydration causes mechanical damage to the cell surface, ultimately killing the bacteria; and a reduction in RH increases oxygen diffusion into the bacteria and so increases the oxidative stress that could damage DNA and protein. This approach is mainly focused on the inactivation efficiency and bacterial damage, as if all that bacteria are doing in the air is waiting for something to destroy them. No study has yet investigated whether bacterial stress response mechanisms have an impact on the survival of airborne bacteria. Recent physical modeling approaches revealed the potential physical change of the bioaerosols during droplet evaporation (Parienta et al. 2011).
This result further hints at the extensive environmental variations and challenges that the bacteria need to overcome in order to maintain their viability in the air, such as the increasing osmotic stress, cold stress and solute toxicity in the droplet nuclei. The rpoS gene is one of the most well-studied and important master regulators of general stress-response genes. Various effector genes are directed by the rpoS regulator to initiate strategies to deal with stress, such as the oxidative-stress response (katE, katG), osmoregulation (otsA, osmY), DNA protection (dps) and DNA repair (recA, xthA) (Battesti et al. 2011; Eisenstark et al. 1996). ∆rpoS is the only tested mutant that showed a consistently higher log reduction than the parental strain at all RH conditions. This result indicates that rpoS plays a very important role in the survival of bioaerosols, and it is consistent with our observation that exponential phase E. coli, in which the rpoS gene is not yet expressed, was susceptible to airborne suspension. In environmental and infection control applications, this result also implies that if the bacteria encounter other stresses that trigger the expression of the rpoS gene before aerosolization [e.g. some environmental pollution, resource limitation, drug treatments, and disinfectants are known to induce the general stress response (Battesti et al. 2011; Chung et al. 2006; Eisenstark et al. 1996)], this general stress response may also improve the airborne viability of the bacteria.

Hess (1965) showed that the increase in oxidative stress caused by low RH, rather than dehydration alone, was the primary cause of cell death. Cox and Baldwin (1967) also verified that the oxygen content in the air determined the survival of bacteria at a low RH (40%), but had no effect at a high RH (90%). Our results are consistent with their finding. The absence of the oxyR gene did not affect E. coli survival at a high RH (>90%), only at an intermediate RH (40-60%). As oxyR responds to the oxidative stress caused by elevated levels of H2O2 (Kullik et al. 1995), this result suggests that bacteria suffer from oxidative stress during airborne suspension at intermediate RH, but the expression of the oxyR gene reduces the impact of this stress. Since the log reduction of ∆rpoS was higher than that of ∆oxyR at an intermediate RH, other stresses protected against by the rpoS gene co-existed with this oxidative stress. The survival of ∆soxR was the same as that of the parental strain at every RH level, indicating that soxR did not play a role in defending E. coli against oxidative stress in the air. We expected that the log reduction of ∆oxyR would be even greater at RH below 40% than at RH above 40% (i.e. assuming a higher oxidative stress level at a lower RH), but this was not the case; the log reduction in survival of ∆oxyR was the same as that of the parental strain at this RH. This result may imply that other types of stress more detrimental than H2O2-mediated oxidative stress were produced when RH fell below 40%. This postulation is supported by our finding that ∆rpoS had the highest log reduction at <40% RH.
The stress exerted at <40% RH was severe and required a response related to the rpoS gene rather than the oxyR gene to maintain the bacterial viability. By using specific mutants that have a known stress-response gene deleted, we could better study the nature of the stress under different RH. For instance, this study discovered that the damage caused by the oxidative stress produced in the atmosphere could be reduced by the oxyR gene, which responds to elevated levels of H2O2 in the bacteria (a more specific understanding of oxidative stress compared to previous studies). Several genes under the regulation of rpoS are responsible for the oxidative stress response, e.g. katE and katG (Battesti et al. 2011; Eisenstark et al. 1996). Double mutations of rpoS and oxyR may further reduce the bacterial survival. As shown in previous bioaerosol studies, temperature and RH conditions affect the survival of airborne bacteria (Tang 2009; Mohr 2007). Some researchers have proposed that adjusting indoor temperature and RH may reduce the viability of airborne bacteria as a low-cost infection control technology in health-care environments (Tang 2009; Mohr 2007). This study contributes some new knowledge to advance our understanding of the stress and stress-response genes relevant to different RH conditions that may help develop this infection control technology in the future.

Implications of the study

This study is the first to prove the concept that stress-response genes are also vital for bacterial survival in the air. Bacteria are subjected to oxyR-associated oxidative stress at intermediate RH and rpoS-associated general stress at all RH in the atmosphere. This study is important to verify and explain some of the observations reported in previous bioaerosol studies and demonstrates a new approach to explore the biological mechanisms associated with the viability of airborne bacteria. With the support from this study, more experiments can be designed to investigate the effect of the bacterial suspension solution on the stress response of different airborne bacteria. For instance, human body fluids contain proteins and many other biological solutes, which are much more complicated than PBS alone. Different solutions may create a different type and intensity of stress to the bacteria and/or affect the bacterial response to the same stress. Similarly, we used E. coli as a model in this study, but future studies can look at airborne bacterial pathogens and bacteria that are relevant to various environmental processes.

Authors' contributions

KML and TWN designed the study, analysed and interpreted the data, and wrote the manuscript. TWN and WLC conducted the study and collected the data. All authors read and approved the final manuscript.
Social implications of rheumatic diseases

Social consequences of a disease constitute limitations in performing roles relating to working life as well as family and social life caused by the disease, mainly chronic. The aim of the study was to analyze the social consequences of rheumatic diseases in the aspect of disability pensions with respect to incapacity for work and quality of life. The occurrence of rheumatic diseases is related not only to an increased risk of different types of organic changes, but above all to disability. In Europe almost 50% of persons suffering from diseases of the musculoskeletal system who are currently unemployed were breadwinners. Nearly 60% of them received legal disability status. The loss of work ability is, among other things, the consequence of progressive disability. In Europe 40% of persons suffering from rheumatoid arthritis (RA) had to stop working due to the disease. Most of the persons diagnosed with RA were of working age. This results in a decrease in the quality of life as well as economic difficulties (decreased incomes and increased disease-related costs). In Poland the results of the analysis of Social Insurance Institution (ZUS) data on first-time disability certificates issued for the purpose of disability pensions in 2014 showed that the incapacity for work was caused by diseases relating to general health condition (65.5%). Diseases of the musculoskeletal system were the cause of partial inability to work of 21.6% of persons who received a disability pension for the first time (as many as 5,349 certificates were issued). Early diagnosis and implementation of effective treatment are the necessary conditions for a patient to sustain activity, both professional and social, which is of crucial importance to reduce the negative effects of the disease.

Introduction

Rheumatic diseases are chronic and progressive. They cause damage to the locomotor system and lead to patient disability [1]. These diseases significantly reduce the quality of life of the patient [1,2]. Rheumatic complaints of the locomotor system are common and affect around 30-40% of the European population [3]. Estimates indicate that in Poland up to 400 000 persons were treated at hospitals in 2014 due to both inflammatory and non-inflammatory diseases of the joints included in the International Statistical Classification of Diseases and Related Health Problems (ICD-10), from code M00 to M99 [4]. The most frequently occurring inflammatory rheumatic diseases include rheumatoid arthritis and spondyloarthropathies. Connective tissue diseases, such as Sjögren's syndrome, systemic lupus erythematosus, scleroderma or dermatomyositis, occur less often [5][6][7][8].

Social and health outcomes of a disease are the limitations in performing roles relating to working life, family and social life. They are caused by the disease, mainly chronic. These limitations may be temporary or permanent. Disability as a result of the chronic process of the disease or injury is a particular type of social effect. Social implications of the disease can be analysed in the following terms:

• physical and biological: as limitations in performing regular life functions,

• professional: meaning limitations in the ability to work or complete incapacity for work,

• legal: acquisition of entitlement to benefits defined in relevant legal acts, e.g. disability pensions, sickness benefits [9].

In the case of rheumatic diseases, multiple organ failures, which often lead to death, are a major consequence.
The inability to function on the labour market (the loss of work ability) is also a common implication [10]. In many cases patients suffering from rheumatoid arthritis (RA), ankylosing spondylitis (AS) or psoriatic arthritis have to stop working and rely on a disability pension. This situation constitutes a risk of impoverishment of these patients. According to the data of the Central Statistical Office of Poland (GUS), the extreme poverty rate in the group of pensioners was 12.5% in 2014, whereas it was significantly lower in the general population, at 7.4% [11]. It is worth underlining that inflammatory rheumatic diseases also affect children [12]. Chronic disease of a child significantly influences the life of parents, in particular the financial situation of their household. The results of the study carried out in Germany by Minden's team in a group of 369 children suffering from juvenile idiopathic arthritis (JIA) showed that the average total cost of JIA was estimated at EUR 4663 per patient per year. The highest costs were estimated for patients with seropositive polyarthritis and systemic arthritis (EUR 7876), whereas the lowest costs were estimated for patients with persistent oligoarthritis (EUR 2904). The costs of healthcare constituted 89% of the total costs, and the costs of drugs constituted nearly half of that value. A substantial part of the costs was borne by the child's family, with a mean out-of-pocket cost of 223 euros and a mean indirect cost due to time lost from work of 270 euros per year per family. The increase in costs corresponded to the increase in disease activity and pain, duration of the disease, and time between the occurrence of symptoms and the first visit to the rheumatologist. The authors of the study concluded that JIA constitutes a substantial economic burden, especially if the child is treated with biopharmaceuticals, which contribute to the increase in the total cost of the disease [13].

Disability

There are different definitions of a disabled person. The World Health Organisation defines three terms of disability: injury (impairment), functional disability (disability) and impairment or social disability (handicap). The first term means any deficiency or abnormal anatomy of organ structure as well as a deficiency or mental or physiological disorder of the body due to a defined congenital disorder, a disease or an injury. Functional disability means any restriction or deficiency which results from impaired ability to perform an activity in the defined manner. Impairment or social disability means a less privileged or less favourable situation of a given individual, resulting from an injury or functional disability. This type of disability limits the fulfilment of a role relating to age, sex and social and cultural factors [14]. Disabled people in general can be divided into two basic groups: legally and biologically disabled. Legal disability is confirmed by a decision establishing disability or a degree of disability, issued by an authority empowered for that purpose. Biological disability is a subjective feeling of limitations in fulfilment of basic activities for a given age, without a disability certificate. The Central Statistical Office (GUS) defines a legally disabled person as having a valid certificate of disability issued by an authority empowered for that purpose [15].
The results of the National Population and Housing Census carried out in Poland in 2011 show that the number of persons who declared limitations in the ability to perform regular basic activities for a given age and/or had a valid certificate of disability was 4 697 500, which constituted 12.2% of the population. Among people with disabilities there were 2 530 400 women. The group of men with disabilities in 2011 numbered 2 167 100 persons [15]. According to GUS data, in 23.4% of households there is at least one person who holds a certificate of disability issued by the Disability Assessment Board. In 2013 in Poland disability was mostly reported in households entitled to receive a disability pension (people with disabilities were present in 61.4% of this group of households). In the group of retirees' households this percentage was 25.4%. As many as 10.8% of respondents aged 16 years or more had a certificate of disability (3% severe, 5% moderate, 2.8% mild). Over 86% of people with severe disability were under constant medical or nursing supervision [16].

In the ranking of the 10 leading causes of health loss in Central Europe, musculoskeletal disorders were ranked fifth (after lower back pain, major depression, falls and neck pain) [17]. It should, however, be noted that lower back pain and neck pain are also symptoms of some musculoskeletal disorders. Rheumatic diseases influence not only the ability to work, but also the everyday functioning of a patient and his or her independence. The most common problems of everyday life reported by patients suffering from rheumatic diseases include getting dressed, getting up, turning the tap on, opening the cap of a bottle, getting on the bus and not having a free seat, inability to climb the stairs and open heavy doors, and inability to stand for a long period of time [18]. The problems with performing simple activities of everyday life have a significant impact on the loss of independence, hamper social relations and may even constitute a risk of poverty.

The authors of the "Fit for Work" study state that in Europe almost 50% of persons suffering from a disease of the musculoskeletal system who are currently unemployed were breadwinners. Nearly 60% of them received legal disability status [19]. The results of a study carried out in the years 2009-2010 in Poland, which included 1000 respondents suffering from RA (average age of 60 years) who were patients of 50 rheumatology out-patient clinics selected at random, showed that 53% of RA patients received legal disability status. Among them 35% (19% of all respondents) were considered to be severely disabled, while 64% (33% of all respondents) had a moderate degree of disability [20]. As many as 41% of respondents stated that they needed to adapt their living conditions to the limitations resulting from the disease, and only 5% out of 1000 respondents stated that they had already adjusted their environment. The most frequent adaptations included appropriate bathroom equipment (6%) and purchase of a dishwasher (5%) [20].

Rheumatic diseases can cause pain and fatigue that reduce work efficiency, which many employees do not want to reveal. Inflammatory diseases of the joints can also influence work safety, e.g. when a disease or associated pain affects the concentration or movement of a worker.
The authors of the "Fit for Work" report indicate that 30% of Polish employees who suffer from rheumatoid arthritis are reluctant to disclose their condition to co-workers and superiors because they fear discrimination and 22% of employees do not inform the employers about their health issues [21]. Rheumatic diseases can hamper everyday activities, which forces many patients to leave work. It is estimated that in Europe 40% of RA patients had to stop working due to the disease. It should be kept in mind that most persons were of working age when diagnosed with RA [22]. In Europe the percentage of sickness absence due to musculoskeletal disorders (MSD) constitutes nearly half of all absences due to disease or health condition [19]. In the case of patients suffering from ankylosing spondylitis the percentage of unemployed is three times higher than in the general population [22]. As indicated by the World Bank data for Poland, disability is the main reason for inactivity for men aged 45-59 and for women aged 45-54 [23]. The results of the study carried out in Poland by Tasiemski's team (2009) indicate that almost 62% of RA patients were employed before the occurrence of disease symptoms. After the diagnosis 45% of patients had to stop working. Nearly half of the RA patients in the study had to rely on disability pension and were at high risk of poverty [24]. Experts suggest that after five years from the onset of the disease only half of RA patients are working. After ten years from the onset the number of those remaining in employment decreases to 20% [21]. As many as 32% to 50% of patients stop working within 10 years from the onset of rheumatoid arthritis. At least 12.1% of all sickness absence results from rheumatic diseases. It is estimated that in the UK almost a quarter of RA patients quit work within five years from the diagnosis of the disease. This figure can increase to 40% if the effects of co-existing conditions such as depression and cardiac and respiratory complaints are taken into account [25]. Disability pensions due to musculoskeletal disorders The Social Insurance Institution (Zakład Ubezpieczeń Społecznych -ZUS) does not publish the information on the number of pensioners based on disease entities, which makes it impossible to carry out in-depth analysis. Detailed data broken down by ICD10 codes for diseases are available only for primary decisions. In 2014 ZUS issued a total of 6069 primary decisions for disability pensions due to diseases of the musculoskeletal system and connective tissue, out of which 88.1% were partial incapacity for work, 11.4% were total incapacity for work and 0.4% were inability to lead an independent life. ZUS data show an upward trend in the number of primary decisions issued for the purpose of disability pensions due to musculoskeletal and connective tissue disorders in the years 2012-2014 (Table I). In 2014 also 33 722 renewed decisions were issued, out of which 84.6% were partial incapacity for work, 12.6% were total incapacity for work and 2.7% were inability to lead an independent life [26]. According to ZUS data of 2014 the percentage of primary decisions issued for the purpose of disability pensions by medical commissions that established a degree of work incapacity due to diseases of the musculoskeletal system was 15.7% of the total primary decisions in the group of women and 12.6% in the group of men [26]. 
The results of the analysis of ZUS data on primary decisions issued for the purpose of disability pensions in 2014 indicate that most frequently the incapacity for work was caused by diseases related to the general state of health: neoplasms accounted for 23.5% of the total decisions and diseases of the circulatory system for 20.4%, while diseases of the musculoskeletal system were the cause of partial incapacity for work in 21.6% of individuals who received a disability pension for the first time. In 2014 the percentage of primary decisions issued for the purpose of disability pensions by ZUS physicians who established a degree of incapacity for work due to diseases of the musculoskeletal system was 12.6% for men and 15.6% for women [26]. Quality of life Rheumatic diseases, both inflammatory and non-inflammatory, significantly reduce quality of life in terms of functioning within society and mood [27][28][29]. The results of a meta-analysis carried out by Bujkiewicz's team (2014) confirm that the HAQ (Health Assessment Questionnaire; the indicator ranges from 0 to 3, with higher values indicating greater disability) score increases with disease duration, as does the DAS28 (Disease Activity Score), among patients diagnosed with rheumatoid arthritis [30]. The study showed that RA activity significantly affects the level of pain perceived by the patient. An increase in the pain assessment within the DAS28 algorithm by 12.5 ±1.2 points was observed with the progression of the disease [31]. The results of a study carried out in Poland (Prais, 2007) indicate that the quality of life of patients with rheumatoid arthritis depends on the radiological and functional stage of the disease and its duration. Quality of life of RA patients was evaluated with the following questionnaires: Medical Outcomes Study 36 − SF-36, Health Assessment Questionnaire − HAQ and Arthritis Impact Measurement Scale − AIMS. No significant correlations were found between the duration of the disease, the age of patients and the activity of the disease. Patients whose disease lasted longer and in whom the inflammatory processes were more active assessed their quality of life as poorer. It was found that the radiological and functional stage of disease significantly affected the assessment of quality of life in the examined group. No significant differences in the evaluation of quality of life between men and women were found [32]. The results of studies carried out in Spain and Australia indicate that the quality of life of women suffering from RA is often low [31,33]. The results of Spanish studies did not show a relation between the level of education of patients and their quality of life [31]. The HAQ score of women with RA is 1.4 and that of men with RA is 0.9 [33]. Women suffering from RA more often than men need the assistance of relatives or friends (women: 65%, men: 25%). The support is needed in the following situations: domestic responsibilities (70%), shopping (41%), handling of heavy objects (20%), transport (15%), opening jars (15%), and personal hygiene (11%) [33]. It should be noted that pain and reduced mobility that hamper everyday activities also occur in the case of osteoarthritis [34]. Patients with rheumatoid arthritis are often financially dependent on other persons (family, friends). Young persons speak about the negative influence of the disease (RA) on their functioning within society more often than persons aged 65 and over.
According to young people, RA negatively affects their social life and outdoor sports activities [33]. The results of Norwegian studies show that health-related quality of life (HRQoL) is significantly lower in the group of RA patients than in the general population. This affects all age groups, both women and men, with regard to the health aspect as well as social functioning, physical condition, mental health, and emotional capacities [35]. Kanecki's team (2013), which carried out a study on the assessment of health-related quality of life in a group of patients hospitalised due to RA exacerbation (n = 58, mean age of 62.5 years), presented slightly different results. A planned 2-year observation of patients in an outpatient setting (mean duration of the study was 22-23 months) was carried out. The HRQoL analysis was performed using the SF-36 questionnaire. Statistically significant reductions in HRQoL scores were observed in social functioning (p < 0.05), whereas emotional health (p < 0.05) and mental health (p < 0.05) scores were increased [36]. The study carried out in Poland by Tasiemski's team showed that only 38% of individuals with RA were satisfied with their life, with the disease having the greatest impact on professional issues and the financial situation [24]. The results of the Polish-German study carried out by Bugajska's team (2010) showed that 95% of Polish patients with RA (n = 300) felt excluded from social life, compared to 62% of German patients (n = 137) [37]. The authors of the "Fit for Work" report found that rheumatic diseases limit the possibilities of education and decrease the chances for promotion even among persons who are professionally active [21]. Summary Rheumatic diseases, especially inflammatory diseases, should not be viewed exclusively in the framework of health implications. The development of the disease is associated not only with an increased risk of organ failure, but above all with progressive disability and increasing mental health problems. This translates into reduced quality of life as well as financial difficulties (decreased income and increased disease-related costs). Early diagnosis and implementation of effective treatment are the necessary conditions for a patient to sustain activity, both professional and social, which is of crucial importance to decrease the negative outcomes of the disease. Staying in the labour market is also favourable from the perspective of society and the health insurance system. The authors declare no conflict of interest.
2018-04-03T00:51:01.467Z
2016-06-03T00:00:00.000
{ "year": 2016, "sha1": "b9c8c169bfcb0720952529e034b64a3bec56113b", "oa_license": "CCBYNCSA", "oa_url": "https://www.termedia.pl/Journal/-18/pdf-27670-10?filename=Social.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9c8c169bfcb0720952529e034b64a3bec56113b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14950443
pes2o/s2orc
v3-fos-license
Automated chest-radiography as a triage for Xpert testing in resource-constrained settings: a prospective study of diagnostic accuracy and costs Molecular tests hold great potential for tuberculosis (TB) diagnosis, but are costly, time consuming, and HIV-infected patients are often sputum scarce. Therefore, alternative approaches are needed. We evaluated automated digital chest radiography (ACR) as a rapid and cheap pre-screen test prior to Xpert MTB/RIF (Xpert). 388 suspected TB subjects underwent chest radiography, Xpert and sputum culture testing. Radiographs were analysed by computer software (CAD4TB) and specialist readers, and abnormality scores were allocated. A triage algorithm was simulated in which subjects with a score above a threshold underwent Xpert. We computed sensitivity, specificity, cost per screened subject (CSS), cost per notified TB case (CNTBC) and throughput for different diagnostic thresholds. 18.3% of subjects had culture positive TB. For Xpert alone, sensitivity was 78.9%, specificity 98.1%, CSS $13.09 and CNTBC $90.70. In a pre-screening setting where 40% of subjects would undergo Xpert, CSS decreased to $6.72 and CNTBC to $54.34, with eight TB cases missed and throughput increased from 45 to 113 patients/day. Specialists, on average, read 57% of radiographs as abnormal, reducing CSS ($8.95) and CNTBC ($64.84). ACR pre-screening could substantially reduce costs, and increase daily throughput with few TB cases missed. These data inform public health policy in resource-constrained settings. this is problematic as many HIV-infected patients are unable to produce sputum [14][15][16] . Obtaining sputum from high numbers of subjects is time consuming and logistically challenging, and the majority of TB prevalence surveys to date pre-screen participants with symptoms and chest radiography (WHO Strategy 3) 13,17 . Thirdly, the Xpert test involves two hours of processing time, restricting daily throughput. Thus, high cost and long processing intervals limit the value and widespread uptake of Xpert as a point-of-care test. Other widely used tools for the diagnosis of active TB, which are older but reliant, are sputum smear microscopy and culture. The former is relatively cheap ($2.10 -$4.60) 6,9,11 but has limited performance 18 . The latter is rather expensive ($5 -$15) 6,9 and has a processing time of multiple weeks. Thus, both tests are suboptimal for the detection of TB in point-of-care settings. Alternative algorithms or tests are urgently needed. Pre-screening tests have the potential to improve throughput if they are quicker to perform than Xpert, and should also reduce cost 19 . In this study, we investigated a novel pre-screening test, namely automated chest radiography (ACR), which consists of digital chest radiography assisted by automated interpretation by computer software. This test is rapid: it requires only correct positioning of the patient to acquire a radiographic image. The image is subsequently processed within 1 minute by a standard computer software program. The advantages of automatically analysing radiographs with computer software are the need of only a radiographer, and no on-site clinical officer. The latter might not always be available and is expensive. ACR is therefore not labour intensive and is faster and potentially cheaper than conventional CXR reading. A recent study by Maduskar et al. 20 showed that sensitivity and specificity of computerized reading was not significantly different from reading by clinical officers. 
In this study, we evaluate ACR as a pre-screening tool prior to molecular (Xpert) testing. We propose a diagnostic algorithm that is tested on data from a passive case-finding study, but could also be employed in other settings. The performance of this diagnostic algorithm is reported in terms of sensitivity, specificity, throughput and costs, and is compared with a scenario where Xpert is used for all patients and an alternate scenario in which the radiographs are read by humans. Methods Data. For this study, data from the TB-NEAT collaborative study was used 21 . All patients enrolled in this study were self-referred TB suspects presenting at a busy urban clinic in Cape Town, South Africa. All subjects provided sputum, underwent digital chest radiography on an Odelca DR unit (Delft Imaging Systems, Veenendaal, The Netherlands) and were offered voluntary testing and counselling for HIV. In the parent study, conventional radiography was used and liquid culture, and Xpert testing was performed by a trained laboratory technician in a centralized reference laboratory. Full study details have been reported elsewhere 21 . The study was carried out according to the protocol approved by the University of Cape Town Faculty of Health Sciences Research Ethics Committee (#404/2010). All patients provided written informed consent for study participation. Of the 419 recruited patients, 6 patients had missing radiographs, 22 patients had no conclusive Xpert or culture results and the software was unable to ascertain a diagnosis for 3 cases. Thus, a total of 388 patients had all information available. Culture positivity for Mycobacterium tuberculosis served as the reference standard. All cases were scored, blinded to any clinical information, on an integer scale from 0 to 100 by two CRRS certified "B" readers 22,23 and an experienced radiologist (reader 3) for the presence of abnormalities consistent with active TB, with a score >50 considered suspicious for active TB. Diagnostic algorithm. All chest X-rays (CXRs) were retrospectively processed by the CAD4TB v3.07 (Diagnostic Image Analysis Group, Radboud university medical center, Nijmegen, The Netherlands) computer software. This software produces a continuous image abnormality score between 0 (normal) and 100 (highly abnormal). An example can be found in Fig. 1. Using various thresholds (T) on this score, we simulated the effect of using the software as a pre-screening test for molecular testing. In this simulated diagnostic algorithm, subjects with a score smaller than or equal to T were regarded as TB-negative, and these subjects would not undergo additional testing; otherwise, Xpert test results were used. The algorithm is schematically depicted in Fig. 2. Hypothetical point-of-care testing unit. For this study, we assumed a hypothetical setting where a mobile TB unit is used for passive case finding and point-of-care testing. This hypothetical unit would have one digital radiography system and three 4-cartridge GeneXpert IV machines, resulting in a capacity of 300 ACRs per day, and an Xpert testing capacity of 45 tests per day. Subjects selected for Xpert testing by the ACR pre-screening would be referred directly for Xpert testing in the same unit. Cost analysis. Cost analysis was limited to the point-of-care-associated costs of diagnosis. We assumed an Xpert price of $13.06, based on the FIND subsidized price for Xpert cartridges 24 . The price of ACR is determined largely by the cost of the radiography unit and the throughput. 
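To make the simulated triage rule and the performance measures above concrete (the cost assumptions continue below), here is a minimal sketch in Python. It is not the authors' code: the array names, the use of NumPy and the boolean encoding of Xpert and culture results are assumptions made purely for illustration.

```python
# Minimal sketch (not the study's code) of the simulated ACR-then-Xpert triage:
# subjects with a CAD4TB score <= T are called TB-negative without further
# testing; for the remainder the Xpert result is used. Culture is the reference.
import numpy as np

def triage_performance(cad_score, xpert_pos, culture_pos, threshold):
    """Sensitivity/specificity of the pre-screening algorithm at one threshold T."""
    cad_score = np.asarray(cad_score, dtype=float)
    xpert_pos = np.asarray(xpert_pos, dtype=bool)
    culture_pos = np.asarray(culture_pos, dtype=bool)

    referred = cad_score > threshold      # only these subjects undergo Xpert
    algo_pos = referred & xpert_pos       # score <= T is negative by definition

    tp = np.sum(algo_pos & culture_pos)
    fn = np.sum(~algo_pos & culture_pos)
    tn = np.sum(~algo_pos & ~culture_pos)
    fp = np.sum(algo_pos & ~culture_pos)

    return {
        "fraction_referred": float(referred.mean()),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

Sweeping the threshold over the CAD4TB score range reproduces the kind of trade-off reported in Table 2: raising T lowers the fraction referred (and hence the cost) at the price of lower sensitivity, which by construction can never exceed that of Xpert alone.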
We assume a cost of $1.46 for ACR 9,25 . Furthermore, no increased costs were assumed for manual CXR reading. Detailed cost calculations for Xpert and ACR can be found in Table S1 and S2 of the supplementary material. Performance analysis. Diagnostic performance was assessed by computing sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) at each threshold. With the assumed costs and capacity, the cost per screened subject (CSS), the cost per notified TB case (CNTBC) as well as the throughput in cases/day was calculated. The performance of ACR was compared to CXR pre-screening with specialist readers. In this scenario, the ACR score was replaced by the composite score of specialist readings, and different thresholds (R) were simulated and performance was computed. We also evaluated scenarios where all HIV-infected patients would undergo Xpert testing, without receiving pre-screening. Sensitivity changes were tested for significance using the McNemar χ 2 test (IBM SPSS Statistics 20), considering p < 0.05 significant. Results The dataset contained 388 cases, of which 128 (33.0%) subjects were HIV-positive, 71 (18.3%) subjects were culture positive, 62 (16.0%) subjects were Xpert positive and 56 (14.4%) subjects were both culture and Xpert positive. These results are listed in Table 1. The performance, in terms of area under ROC curve, of the CAD4TB software and standalone specialist readers is comparable and not significantly different, and is shown in the supplementary Figure S1. Table 2 shows the performance of the ACR system at different thresholds. Thresholds were chosen based on the percentage selected for downstream Xpert testing shown in the second column. The first row matches the scenario where ACR is omitted and every subject receives an Xpert test. The Xpert baseline performance in terms of sensitivity and specificity was 78.9% and 98.1%, respectively, resulting in a CSS and CNTBC of $13.09, and $90.70, respectively. Significant changes in sensitivity are marked with an asterisk (*). The performance of the diagnostic algorithm combined with human reading is shown in Table 3. Sensitivity and specificity. The maximum achievable sensitivity with this algorithm is that of the Xpert test. With increasing ACR score thresholds, more subjects (including TB-positives) were excluded for Xpert testing, which by definition decreases sensitivity and increases specificity. The changes in sensitivity and specificity were moderate for scenarios up to T = 85, where 40% of subjects undergo Xpert Throughput. Given the Xpert and ACR capacities, the daily throughput was calculated for each threshold T. The throughput started at 45 for the Xpert-only scenario, and increased to 150 for pre-screening with T = 94. The maximum capacity of the digital radiography unit was not reached due to the limited Xpert capacity. Performance analysis stratified by HIV status. The analysis stratified by HIV status, is shown in Tables 4 and 5. Comparing the tables, it can be seen that the performance of Xpert was significantly better in HIV-uninfected patients than in HIV-infected patients. The HIV-uninfected group showed a sensitivity of 93.9% and specificity of 99.1% for Xpert alone, and changed to 87.9% and 100%, respectively, with higher thresholds (T = 85). The CSS and CNTBC decreased 49% and 45%, respectively. The sensitivity and specificity of Xpert for HIV-infected subjects was 65.8% and 95.6%, respectively. 
For higher thresholds (T = 85), sensitivity dropped to 50.0% and specificity increased to 98.9% (see Table 5). The CSS and CNTBC decreased by 49% and 33%, respectively. Table 6 shows the results for a scenario where all HIV-infected and HIV-unknown subjects get Xpert testing and HIV-uninfected patients get ACR pre-screening. For thresholds up to 50% Xpert (T = 95), the performance is better compared to a scenario where all patients get pre-screening. For example, at T = 84 (60% Xpert), the sensitivity decreased non-significantly to 76.1% and specificity increased to 98.7%, while the CSS decreased to $8.81 (33% decrease), CNTBC to $63.30 (30% decrease) and throughput increased to 75. Specialist reading. The results of specialist reading are shown in Table 3. Based on R = 50, the specialist reader scores associated with suspicion for active TB, three readers committed 63%, 56% and 53%, respectively, of the cases for Xpert testing. The sensitivity for reader 1 was slightly higher than for reader 2 and 3: 76.1% versus 74.6%, but came with a slightly decreased specificity: 98.4% versus 99.1%. The PPV for readers 2 and 3 is better than for reader 1: 94.6% versus 91.5%. The NPV was similar for all readers, with 94.8% and 94.6%. As reader 2 and 3 sent fewer patients for Xpert testing, the CSS are slightly lower, $8.75 and $8.43 versus $9.66, respectively, as are the CNTBC: $64.04 and $61.07 versus $69.40, respectively. Comparing these numbers with the ACR threshold T = 60 (see Table 2), which also processed roughly 60% of the patients for Xpert, the specialist readers perform marginally better. As the radiographs were scored on a scale from 0 to 100, higher thresholds could also be used. For the scenario where roughly 30% of the subjects receive Xpert testing, the automated algorithm outperforms reader 1, shows similar results for reader 3; however it is inferior to reader 2 (see Table 2). Discussion Following WHO recommendations for molecular based testing in 2010, the procurement of the GeneXpert MTB/RIF systems in resource-constrained countries is high and is still expanding. However, available funds are often limited, and the WHO has also highlighted the need for cost-saving diagnostic pathways. Our results have shown that ACR pre-screening may offer a solution: being cheaper, faster, with only a moderate decrease in sensitivity, and the benefit of increased throughput, as compared with Xpert testing. Using the ACR pre-screening algorithm at T = 85, only 40% of the patients would be sent for a downstream Xpert test, and the CSS and CNTBC were less than 51% and 60% of the original cost, respectively. In this study, the benefit of ACR came at the cost of eight additionally missed TB cases. This can be attributed to the limitations of radiography screening itself in addition to the use of automated pre-screening. Of these eight cases, six were HIV-infected, and it has been previously reported that chest radiography is less sensitive in immunocompromised patients 26,27 . This effect was also seen in the results by HIV stratification in Tables 4 and 5. The performance of the diagnostic algorithm among HIV-uninfected patients is considerably better than in HIV-infected patients, and consideration should be applied for different thresholds for both groups. For example pre-screening might be considered only for HIV-uninfected patients ( Table 6). 
This would still reduce CSS and CNTBC with more than 30% (T = 84), with only a marginally non-significant reduction in sensitivity (two additionally missed TB cases). Comparing the automated system to specialist readers, the performance of the algorithm is slightly inferior with medium-range thresholds, but comparable for high thresholds, although the performance among readers differed. However, although not modelled in this study, specialist reading is more time consuming, requires training and also increases costs. The proposed pre-screening diagnostic algorithm is fully automated and requires only an on-site radiographer. The availability of automated analysis software obviates the need for a radiologist or clinical officer and limits the throughput to that of the radiography unit only. In our simulations, we kept the cost of ACR and manual reading constant at $1.46, and this amount did not change with altered throughput nor did it affect the main outcomes of the study. Besides cost-reduction, pre-screening can substantially increase throughput, although it would remain limited by the Xpert machine's capacity. This may be particularly useful in screening settings, as sputum collection would not be needed for many potential TB subjects. Additionally, given that the average test duration is reduced with most patients receiving a negative radiology test result within a few minutes, this would obviate a 2-hour wait for Xpert results. We predict that, in a typical screening setting, pre-screening with ACR could increase patient throughput by up to 250%. Another advantage of automated reading is that different thresholds with objective and reproducible results could be chosen. Thus, according to the setting used, a threshold could be chosen to obtain the desired sensitivity, specificity or throughput. This makes ACR highly valuable in resource-constrained settings, and this proposed diagnostic pathway can save both cost and time in a point-of-care setting. We hypothesize that the algorithm's value is even higher in active case-finding scenarios and sputum-scarce cohorts, but future studies are needed. Compared to a recent study by Muyoyeta et al. 28 , which uses a previous version of the CAD4TB software on presumptive TB patients with Xpert as reference, the current software showed improved performance and supported their findings that ACR has a potential role in TB screening. Additionally, the current software showed better performance values than reported in a previous study by Maduskar et al. 20 . The proposed diagnostic algorithm does not account for patients with high ACR scores but negative Xpert results: it is likely that these patients either have a false negative Xpert test or may have disease other than TB. Furthermore, current costs analysis was limited to the costs of diagnosis only and does not take into account the cost of treatment and misdiagnosed patients. This group of patients should be flagged to receive further attention based on their ACR scores. In general, basic automated screening for abnormalities could prompt referral mechanisms for higher levels of care. In conclusion, ACR pre-screening before sputum Xpert molecular testing is a promising new advance in TB diagnostics. The ACR algorithm can significantly reduce cost and substantially increase throughput, while maintaining high sensitivity. This makes it a potentially highly valuable diagnostic tool in resource-constrained countries.
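As a rough companion to the cost and throughput figures discussed above, the following sketch encodes the unit costs and daily capacities quoted in the text. The simple linear cost model (one ACR for every subject, one Xpert test only for referred subjects) is an assumption of this sketch; the detailed cost calculations are in the paper's supplementary tables.

```python
# Hedged sketch of the point-of-care cost/throughput bookkeeping. Unit costs and
# capacities are the figures quoted in the text; the linear cost model itself
# (one ACR per subject, one Xpert test per referred subject) is an assumption.
XPERT_COST_USD = 13.06   # FIND subsidised cartridge price
ACR_COST_USD = 1.46      # automated chest radiograph
XPERT_PER_DAY = 45       # three 4-cartridge GeneXpert IV machines
ACR_PER_DAY = 300        # one digital radiography unit

def cost_per_screened_subject(fraction_referred, use_acr=True):
    base = ACR_COST_USD if use_acr else 0.0
    return base + fraction_referred * XPERT_COST_USD

def cost_per_notified_case(css, n_screened, n_tb_cases_detected):
    return css * n_screened / n_tb_cases_detected

def daily_throughput(fraction_referred, use_acr=True):
    if not use_acr:
        return XPERT_PER_DAY
    return min(ACR_PER_DAY, int(XPERT_PER_DAY / fraction_referred))

# Referring 40% of subjects: CSS ~ 1.46 + 0.4 * 13.06 ~ 6.7 USD and throughput
# min(300, 45 / 0.4) ~ 112 patients/day, close to the values reported above.
print(cost_per_screened_subject(0.4), daily_throughput(0.4))
```

With fraction_referred = 1.0 and use_acr=False the same bookkeeping reduces to the Xpert-only scenario (about 13 USD per subject and 45 tests per day).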
2016-05-12T22:15:10.714Z
2015-07-27T00:00:00.000
{ "year": 2015, "sha1": "1e265722259ad5880226c71ac082a6c8d6f11d84", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep12215.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b55ad6b57666dea3b4b645c6af28ece70c6a01aa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221949255
pes2o/s2orc
v3-fos-license
The special point on the hybrid star mass--radius diagram and its multi--messenger implications We show the existence and investigate the location of the special point (SP) in which hybrid neutron star mass-radius (M-R) curves have to cross each other when they belong to a class of hybrid equation of state (EoS) constructed with generic constant--speed--of--sound (CSS) quark matter models for which the onset deconfinement is varied. We demonstrate that for a three-parameter CSS model the position of the SP in the M-R diagram is largely independent of the choice of the hadronic EoS, but in dependence on the stiffness of the quark matter EoS it spans a region that we identify. We find that the difference between the maximum mass and the SP mass depends on the mass at the onset of deconfinement so that an upper limit of $0.19~M_\odot$ for this difference is obtained from which a lower limit on the radius of hybrid stars is deduced. Together with a lower limit on the radius of hadronic stars, derived from a class of reasonably soft hadronic EoS including hyperons, we identify a region in the M-R diagram which can be occupied only by hybrid stars. Accordingly, we suggest that a NICER radius measurement on the massive pulsar PSR J0740+6620 in the range of 8.6-11.9 km would indicate that this pulsar is a hybrid neutron star with deconfined quark matter in the inner core. Introduction A large and comprehensive body of work exists on the equation of state of nuclear matter up to and exceeding the nuclear saturation density that has recently been reviewed in [1]. Models, such as [2,3,4] are well fitted to existing data on nucleon-nucleon interactions, nuclear structure and nuclear matter saturation properties and can be readily used to derive the properties of neutron stars with a hadronic matter core. Such models are, however, blind to the quark substructure of strongly interacting matter, which in itself is a subject of many studies (cf. [5,6,7,8] and references therein). The existing state-of-the-art field-theoretical description of strongly interacting matter is the theory of Quantum Chromodynamics (QCD). Ideally, one would want a description of strongly interacting matter in astrophysical systems derived directly from this theory. However, the first-principle calculations on the basis of the QCD Lagrangian have to exploit Monte-Carlo simulation techniques that up to now can be applied only in vacuum or at finite temperatures and densities corresponding to baryon chemical potentials not exceeding twice the temperature (cf. [9,10,11,12,13]). These first-principle calculations predict a smooth crossover from hadronic matter to deconfined quark-gluon plasma (QGP) at a temperature of 156.5 ± 1.5 MeV [13]. For applications to astrophysical systems, such as neutron stars, the temperature T is well below the Fermi temperatures of nucleons and leptons, so that for the equation of state calculations can be performed assuming T = 0. With densities in excess of the nuclear saturation density, and due to the asymptotic freedom of QCD it is possible that a transition to QGP will also occur under such conditions, likely in the form of a first order phase transition. The possibility of such a feature of neutron stars is widely discussed in the literature (cf. [14] and reference therein). 
Specific properties of the QGP present in hybrid neutron stars and the phase transition itself could provide a supernova explosion mechanism triggered by the proto-neutron star formation arising from a strong first order phase transition within the collapsing star's core (cf. [15,16,17]). Furthermore, the current efforts in providing so-called multi-messenger measurements of neutron star properties (cf. [18,19,20,21,22,23]) can potentially give us a glimpse into the QCD phase diagram in an area inaccessible to terrestrial heavy-ion collision experiments. For that purpose, a theoretical description of strongly interacting matter with a phase transition between bound hadronic states and QGP is desperately needed. The state of the art in this regard are classes of effective models, such as the tdBag [24] and the CSS model [25,26], coupled to a separate hadronic equation of state via a Maxwell construction ensuring a first order phase transition. Such multiphase models provide predictions on the M-R relations and the central density of stable neutron stars. But due to the presently large uncertainties in the simultaneous measurement of masses and radii, e.g., by the NICER experiment [27,28], one cannot yet select a most favorable one among them (cf. [29]). A recent study [30] has found that EoS from the class of CSS models share the property that all M-R curves, regardless of the hybrid star onset density, must cross a small region in the M-R diagram, the SP. We would like to extend this study to a wider range of EoS with additional microscopic features, such as vector repulsion, and investigate the possibility that the existence of a SP is a universal property of hybrid neutron star models. This could potentially provide a tool for the interpretation of current multi-messenger observations as signals for the existence of a hybrid neutron star branch. The manuscript is organized as follows. In Section 2 we present the EoS of the models used in this study. In Section 3 we investigate the existence and the properties of the SP in each class of models. In Section 4 we summarize our findings and present our conclusions. Equations of state For the hadronic (confined quark) matter equations of state we choose from the class of density-dependent relativistic mean field models based on the "DD2" parametrization [4] with excluded volume effects according to [31]. They describe the properties of nuclear matter at low densities up to and slightly above nuclear saturation. For higher densities we have chosen three slightly different models for an approximate description of the thermodynamics of deconfined quark matter: 1. the CSS model [26], which is similar to the one used in [16]; 2. a bag model inspired by [24], but modified by a pressure and energy-density shift corresponding to the formation of a vector condensate related to a repulsive vector-channel interaction on the quark level; and 3. the novel vBag model [32,33,34,35,36], which combines the previous bag model with a non-trivial correlation between the hadronic and quark phases in order to impose a simultaneous onset of chiral symmetry restoration and deconfinement. The equation of state of the CSS model [26] postulates a relation between the thermodynamic pressure p and the energy density ε of the form p(ε) = c_s^2 (ε - ε_0), where c_s^2 is the speed of sound squared and ε_0 is a constant energy density shift. The pressure, described as a function of the baryon chemical potential μ_B, is p(μ_B) = A μ_B^(1+β) - B, which can be inverted to give μ_B(p) = [(p + B)/A]^(1/(1+β)), where A, B and β are constant model parameters.
The derivative of the pressure with respect to the baryon chemical potential is the baryon number density, n_B(μ_B) = ∂p/∂μ_B = A (1+β) μ_B^β. Using the relation ε = μ_B n_B - p we obtain the energy density as a function of the baryon chemical potential, ε(μ_B) = A β μ_B^(1+β) + B. The relation between energy density and pressure then takes the form ε = β p + (1+β) B, which shows us that c_s^2 = 1/β and ε_0 = (1+β) B. The parameter A can be varied without changing the relation of pressure and energy density, but it should be chosen such that the jump in baryon density at the phase transition is not negative. Both of the bag models [24,33] used here start from the thermodynamics of a gas of non-interacting fermions [37] with the dispersion relation E(p) = sqrt(p^2 + m^2), and add a constant pressure shift, the bag constant B. A well known feature of such models is their inability to support a high mass neutron star consistent with observations ([18,19]). In order to remedy the problem, repulsive vector interactions are postulated to modify the quark equation of state beyond the free gas approximation. In this study, we follow a construction based on [32] which ensures that the speed of sound does not exceed the causality limit c_s^2 = 1. The additional model parameter K_v is related to the vector quark current interaction and can be related to the gluon mass scale (cf. [33]). The above equation of state fully defines the model, which later will be denoted as the ordinary bag model. The model vBag, which also belongs to the class of thermodynamic bag models, attempts to phenomenologically account for both quark confinement and high density chiral symmetry restoration. It does so by redefining the bag constant introduced above as the flavor-dependent thermodynamic shift associated with the vacuum pressure contribution of the chiral condensate. It can therefore be related to the dressed quark mass in vacuum [38], where M_f is the full dressed quark mass and m_f is the current quark mass (cf. [8] and references therein). The model additionally assumes a simultaneous onset of chiral symmetry restoration and deconfinement, which is imposed by introducing an additional pressure shift B_dc equal to the pressure of the hadronic phase at the critical chiral chemical potential (i.e. μ_{f,χ} such that the summed quark pressure vanishes, Σ_f p_f(μ_{f,χ}) = 0). This by construction ensures that the phase transition occurs at μ_{f,χ} and that the thermodynamic description of chiral physics is consistent across both phases. The specifics of this alternative bag model lie in its non-trivial connection to the hadronic EoS in the form of the B_dc parameter. In all previous two-phase models, the phases were independent, apart from the Maxwell transition interface. The vBag model offers an alternative, with an explicit hadronic impact on the quark equation of state. Since the aim of this study is to investigate a shared property of the hybrid neutron star branches derived using a two-phase model, the impact of such a non-trivial connection warrants investigation. The hybrid star EoS special point The existence of a SP in the M-R diagram of hybrid neutron stars, observed in [30], is a feature visible in many other studies (for example, Fig. 2 of [39], Fig. 14 of [40] or Fig. 2 of [29]), including models using non-Maxwell phase transitions, see Fig. 8 of [29]. Indeed, for all the models described in the previous section, such a point does exist (examples can be seen in Fig. 1). We need, however, to make a short remark on the use of the term "point" in reference to this feature. As illustrated in the original study (Fig.
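As a quick numerical cross-check of the CSS relations written above, the following short Python sketch evaluates p, n_B and ε on a grid of chemical potentials and verifies that ε = βp + (1+β)B, i.e. c_s^2 = 1/β. The parameter values are arbitrary illustrative numbers, not the ones used in this work.

```python
# Numerical cross-check of the CSS relations above (illustrative parameters only,
# not the values used in this work; rough units MeV for mu_B, MeV/fm^3 for p, eps).
import numpy as np

A, B, beta = 1.0e-10, 60.0, 3.0

def pressure(mu_b):
    return A * mu_b**(1.0 + beta) - B

def baryon_density(mu_b):
    # n_B = dp/dmu_B
    return A * (1.0 + beta) * mu_b**beta

def energy_density(mu_b):
    # eps = mu_B * n_B - p
    return mu_b * baryon_density(mu_b) - pressure(mu_b)

mu = np.linspace(1000.0, 1600.0, 7)
p, eps = pressure(mu), energy_density(mu)

# eps = beta*p + (1 + beta)*B  =>  c_s^2 = dp/deps = 1/beta and eps_0 = (1 + beta)*B
assert np.allclose(eps, beta * p + (1.0 + beta) * B)
print(np.round(np.gradient(p, eps), 3))   # ~0.333 everywhere, i.e. 1/beta
```

The same check holds for any choice of (A, B, β): varying A rescales the dependence on μ_B, and hence the baryon density jump at the transition, without altering the relation between p and ε, exactly as stated above.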
5 of [30]), there is no strict intersection between all hybrid M-R curves, but rather a very narrow region to which all of them converge. For that reason, the special point masses and radii given in this study should be considered approximate, along with any implications related to the multi-messenger constraints. Despite this shortcoming, we would argue that this feature does provide a useful mechanism for evaluating the agreement of a chosen hybrid star model with observations. An example of the robustness of the special point (SP) can be seen in the right panels of Fig. 1, where the position of the SP is compared for different hadronic EoS coupled to the respective quark matter models. Of particular note are the panels depicting the difference in the vBag model (Fig. 1c), which, due to its non-trivial connection between quark and hadron EoS, exemplifies the worst-case scenario of the impact of the hadronic EoS on the SP position. We find, even in this case, that the change in mass and radius is less than 10%. The SP does not always belong to the stable part of the M-R relation (as evident from Fig. 1). It can be on the unstable branch when the QM phase transition occurs at rather high masses and with low latent heat. We would conjecture that, for the more realistic EoS with a phase transition, the SP must appear on the stable part of the NS M-R plot. Studies such as [42] suggest that nuclear matter stiffens at high density as a result of hadron substructure effects. This naturally favors a phase transition with a relatively large latent heat and would result in a hybrid EoS with the SP on the stable hybrid star branch. Examples of this are evident in panels (a) and (c) of Fig. 1 and in the hybrid star models using advanced quark matter approaches in the literature, e.g. in [29,39,40]. We would now like to focus on the CSS model, which was used in the original study by Yudin et al. [30]. It is clearly established that the SP is insensitive to a change in the quark matter onset density. The same is not true for a change of the speed of sound or the pressure slope (the parameter A in the p(μ_B) relation above). Both of these quantities have a large impact on the shape of the hybrid star M-R relation. Fig. 2 illustrates the possible positions of the SP resulting from the change of these two parameters. The SP can take values well above the 2 M_⊙ constraint of [19], and the maximum mass of a hybrid star can be in excess of this value. In fact, as evident from this figure, for a hybrid branch with the so-called twin feature (i.e. with stable hybrid stars of lower masses than the heaviest purely hadronic neutron stars) it is a requirement. The SP, as a phenomenological tool, has the most utility in discussing the twin phenomenon. Since it must correspond to a stable hybrid star, it must obey all multi-messenger observational constraints. This means the SP area of Fig. 2 must be limited to areas not excluded by the signals from GW170817. Additionally, the hybrid branch must reach the 2 M_⊙ band. The relation between the maximal mass gain above the SP mass (M_max - M_SP) and the hybrid star onset mass M_onset for deconfinement is shown in Fig. 3.
(Figure caption: Naming convention consistent with [31]. The special points are marked by black circles. The gray and blue bands correspond to 68.3% and 95.4% credibility intervals of the Shapiro-delay measurement of the mass of the pulsar PSR J0740+6620 [19]. Red bands are regions excluded by the analysis of the signal from the neutron star merger event GW170817 according to Bauswein et al. [41], Annala et al. [43] and Rezzolla et al. [44].)
Taken together with the present observational lower limit on the maximum mass of pulsars (the lower limit of the 95.4% credibility interval) of 2.14 - 0.18 M_⊙ for PSR J0740+6620 [19], this relation tells us that the maximum mass cannot be larger than M_SP + 0.19 M_⊙ or, alternatively, that any parametrization of the EoS resulting in M_SP below 1.77 M_⊙ will be in disagreement with observations. This is only true for a very early onset of deconfinement at the lower limit of the mass of neutron stars. With additional constraints on the quark matter onset density, this mass constraint becomes more strict. An example of the modified SP region is illustrated in Fig. 4 by the green hatched region. The relation (25) allows us to predict a limiting M-R relation for hybrid stars, which is shown as the dashed line in Fig. 4.
Figure 4: The SP range allowed (green), and excluded (yellow) by the 2 M_⊙ constraint. The black dashed line shows the upper limit of masses that can be reached for a given radius of the hybrid star configurations. The red, blue and cyan lines show the M-R relation for the hadronic EoS that are obtained with realistic baryon-baryon two-body and three-body interactions augmented by a repulsive short-range multi-pomeron (MP) exchange potential according to [46,47]. The solid lines represent EoS parametrizations with hyperons, while dash-dotted lines represent EoS with only non-strange baryons. The black line is obtained for the APR EoS [2].
As a possible implication of the SP region for signals from observations, one can suggest that the region between the lower limit on neutron star (viz. hybrid star) radii and the lower limit on radii of hadronic neutron stars should be populated by hybrid neutron stars with deconfined quark matter in their cores. Should the APR EoS be considered as the softest realistic hadronic neutron star EoS? We suggest that the predictions of the APR EoS for the behavior of nuclear matter at supranuclear densities should not be trusted, and therefore the corridor for identifying hybrid stars in the M-R diagram becomes wider, see the curves labelled with "MP" in Fig. 4. This argument is based on a work by Yamamoto et al. [46], who have shown that realistic nucleon-nucleon (NN) potentials like the Argonne V18 (AV18) or the extended soft core (ESC) interactions fail to reproduce the large-angle differential cross section for 16O-16O scattering at E_Lab = 70 AMeV. But with an additional repulsive, density-dependent multi-pomeron exchange interaction, the data are described very well. This addition to the NN interaction results in very good saturation properties and a stiffening of the nuclear EoS at supernuclear densities, which makes it possible to solve the hyperon puzzle [47] and leads to a shift of the M-R curves of the corresponding solutions of the TOV equations to larger radii, see the curves labelled MPa, MPa+ and MPb in Fig. 4. We choose the curve corresponding to the MPa+ parametrization, as it is the only multi-pomeron EoS in agreement with the PSR J0740+6620 mass measurement despite a high density onset of hyperons (thin red line in Fig. 4), and consider it as the lower limit of hadronic NS radii. All other potential-model based hypernuclear matter EoS suffer from the hyperon puzzle and cannot be considered. We therefore postulate that a NICER measurement of the PSR J0740+6620 radius between 8.6-11.9 km would be a strong indication of a quark matter core.
In deriving this range we are deliberately ignoring a low radius constraint proposed in [45]. The reason for this is the use of exclusively hadronic EoS and the derivation of the radius limit from an observed relation between the threshold mass for prompt collapse of a binary NS collision and the maximum NS compactness. Both of these quantities are sensitive to the features of the EoS and can change significantly due to a phase transition to deconfined quark matter (cf. [23]). For that reason we choose the more conservative limit proposed in [41]. It is not free from this flaw, but it represents a systematic analysis of a wide range of different hadronic EoS, therefore we assume it holds in general. As evident in Fig. 2, it affects the hybrid branch by imposing a lower limit on the quark matter onset. A systematic analysis of this effect in relation to the SP is beyond the scope of this paper and merits further study. Conclusions We have shown the existence and investigated the properties of the special point, a characteristic feature shared by a wide range of hybrid neutron star equations of state from the class of two-phase models that allow a systematic variation of a quark matter parameter determining the onset of deconfinement. We have demonstrated that the position of this special point depends only marginally on the underlying hadronic equation of state. We have drawn conclusions on the allowed region for the location of the special point in the M-R diagram of a hybrid neutron star, derived using the CSS model, which would be in agreement with multi-messenger observational constraints. We define a corridor in the M-R plane that can be used to determine whether subsequent mass and radius measurements characterize the target as a hybrid neutron star. As an example, we suggest that PSR J0740+6620, with a 2σ lower limit on its mass of 1.96 M_⊙, would be a candidate for a hybrid star with a quark matter core if the measurement of its radius by, e.g., the NICER experiment were to fall in the range of 8.6-11.9 km. Such a conclusion crucially depends on our knowledge of the lower limit for the radius of purely hadronic neutron star models in the M-R diagram, but its importance merits further studies of the special point as a general feature of hybrid neutron star models.
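A toy encoding of the proposed corridor test is sketched below. The two radius limits are simply the numbers quoted in the text (8.6 km from the limiting hybrid-star M-R relation and 11.9 km from the MPa+ hadronic EoS) and therefore inherit all of the assumptions discussed above; the function name and its structure are illustrative only.

```python
# Sketch of the proposed "hybrid star corridor" test; the two radius limits are
# simply the values quoted in the text and inherit the assumptions discussed above.
HYBRID_RADIUS_MIN_KM = 8.6      # lower limit for hybrid-star configurations
HADRONIC_RADIUS_MIN_KM = 11.9   # lower limit for purely hadronic configurations (MPa+)

def interpret_radius(radius_km):
    """Interpret a radius measurement for a roughly 2 solar-mass pulsar."""
    if radius_km < HYBRID_RADIUS_MIN_KM:
        return "below the hybrid-star limit: incompatible with this EoS class"
    if radius_km < HADRONIC_RADIUS_MIN_KM:
        return "inside the corridor: indication of a deconfined quark-matter core"
    return "consistent with a purely hadronic interior (a hybrid star is not excluded)"

print(interpret_radius(10.5))
```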
2020-09-28T01:01:15.296Z
2020-09-25T00:00:00.000
{ "year": 2020, "sha1": "2dfab1227668db06361547e4e38702023e339470", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjst/e2020-000235-5.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "0fc0661781d96c02db3f00d4026fc1a87c842b29", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
25750311
pes2o/s2orc
v3-fos-license
On-admission blood pressure and pulse rate in trauma patients and their correlation with mortality: Cushing's phenomenon revisited Background: Injury-induced alteration in initial physiological responses such as hypertension and heart rate (HR) has a significant effect on mortality. Research on such associations from our country-India is limited. The present study investigates the injury-induced early blood pressure (BP) and HR changes and their association with mortality. Materials and Methods: The data were selected from Towards Improved Trauma Care Outcomes collected from October 1, 2013, to July 24, 2014. Patients above 18 years of age with documented systolic BP (SBP) and HR were selected. BP was categorized into hypotension (SBP <90 mmHg), hypertension (SBP >140 mmHg), and normal (SBP 90–140 mmHg). HR was categorized into bradycardia (HR <60 beats/min [bpm]), tachycardia (HR >100 bpm), and normal (HR 60–100 bpm). These categories were compared with mortality. Results: A total of 10,200 patients were considered for the study. Mortality rate was 24%. Mortality among females was more than males. Patients with normal BP and HR had 20% of mortality. Mortality in patients with abnormal BP and HR findings was 36%. Mortality was higher among hypotension-bradycardia patients (80%) followed by hypertension-bradycardia patients (58%) and tachycardia hypotension patients (48%). Elderly patients were at higher risk of deaths with an overall mortality of 35% compared to 23% of adults. Conclusion: The study reports that initial combination of hypotension-bradycardia had higher mortality rate. Specific precautions in prehospital care should be given to trauma patients with these findings. Further prospective study in detail should be considered for exploring this abnormality. INTRODUCTION The assessment of vital parameters helps to understand the physiological response and their appropriateness to injury response in trauma patients' injuries and depending on the status of compensation helps to decide the management strategy. [1][2][3][4] Traditional vital signs (systolic blood pressure [SBP], heart rate [HR], and respiratory rate) can be measured noninvasively and has been routinely used for initial assessment of trauma patients. [5][6][7] This study takes a review of on admission blood pressure (BP), HR among trauma patients and their impact on in-hospital mortality. MATERIALS AND METHODS Data considered from prospective, observational, multicenter trauma registry -Towards Improved Trauma Care Outcomes (TITCO) from four Indian City Hospitals. TITCO data collection was carried out from October 1, 2013, to July 24, 2014. [8] Patients with a valid recording of HR and BP on admission are considered for this study. Only patients with age more than 18 years were included in this study. HR records were further grouped as tachycardia, bradycardia and normal for the readings below 60 beats/min (bpm), above 100 bpm and 60-100 bpm respectively with units' bpm. Similarly, SBP recorded were grouped as hypertension, hypotension, and normal BP for respective reading SBP values of below 90 mmHg, above 140 mmHg, and 90-140 mmHg. Overall survival of trauma patient was compared with SBP and HR groups. TITCO registry patient's details were available till discharge or death, during admission. Admission HR and SBP was then compared with mortality. Data analysis and statistical analysis Data were analyzed using SPSS version 24.0 (SPSS Inc., Chicago, IL, USA) for Windows and Microsoft Excel version 2016. 
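As a concrete illustration of the SBP and HR grouping rules above and of the HR-versus-SBP cross-tabulation described in the next paragraph, here is a minimal pandas sketch. The column names (sbp, hr, died) and the tiny example data are hypothetical; the study itself used SPSS.

```python
# Illustrative sketch (the study used SPSS) of the vital-sign grouping and the
# HR-versus-SBP mortality cross-tabulation; column names and data are hypothetical.
import pandas as pd

def categorize(df):
    out = df.copy()
    # SBP: <90 hypotension, 90-140 normal, >140 hypertension (integer mmHg readings)
    out["bp_cat"] = pd.cut(out["sbp"], bins=[-float("inf"), 89, 140, float("inf")],
                           labels=["hypotension", "normal", "hypertension"])
    # HR: <60 bradycardia, 60-100 normal, >100 tachycardia (integer bpm readings)
    out["hr_cat"] = pd.cut(out["hr"], bins=[-float("inf"), 59, 100, float("inf")],
                           labels=["bradycardia", "normal", "tachycardia"])
    return out

def mortality_table(df):
    # percentage of deaths in each HR x SBP cell
    tab = df.pivot_table(index="hr_cat", columns="bp_cat", values="died",
                         aggfunc="mean", observed=False)
    return (tab * 100).round(1)

patients = pd.DataFrame({"sbp": [85, 120, 150, 70], "hr": [55, 80, 50, 120],
                         "died": [1, 0, 1, 0]})
print(mortality_table(categorize(patients)))
```

The logistic regression mentioned below can be run on the same frame, with the two categorical variables as predictors and the death indicator as the outcome.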
A contingency table of HR versus SBP was prepared; in each category, patients were expressed as percentages. Separate estimates were shown for patients who died and those who survived. Primary and secondary variables under consideration were analyzed to estimate statistical parameters including mean, standard deviation, and percentages. Logistic regression analysis was performed with pulse rate and BP as independent variables and survival outcome as the dependent variable. P < 0.05 was considered statistically significant. RESULTS Of 16,047 patients in the TITCO registry, 10,200 adult patients (>18 years) with valid records of HR and SBP were considered for analysis. Overall mortality was 24% (2438/10,200) among the patients under consideration for this study. Males accounted for a larger share of trauma patients (82%) than females (18%). The overall mortality was 23% and 28% for males and females, respectively. 75% (7657/10,200) of trauma patients had normal HR and SBP on admission; overall mortality among them was 20% (1526/7657), while among the remaining 25% (2543) of patients the overall 30-day mortality was 36% (912/2543). The highest risk was associated with hypotension-bradycardia patients, with 80% mortality, followed by 58% (n/d) in hypertension-bradycardia patients and 48% (n/d) in tachycardia-hypotension patients [Table 1]. At least one normal record of either HR or BP was associated with a 30-day mortality of 29%-44% (n/d). Abnormality of both HR and BP on admission was associated with 56% (n/d) mortality, while at least one abnormal value was associated with 35% (n/d). Among trauma patients with normal BP and pulse, the overall mortality was 20% (1526/7657). Elderly patients (>60 years) were at higher risk of death, with an overall mortality of 35% (n/d) compared to 23% (n/d) for adults. However, adults constituted 90% (n/d) of total trauma victims while the elderly constituted 10% (n/d). A total of 12 patients had a combination of hypertension (BP >140 mmHg) and bradycardia (HR <60 bpm) [Tables 2 and 3]. All of these patients were adult males. All sustained blunt injuries except one, who had a penetrating injury to the chest wall and mediastinum. Six patients were intubated; three patients underwent neurosurgical intervention, evacuation of mass lesions and decompressive craniotomy. DISCUSSION It has been suggested that routine physiological parameters can be used to predict mortality in patients with traumatic brain injuries, [9][10][11] however, the predictive value of these vital signs has been questioned in many studies. [5][6][7] BP and other vital signs such as HR and respiratory rate have been shown to play an important role in the initial evaluation of trauma patients. [9][10][11] SBP has been shown to be an important clinical marker of trauma severity and is included in many trauma scoring systems. [12] An SBP of 120 mmHg has been considered normal in the adult population, [13,14] and a value <90 mmHg is considered hypotension. [12,15] Many studies have shown that hypotension as well as hypertension are associated with reduced survival in trauma patients, [10,16] particularly increased short-term mortality. [17] An SBP of 110 mmHg is associated with a significant increase in mortality, and every further 10 mmHg reduction in SBP increases the risk of mortality by 4.8%.
[18] Hypotension can be a reflection of decompensation of the physiological mechanisms (30%-40% loss in blood volume) and needs to be considered a late finding. [19] Cushing's triad or Cushing reflex Classically, the "Cushing reflex" or "Cushing's triad" has been described as the presence of hypertension, bradycardia and abnormal breathing in a patient with raised intracranial pressure. [20] In animal studies, it has been shown that the occurrence of the bradycardia is preceded by initial tachycardia and hypertension. [21] The Cushing reflex is the body's mechanism to increase the mean systemic arterial pressure and thus to restore an approximately normal blood flow to the cerebrum. [22,23] Kalmar et al. reported that the occurrence of the Cushing reflex depends on the underlying pathology and may take hours, days or months (depending on the rate of progression of the pathology), and almost all patients have bradycardia and hypertension at presentation. [23] We noticed the combination of bradycardia and hypertension in 12 cases (11 cases of head injury and one case of penetrating chest injury), and none of them had respiratory abnormalities. A possible mechanism of bradycardia and hypertension is irritation of the vagus nerve in the mediastinum. [24] The present study helps to understand the pattern of BP and HR in a subgroup of the trauma population in India. It also highlights bradycardia patients as a relatively high-risk group. A series of reports has suggested that BP offers prognostic information relevant to trauma-related injuries. [12,10,16,18] The present study reports the associative risk seen in trauma patients based on data collected from a trauma registry in the Indian setting. It also highlights the prevalence of hypertension and bradycardia with altered respiration in patients with head injury, thereby suggesting the occurrence of Cushing's reflex. There is a reported association between respiratory rate (RR) and traumatic brain injury. [5,6] However, we did not identify any significant relationships involving RR. Possibly, the patients in this data set with respiratory depression were intubated early, and thereafter their RR was under the control of the caregivers, not dependent on the patient's own depressed respiratory drive. [9] Limitations This study does not take into account the details of prehospital treatment or the details of resuscitative measures (intravenous fluids). [7] There were also no details of the medications received by the patients, for example, for pain and analgesia; however, it has been reported that these medications may not hamper the BP and HR findings as they have no adverse effect on the cardiovascular response. [25] Another limitation of the present study is that we do not have details of whether these patients had a history of hypertension or were receiving any medications. CONCLUSION The present study reports that patients with bradycardia and hypo- or hypertension had higher mortality percentages. The study supports the view that understanding initial altered physiological findings might be helpful in prognosticating the outcome in patients who present to the emergency room for trauma care. To better understand the relationship between admission HR and admission BP there will be a need for a well-planned prospective study. Financial support and sponsorship This study was funded by grants from the Swedish National Board of Health and Welfare and the Laerdal Foundation for Acute Care Medicine.
Conflicts of interest

There are no conflicts of interest.
Prophylactic Use of Pentoxifylline and Tocopherol in Patients Undergoing Dental Extractions Following Radiotherapy for Head and Neck Cancer

Background: In head and neck cancer patients undergoing radiotherapy, osteoradionecrosis (ORN) of the jaw is one of the major but uncommon complications. Satisfactory results have been observed when treating ORN patients with newer treatment modalities such as combination therapy with pentoxifylline and Vitamin E (PVe). It is believed that these agents can be used prophylactically in patients undergoing dental extractions to lower the risk of developing ORN. With this in mind, we planned the present study to assess the prophylactic role of pentoxifylline and tocopherol in patients who require dental extractions after radiotherapy for cancer of the head and neck. Materials and Methods: A total of 110 patients who had received radiotherapy for cancer of the head and neck were included in this retrospective study. After radiotherapy, a total of 450 dental extractions were performed in these 110 patients. Results: External beam therapy was given in 92.72% of the patients; 7.27% received intensity-modulated radiotherapy, and 40% received a combination of chemotherapy and intensity-modulated radiotherapy. ORN developed in only 2 patients. Patients had taken PVe for a mean of 12 (24) weeks preoperatively and 14 (18) weeks postoperatively. The incidence was lower than that normally associated with dental extractions in irradiated patients. Conclusion: In patients undergoing dental extractions after radiotherapy to the head and neck region, combination therapy with pentoxifylline and tocopherol is sufficiently effective.

Introduction

The treatment of head and neck cancers remains a major challenge to medical practitioners because of the varied nature of clinico-histological patterns, sites of origin, natural history, and the varied treatment modalities involving extensive, delicate, and sometimes repeated surgeries, radiotherapy, and chemotherapy. Most patients require adjuvant therapy in addition to surgery, concurrent chemotherapy, or palliative treatment for head and neck malignancies. Although radiotherapy has been proved to increase cure rates, the irradiated patient is susceptible to secondary effects and a series of potential orofacial complications. One of the most serious of these complications is osteoradionecrosis (ORN). [1] ORN is a condition of nonvital bone in a site of radiation injury. ORN can be spontaneous, but it most commonly results from tissue injury. The absence of reserve reparative capacity is a result of the prior radiation injury. [2] Symptoms can include pain, trismus, bad breath, difficulty with mastication, deglutition, and/or speech, dysgeusia, dysesthesia or anesthesia, pathologic fracture, and local, spreading, or systemic infection. It is common in the posterior mandible, and an incidence rate of 11% has been reported. [3]
In an irradiated area, such as after radiotherapy of the head and neck, a source of infection such as periodontal disease or pulpal exposure leads to delayed wound healing, resulting in ORN. [4] ORN is therefore common in patients undergoing extraction after radiation therapy, and proper care should be taken to prevent it. A newer treatment in the form of pentoxifylline and Vitamin E (PVe) has been introduced: pentoxifylline acts as a tumor necrosis factor (TNF) inhibitor, while tocopherol scavenges free radicals generated during oxidative stress and protects cell membranes against lipid peroxidation. The combination of these two drugs has proved to act as synergistic antifibrotic agents. [5] This article aims to demonstrate the role of pentoxifylline and tocopherol in patients who require dental extractions after radiotherapy for cancer of the head and neck.

Materials and Methods

This study was conducted in the Oral and Maxillofacial Surgery Department from 2010 to 2015. Ethical permission from the Institutional Ethical Committee was obtained before the commencement of the study, and consent was taken from all the patients involved. The study included a total of 110 patients who had previously undergone radiotherapy for head and neck cancer. After careful clinical and radiographic examination, 450 unrestorable teeth, root stumps, and periodontally weak teeth of these 110 patients were extracted. Treatment was started following injection of 2% lidocaine/1:800,000 adrenaline. Patients were put on a standard regimen of pentoxifylline 400 mg twice daily and tocopherol (Vitamin E) 1000 IU daily, ideally starting 1 month before extraction and continuing postoperatively until the socket had healed properly. To analyze the results, patients were categorized as having a high, moderate, or low risk of developing ORN after dental extraction. Extractions on the same side as the primary tumor and in a direct line of the radiation beam, for example, the lower right first molar in a patient with a squamous cell carcinoma (SCC) of the right tonsil, were considered high risk. Those on the contralateral side to the primary tumor in an area in line with the radiation beam, for example, the lower left first molar in a patient with SCC of the right tonsil, were considered moderate risk. Those in an area distant from the site of the primary tumor but still within the radiation field, for example, a posterior mandibular extraction in a patient with SCC of the larynx, were considered low risk. In patients who required multiple extractions, we used the classification of the tooth with the highest risk. All results were recorded and analyzed.

Results

Of the 110 patients who underwent extractions after radiotherapy of the head and neck region, 70 were male and 40 were female. In total, 290 mandibular teeth and 160 maxillary teeth were extracted for the various reasons cited in Table 1; radiation caries was the main reason for extraction, followed by apical periodontitis and periodontal disease. All 110 patients had received radiotherapy: one hundred and two (92.72%) underwent external beam therapy and 8 (7.27%) had intensity-modulated radiotherapy, whereas 44 (40%) patients also received a combination of chemotherapy and intensity-modulated radiotherapy. Patients who have undergone radiotherapy are at high risk of developing ORN.
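A minimal Python sketch of the risk classification described in the Methods above, assuming a simplified representation in which each extraction records whether it lies on the same side as the primary tumor and whether it lies in line with, or merely within, the radiation field; the data structure and function names are illustrative, not taken from the study.

from dataclasses import dataclass

@dataclass
class Extraction:
    same_side_as_tumor: bool      # e.g., lower right first molar with SCC of the right tonsil
    in_line_with_beam: bool       # in a direct line of the radiation beam
    within_radiation_field: bool  # distant from the primary site but still irradiated

def orn_risk(e):
    # Classify a single extraction site per the scheme in the Methods section.
    if e.in_line_with_beam and e.same_side_as_tumor:
        return "high"
    if e.in_line_with_beam and not e.same_side_as_tumor:
        return "moderate"
    if e.within_radiation_field:
        return "low"
    return "outside field"  # not covered by the article's three categories

def patient_risk(extractions):
    # For multiple extractions, the tooth with the highest risk defines the patient's class.
    order = ["outside field", "low", "moderate", "high"]
    return max((orn_risk(e) for e in extractions), key=order.index)

# Example: right-tonsil SCC with one ipsilateral and one contralateral molar extraction.
print(patient_risk([
    Extraction(True, True, True),    # high risk
    Extraction(False, True, True),   # moderate risk
]))  # -> "high"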
These patients were categorized as high, moderate, or low depending on the associated level of risk. When patients had multiple extractions, all teeth were classified according to the tooth at highest risk [Table 2]. The time interval from radiation to extraction was evaluated in patients who had extractions within 1 year, more than 2 years, and more than 5 years after radiotherapy: 6% developed ORN in the 1st year, 12% at more than 2 years, and 16% at more than 5 years after radiotherapy. Antibiotics were given preoperatively in 40 patients and postoperatively in 70 patients. Fifty patients were put on a single antibiotic such as penicillin, while the remaining 60 patients had dual antibiotics such as penicillin and metronidazole. The mean (standard deviation) duration of PVe was 12 (24) weeks preoperatively and 14 (18) weeks postoperatively.

Discussion

ORN is one of the serious complications of radiation therapy. The hallmark of the disease is the presence of exposed bone in an irradiated area which fails to heal within a period of 3 months. [6] Risk factors which increase the chances of developing ORN include:
• A dose of external beam radiation above 50 gray units
• Delivery of a large dose in a comparatively short period
• Irradiation of less vascularized parts such as the posterior mandible
• Any surgical procedure performed after radiotherapy. [7]
In ORN, there is hypocellularity, hypoxia, and hypovascularity. Damage to the microvasculature results in initial hyperemia followed by endarteritis, thrombosis, and obliteration. [8] Several protocols have been proposed to reduce the risk of ORN after dental extractions, of which the two most commonly quoted are antibiotic prophylaxis and hyperbaric oxygen therapy (HBOT); a regimen of thirty HBOT dives to 2.4 atmospheres for 90 min has been proposed. [9] We therefore planned the present study to assess the prophylactic role of pentoxifylline and tocopherol in patients who require dental extractions after radiotherapy for cancer of the head and neck. In the present study, only one patient developed ORN. In the study conducted by Marx et al., [10] ORN developed less frequently among the 37 patients in the hyperbaric oxygen group than in the antibiotic group, in which 11 of 37 patients were affected. Nabil and Samman [11] estimated a rate of 4% for the development of ORN after HBOT and 6% after antibiotic treatment. In reality, however, HBOT is not practical, as it requires 30 sessions in a compression chamber, each lasting 90 min, and a further series of 10 sessions is also usually required after the extraction. Patel et al. in 2016 reported the following effects of pentoxifylline: [12]
1. It raises intracellular cyclic adenosine monophosphate, activates protein kinase A, inhibits tumor necrosis factor (TNF) and leukotriene synthesis, and reduces inflammation and innate immunity; it also improves red blood cell deformability, reduces blood viscosity, and decreases the potential for platelet aggregation and thrombus formation
2. Pentoxifylline exerts an anti-TNF-α effect
3. It increases erythrocyte flexibility, vasodilates, inhibits inflammatory reactions in vivo, inhibits human dermal fibroblast proliferation and extracellular matrix production, and increases collagenase activity in vitro [2]
4. Pentoxifylline and its metabolites improve blood flow by decreasing its viscosity.
Tocopherols are a class of organic chemical compounds consisting of various methylated phenols, many of which have Vitamin E activity.
Its vitamin activity was identified in 1936 as a dietary fertility factor in rats. The functions of tocopherols are as follows: [13]
1. It scavenges the reactive oxygen species generated during oxidative stress that escape the activity of in vivo antioxidant enzymes, thereby protecting cell membranes against lipid peroxidation
2. It partly inhibits transforming growth factor-beta and procollagen gene expression, thus reducing fibrosis.
The combination of pentoxifylline and tocopherol has been proven effective in both the prevention and the treatment of ORN by Delanian and Lefaix. [13] The two drugs act synergistically and have potent anti-fibrotic action. In line with the fibroatrophic theory of the pathogenesis of ORN, this drug combination reduces fibroatrophic changes in tissues and enhances wound healing by stimulating defective osteoblasts. [14] In this study, patients were put on a standard regimen of pentoxifylline 400 mg twice daily and tocopherol (Vitamin E) 1000 IU daily, 1 month before extraction and postoperatively until the socket had healed. Delanian et al. [15] treated 18 patients with ORN using pentoxifylline and tocopherol, with (8 patients) and without (10 patients) the addition of clodronate (a bisphosphonate that inhibits bone resorption by reducing the number and activity of osteoclasts). Complete healing of mandibular ORN was seen at … The present study showed that 6% of patients developed ORN in the 1st year, 12% at more than 2 years, and 16% at more than 5 years after radiotherapy. This consistent increase in the risk of ORN over time has also been reported by Nabil and Samman, [11] who demonstrated that an incidence of 8% at 1 year after radiotherapy rose to 16% after 2 years. Only 10 (9%) patients subjected to PVe in our study demonstrated adverse effects such as nausea, headache, and gastric irritation. Patel et al. [12] reported that 7% of their patients could not tolerate one or other of these medications. The literature has also revealed a few side effects of these drugs, which include dyspepsia, nausea, headache or vertigo, asthenia, hot flushes, epigastralgia, and allergy in some patients. [12][13][14][15] Rice et al. reported that for the treatment of advanced cases of ORN, surgical treatment, including microvascular reconstructive techniques, remains the only reliable option available. [16] The efficacy of HBOT in treating ORN varies considerably, as reported by Costa et al. [17] Gevorgyan et al. advocated a conservative approach for treating early cases of ORN and radical resection for more advanced cases. [18]

Conclusion

For the prevention of ORN, pentoxifylline and tocopherol can be used safely as a prophylactic measure in head and neck cancer patients undergoing extractions. These are newer drugs that are readily available, well tolerated, pain diminishing, and cost-effective. However, large-scale studies are required to substantiate the results.

Financial support and sponsorship

Nil.
Imaging modalities for delivery of RNAi therapeutics in cancer therapy and clinical applications

Technologies for theranostic nanomedicines are discussed. We designed and developed bioresponsive and fluorescent hyaluronic acid-iodixanol nanogels (HAI-NGs) for targeted X-ray computed tomography (CT) imaging and chemotherapy of MCF-7 human breast tumors. HAI-NGs were obtained with a small size of ca. 90 nm, bright green fluorescence, and high serum stability from hyaluronic acid-cystamine-tetrazole and reductively degradable polyiodixanol-methacrylate via nanoprecipitation and a photo-click crosslinking reaction. This chapter presents an overview of the current status of translating RNAi cancer therapeutics into the clinic, a brief description of the biological barriers in drug delivery, the roles of imaging with respect to administration route, systemic circulation, and cellular barriers in the clinical translation of RNAi cancer therapeutics, and a discussion of safety concerns. Finally, we focus on imaging-guided delivery of RNAi therapeutics in preclinical development, including the basic principles of different imaging modalities and their advantages and limitations for biological imaging. With a growing number of RNAi therapeutics entering the clinic, various imaging methods will play an important role in facilitating the translation of RNAi cancer therapeutics from bench to bedside.

Introduction

In the development of RNAi-based therapeutics, imaging methods provide a visible and quantitative way to investigate the therapeutic effect at the anatomical, cellular, and molecular levels; to noninvasively trace the distribution; and to study the biological processes in preclinical and clinical stages. These capabilities are important not only for therapeutic optimization and evaluation but also for shortening the time from drug development to market. Typically, imaging-functionalized RNAi therapeutic delivery, which combines nanovehicles and imaging techniques to study and improve biodistribution and accumulation at the tumor site, has been progressively integrated into anticancer drug discovery and development processes. There are three major scientific challenges in developing RNAi therapeutics for cancer therapy (Figure 1): (1) target gene selection for RNAi and gene vector development for either systemic or local administration; (2) product screening and preclinical evaluation including the relative efficacy, biodistribution, pharmacokinetics, and toxicity; and (3) clinical study of the efficacy, safety, pharmacokinetics, and optimal dose. It has been proven that noninvasive imaging methods and biomarker detection can speed up the development of RNAi therapeutics [39][40][41]. Developing powerful imaging techniques and methods is important for providing valuable information by visualizing, characterizing, and quantifying the biological processes of RNAi therapeutics and for monitoring their therapeutic effects and the factors that are crucial for the optimization of RNAi cancer therapeutics. Generally, in vivo imaging of RNAi refers to the utilization of a variety of imaging modalities, including bioluminescence imaging (BLI) [42], photoluminescence imaging [43,44], magnetic resonance imaging (MRI) [45], positron emission tomography (PET) [46], single-photon emission computed tomography (SPECT) [47], and ultrasound [48], to quantitatively and/or qualitatively visualize the in vivo behavior of RNAi therapeutics.
Many imaging modalities have been applied in RNAi research. Therapeutic genes have been directly or indirectly labeled with a variety of imaging contrast agents to make them detectable in biological systems with clinically relevant imaging equipment. These probe-labeled systems with targeted delivery functions hold high promise for studying the pharmacological properties of RNAi therapeutics in clinical translation. This chapter highlights the current status of RNAi-based cancer therapeutics in clinical trials with discussion of selected cases.

[Figure 1: The workflow of developing RNAi therapeutics and translating them for clinical applications. There are four main steps: target gene selection, compound screening, clinical evaluation, and market release. Imaging methods can assist the development of RNAi therapeutics at every step, including marking for gene selection, tracing the small RNA sequences for pharmacokinetic studies, evaluating the gene-silencing efficacy, and diagnosing tumors and monitoring the therapeutic effects.]

Before addressing the current progress in applying various imaging techniques to RNAi therapeutics development, we will discuss the biological barriers and challenges for developing RNAi therapeutics and emphasize the design considerations of non-viral gene delivery vehicles for targeting tumors. Finally, the perspective of applying various imaging techniques for developing RNAi cancer therapeutics is described in more detail.

RNAi cancer therapeutics in clinical trials

The significant advantages of the RNAi strategy are its high specificity together with the essentially unlimited choice of genes that can be targeted for cancer therapy. RNAi therapeutics can interfere with angiogenesis, metastasis, chemoresistance of tumors, and the proliferation of cancer cells [49,50]. These intrinsic features make RNAi a novel strategy for cancer treatment, and a myriad of RNAi therapeutics is under investigation for this purpose. Recently, around 10 RNAi-based cancer therapeutics have entered early-stage clinical trials (Table 1), demonstrating the potential of RNAi to achieve specific gene-silencing efficacy for cancer treatment. Although the ongoing clinical trials provide encouraging results for future commercial success, many obstacles remain before RNAi therapeutics can be applied in humans. Here, we highlight the current status of two selected cases in clinical evaluation. The siRNA-loaded lipid nanoparticles (Atu027) have been applied clinically to suppress the expression of protein kinase N3 (PKN3) [63]. PKN3 is an effector of the PI3K pathway related to the modulation of cell growth, differentiation, survival, motility, and adhesion, as well as immune cell function and glucose transport. The PI3K pathway is chronically activated in various human cancers, and its inhibition can suppress the growth of malignant cells [64]. Although numerous signaling molecules could be considered as therapeutic candidates to mediate the PI3K pathway, their upstream inhibition could trigger signal cascades with undesirable regulation of normal cells and associated side effects. For this reason, PKN3 is considered an appropriate effector to control the growth of metastatic cancer cells with activated PI3K [65]. Results from animal studies indicate that Atu027 can effectively knock down the expression of the PKN3 gene in the vascular endothelium to inhibit tumor growth and metastasis [63].
In clinical study of Atu027, no interferon response or activation of cytokines was observed, which may be due to liposomal encapsulation of siRNA with enhanced safety while avoiding triggering side reactions during circulation [66]. Thus, Atu027 was well tolerated, and no dose-dependent toxicity was observed. Recent study of using Atu027 to treat advanced solid tumors has found that up to 41% of patients exhibited no further progression of tumors after eight weeks of treatment [67]. Since Atu027 targets tumor stroma instead of tumor cells, it is expected that this treatment will be effective for all type of vascularized metastatic cancers. Another clinical trial product that targets kinesin spindle protein (KSP) and vascular endothelial growth factor (VEGF) for treating solid tumors is ALN-VSP02. KSP is a type of motor protein that plays a central role in the proper separation of emerging spindle poles during mitosis, and it is upregulated in many types of cancer cells. Therefore, KSP is another attractive target for cancer therapy by RNAi therapeutics, as silencing its expression will lead to cell cycle arrest at mitosis through the formation of an abnormal mitotic spindle and inally inducing apoptosis [68]. VEGF, which is involved in angiogenesis and lymphangiogenesis, is overexpressed in numerous cancer types [69,70]. Blocking the expression of VEGF is expected to inhibit angiogenesis and suppress tumor growth [71]. The dually targeted RNAi drug (ALN-VSP02) is in clinical trial aimed for treating solid tumors ( Figure 2a). In the irst-in-human study, the clinical activity, safety, and pharmacokinetics of ALN-VSP02 were evaluated, demonstrating good systemic tolerance and acceptable toxicity with biweekly intravenous administration. The results also demonstrated that the expression levels of both target genes were decreased in multiple patients, including one patient with complete regression of liver metastases from endometrial cancer (Figure 2b,c) [11]. Although ALN-VSP02 has achieved some success in clinical trials, some interesting results should be noticed. The infusion-related reactions (IRR) of ALNVSP02 seem to be complementmediated, not cytokine-induced. Additionally, it was found that ALN-VSP02 could cause spleen toxicity with prolonged dosing. The patient with endometrial cancer had a more than 50% decrease in blood low (Ktrans) and a 90% reduction in spleen size [11]. Based on the above observations, some improvements are suggested for future development of Stable nucleic acid lipid particle (SNALP)-based cancer siRNA therapeutics to improve the safety. It includes using more ef icacious and less immunogenic lipid components and reducing the size of SNALP-based drug delivery systems to be small enough to improve the circulation time and bene it the accumulation in solid tumors through the enhanced permeability and retention (EPR) effect. Finally, the biodegradability of SNALP should be optimized to reduce the spleen toxicity, which was induced by lipid accumulation along the endosomallysosomal pathway. To date, owing to the low stability, poor systemic distribution, possible side reactions, and low bioavailability of naked siRNA, most siRNA based therapeutics in clinical trials are formulated by encapsulation within lipid-based particles. These early trials hold great potential for cancer therapy, especially allowing RNAi to be precisely tailored in each case. However, many challenges still need to be solved for personalized cancer therapy by RNAi to become a reality. 
First, RNAi delivery systems with high ef iciency and low toxicity are needed for early (Phase I) clinical trial. Then an adequate knockdown of target genes needs to be achieved in later trials of Phase I to Phase II. Besides, possible offtarget effects in normal tissues should be avoided for safety consideration (Phase II). The positive or negative correlation of target knockdown with tumor regression should be analyzed during Phase II to verify the feasibility for Phase III study. Finally, for future clinical applications, the distribution, metabolism, and degradation of siRNA loaded nanocarriers need to be extensively studied. Biological barriers for RNAi cancer therapeutics Although several siRNA-based therapeutics have been evaluated in early phase clinical studies, there are various biological barriers that need to be conquered for successful clinical translation, such as ef icient delivery of RNAi therapeutics to tumors after reaching the circulation, overcoming the vascular barrier, cellular uptake, and endosomal escape (Figure 3). Because many target sites are not accessible or not convenient for local administration, thus the systemic administration of RNAi therapeutics is essential. To fully exploit the therapeutic potential of RNAi therapeutics, developing effective and biocompatible gene delivery systems that could speci ically target cancer cells in the body is a key factor. Generally, gene vehicles with the diameter ranging from 10 to 100 nm are suitable for systemic administration. These gene vectors are needed to be stable enough and well dispersed in blood. Besides, gene vectors decorated with targeting moieties could facilitate speci ic cellular uptake by cancer cells, a strategy that is important to evade the innate immune stimulation in the body [72]. In such cases, noninvasive imaging tools can be utilized to study the pharmacokinetic properties of RNAi therapeutics and monitor their effects in a visible way. In order to accelerate the screening process, imaging techniques with adequate spatial resolution and sensitivity for small animals should be used for determining the temporal and spatial bio distribution of the developed compounds. This may help reduce irrational costs and allow the selection of the most promising candidates during the early stage of drug development [73]. Administration barrier: Administration through oral route is obviously the most convenient approach for patients. However, it is currently challenging for treating cancers through the oral administration of RNAi therapeutics due to dif iculties in accessing tumor sites, including poor intestinal stability and insuf icient permeability across intestinal epithelium into circulation [34]. Subcutaneous administration is another route for the systemic delivery of RNAi therapeutics. The drugs can reach the circulation directly fromthe interstitial space of subcutaneous connective tissue to the capillaries by traversing through the vascular barrier or through lymphatic drainage. Compared to intravenous administration, the sustained entry of drugs into circulation through subcutaneous route can achieve almost complete absorption without irst-pass effect in the liver. A clinical example of RNAi therapeutics using subcutaneous administration to treat transthyretin-mediated amyloidosis is ALN-TTRsc, which could achieve approximately 90% knockdown of the transthyretin gene expression in liver in a phase I study [74]. 
For the subcutaneous administration of RNAi therapeutics, the lipophilicity and size of gene vectors have to be taken into account to avoid endocytosis by phagocytic cells in subcutis and lymph node drainage, which subsequently in luence the potency of RNAi therapeutics. For further details on designing desirable properties of siRNA delivery system, please refer to the following reviews [17,19,[75][76][77][78][79]. The most direct way for RNAi therapeutics to reach blood circulation is intravenous or infusion injection. Currently, several RNAi products for systemic administration have entered clinical evaluation for cancer treatment (Table 1). In order to identify and validate gene vector candidates for speci ic administration route of siRNA delivery during preclinical stage, in vitro and in vivo optical imaging can be assisted as a fast and inexpensive method for compound screening by visualizing and evaluating the biocompatibility, stability, absorption, and distribution in a live subjects in a real-time manner, as well as biological interactions at subcellular level. Vascular barrier: Passing through the endothelium of vasculature is a key step for RNAi therapeutics. Depending on the vascular permeability of target organs or tissues, a certain half-life of RNAi therapeutics in plasma is required. A successful gene silencing in the liver primarily bene its from the discontinuous sinusoidal capillaries, as the large openings in the endotheliumcan greatly access the leaked RNAi nanocarriers from the vasculature. Such pores in sinusoidal capillaries are wide openings for both passive and active passages of RNAi nanocarriers up to 100 nm in size from bloodstream to hepatocytes in liver. Tumor capillaries are discontinuous, with considerable variation of cell composition, the basement membrane, and pericyte coverage. Kobayashi, et al. suggested that four important factors should be considered when passive targeting is involved [80]: (1) internal and external blood low of tumor, (2) tumor vascular permeability, (3) structural barriers enforced by extracellular matrix and tumor cells, and (4) intratumoral interstitial pressure. These factors can certainly in luence the accumulation of certain sized nanoparticles or molecules in tumor tissue by EPR effect [81]. In order to take advantage of the EPR effect in tumor tissues, or receptor mediated transcytotic pathway through vascular endothelium [34], longer half-life for RNAi therapeutics is necessary. In contrast to sinusoidal or tumor capillaries, the fenestrated capillaries have much smaller pores (60-80 nm in diameter) in endothelium covered with continuous basal lamina, which can prevent the diffusion of large-scale nanoparticles. They are mainly located in the endocrine glands, intestines, pancreas, and glomeruli of kidneys. The tightness, shape of the pores, continuous basal lamina, and extracellular matrix should be considered for designing non-hepatic tissues-targeted RNAi-based formulations and delivery strategies. The stability of RNAi therapeutics in blood circulation is of primary importance for arriving tumor tissues. A major challenge is to evade the phagocytic uptake by mononuclear phagocyte system (MPS) in the bloodstream [82]. The formulation size, surface electrostatic nature, and stability can certainly affect the uptake by MPS. 
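As a minimal illustration of the size constraints discussed above, the following Python sketch checks a candidate nanocarrier diameter against the approximate cut-offs quoted in the text (openings in liver sinusoidal capillaries admitting particles up to ~100 nm, fenestrated capillary pores of roughly 60-80 nm). The function and thresholds are illustrative simplifications, not a validated model of extravasation.

def extravasation_routes(diameter_nm):
    # Rough screen of which capillary types a nanocarrier of a given hydrodynamic
    # diameter might traverse, using the approximate pore sizes quoted in the text.
    # Real extravasation also depends on charge, shape, deformability, and the
    # EPR-related factors listed above.
    routes = []
    if diameter_nm <= 100:
        routes.append("liver sinusoidal capillaries (openings up to ~100 nm)")
    if diameter_nm <= 80:
        routes.append("fenestrated capillaries (pores ~60-80 nm)")
    # Tumor capillaries are discontinuous and heterogeneous; passive accumulation
    # via the EPR effect is size-dependent but has no single sharp cut-off.
    routes.append("tumor capillaries (EPR-dependent, heterogeneous)")
    return routes

for d in (50, 90, 150):
    print(d, "nm ->", extravasation_routes(d))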
[Figure 3: Schematic illustration of nanocarrier-mediated delivery of RNAi therapeutics. The biological barriers to gene silencing in cancer treatment include reaching the circulation, crossing the vascular barrier, cellular uptake, and endosomal escape. Following systemic administration in patients, the RNAi therapeutics are transported to the blood vessels in tumor tissues. The gene vectors can escape from the sinusoidal and fenestrated capillaries and be retained in the tumor regions. The gene vectors can be taken up by cancer cells through endocytosis or ligand-mediated intracellular transport and then transferred into the cytoplasm to silence target genes for cancer therapy.]

Formulations are more likely to interact with the MPS and other components in the bloodstream if they possess a large size and excessive net charge [83]. Thus, formulations are generally designed with minimal net charge through modification with hydrophilic and neutral molecules to increase their stability. Introducing a poly(ethylene glycol) (PEG) shell on the surface of RNAi vectors is a general approach for stabilization. However, the high stability achieved by PEGylation can reduce uptake by target cells [84], which can be addressed by attaching targeting moieties to RNAi therapeutics [85]. The vascular endothelium is negatively charged because heparan sulfate proteoglycans are present on the cell surface and in the extracellular matrix. Additionally, tumor vascularity is controlled by oxygen supply and certain metabolites [86]. It has been observed that increasing tumor vascularity can result in more efficient delivery of RNAi agents to solid tumors [87], and it has been reported that nanoparticles with smaller size and increased lipophilicity can lead to a higher level of accumulation in tumor tissues [88]. To evaluate the delivery efficiency and in vivo behavior of potential RNAi therapeutics, quantitative functional and molecular imaging can be integrated to provide valuable information concerning pharmacokinetic and biodistribution properties in animal models, such as quantitative visualization of compound absorption and distribution in tissues and organs, elimination time after administration, and the amount of compound reaching target tissues. Usually, the quantitative imaging assessment can be performed in a real-time, whole-body, and noninvasive way by selecting appropriate imaging techniques. In this approach, the compound is often labeled with specific imaging probes corresponding to the imaging modality. A great benefit of this approach is that the animals can be imaged repeatedly for longitudinal studies, which minimizes the number of animals needed for a given experiment [89].

Cellular barrier: Cellular uptake is critical for successful delivery of RNAi therapeutics into the cytoplasm for gene silencing. The cell membrane is composed of a hydrophobic phospholipid bilayer embedded with various functional proteins. The innately negatively charged cell surface provides an external biological barrier to naked siRNA molecules. Small cationic peptides, cationic lipids, and polymers have often been applied to facilitate the cellular uptake of siRNA molecules via endocytosis [90]. However, for targeted delivery of RNAi therapeutics into cancer cells, receptor-mediated endocytosis is generally preferred, and various ligands have been used for targeted delivery of RNAi therapeutics, including folate [91], transferrin [92], and aptamers [93].
Those ligands could speci ically interact with receptors overexpressed on the surface of cancer cells to promote cellular uptake. Pros and cons of using various targeting ligands for siRNA therapeutics have been reviewed elsewhere [94]. Once entering into target cells, the success in approaching RNA induced silencing complex (RISC) in cytoplasm is largely dependent on endosomal escape. It has been suggested that endosomal escape should occur before late endosomes fuse with lysosomes, which contains certain digestive enzymes [95]. For cationic polymers, which could enhance endosomolysis via absorbing protons and preventing the acidi ication of the endosomes, the elevated in lux of the protons to endosomes increases osmotic pressure that causes lysosome swelling and rupture, eventually releasing the gene vectors to cytoplasm [96,97]. Similarly, ionizable lipids with neutral charge in bloodstream can become positively charged in endosomes, subsequently leading to disruption of the endosome membrane [17]. Elucidating the mechanism of endosomal escape pathways can help develop new strategies for gene delivery without relying on acidi ication. For investigations of cellular uptake and intracellular traf icking, molecular/cellular imaging is an essential tool to demonstrate the ligand-receptor interaction, subcellular translocation, and mechanism of potential RNAi therapeutics. Fluorescent probes and radioactive isotopelabeled drug candidates are commonly used for wholebody and subcellular tracking to provide quantitative and qualitative information of biological processes occurring at cellular and molecular levels. Immune response and safety: The key issue in the application of RNAi therapeutic is the safety without any undesirable side effects. Unwanted silencing of target genes in normal organs or tissues is called "off-target silencing" [38,98]. The siRNAs longer than 30 bp can induce the interferon pathway [36]. Such induction of interferon reaction is caused by the innate immune system because the human body recognizes long dsRNA as virus particles and triggers the innate immune system to overcome the infection. Several investigations showed that even low concentration of siRNAs can induce natural immunity by activating interferon expression [99]. It is suggested that the main mechanisms of immune response caused by some siRNAs are the stimulated production of proin lammatory cytokines through TLR-8 on monocytes and TLR-7 on dendritic cells in a sequencedependent manner [36,100]. In order to overcome this problem, the therapeutic siRNAs should be less than 21-23 bp [36]. Chemicalmodi ications including 2′-O-methylation are also performed to avoid immune activation of siRNAs. Hence, immunostimulatory effects of potential siRNA-based therapeutics must be evaluated in animals prior to the clinical trial until the exact mechanism of sequence-dependent siRNA-induced immune response is fully understood [101]. In addition to siRNA-induced immune activation, the formulation of siRNA-based nanoparticles also plays an important role in safety pro ile for systemic delivery. Presently, the collective results of siRNA SNALP-based nanoparticles (ALN-VSP), cationic liposome/lipoplexes (Atu027), and cyclodextrin-based polymers (CALAA-01) have shown RNAi ef icacy and dose tolerability in early clinical development. Based on these valuable outcomes and lessons learned, several additional key investigations are suggested here. 
(1) The quality control assay for RNAi products must be performed to prevent structure alteration during the practical use. (2) A better understanding of proin lammatory cytokine response is needed for developing more potent siRNA-based nanoparticles with less acute immunostimulatory events. The side effects could be re lected in the change of pathological states or pathological indicators, such as in lammation, enzyme levels, and other biological factors. (3) Imaging techniques could be applied to study the safety of RNAi therapeutics. Once the safety evaluation of a potential RNAi therapeutic has been established in early trial, molecular imaging can assist in the establishment of biological activity at appropriate dosage range with acceptable toxicities associated with the detection, diagnosis, evaluation, treatment, and management of cancer. Imaging modalities in the RNAi cancer therapeutics development process Noninvasive imaging techniques are important tools to visualize and quantify the biological processes of RNAi therapeutics at cellular or tissue levels in a real-time way. Imaging-guided delivery of RNAi therapeutics can trace the pathway of RNAi therapeutics inside the body, provide pathological information of tumors, evaluate the tumor targeted delivery, and provide further information about the pharmacokinetics of the RNAi therapeutics. Optical imaging: Optical imaging, mainly including luorescence imaging and bioluminescence imaging (BLI), which provides the most convenient way for preclinical study of RNAi knockdown due to their advantages including abundant choices of optical dyes, easy labeling, noninvasiveness, multi-channel imaging function, and wholebody real-time readout. Besides, the excellent sensitivity and inexpensive use of optical imaging (with maximum penetration depth of few centimeters [102,103]) make it a promising modality for real-time monitoring of siRNA delivery in small animals. For luorescence imaging, the animals are illuminated by a light at an appropriate wavelength to excite the luorescent agents in situ, and then the emitted light from luorescent agents is iltered and detected by a charge-coupled device (CCD) camera maintained at low temperature ( Figure 4a) [104]. Since both the excitation and emission lights used are low-energy photons, it is considerably safer than other imaging systems that involve ionizing radiation. However, the use of relatively low energy light results in limited tissue penetration due to the light absorption by tissue components, which makes it virtually impossible for deep tissue imaging in large animals or human subjects. Moreover, there are other intrinsic limitations for consideration, for example, the light scattering phenomenon during the light propagation in tissues can result in a blurry luorescence image [105]. For in vivo optical imaging, auto luorescence is an undesired background signals emitted by natural luorophores in tissues, which can overlay with luorescent signal from optical probes, and consequently reduce the signal-to-noise ratio [105]. Near-infrared luorescent (NIRF) agents are generally applied for labeling siRNA therapeutics, as NIRF emits light in the range of 650-900 nm with deep tissue penetration by minimizing the tissue absorption, scattering, and auto luorescence [106]. For instance, VEGF siRNAs labeled with cyanines dye (Cy5.5) were conjugated to PEG through disul ides to form micelles through the interaction with PEI [107]. 
With Cy5.5, the VEGF siRNA-loaded micelles could be traced for delivering siRNA into prostate cancer cells (PC3), and the accumulation of Cy5.5-labeled siRNA, siRNA/ PEI mixture and micelles could be detected in tumors and the main organs also could be monitored after administration to PC-3 tumor-bearing mice. The luorescent dyes could also be utilized to study the intracellular pathway of RNAi therapeutics. By image-based analysis of lipid nanoparticles (LNPs) incorporating A647-labeled siRNA, it was observed that the LNPs could enter cancer cells through clathrinmediated endocytosis and macropinocytosis, while the ef iciency of siRNA escape from endosome to cytosol was low (1-2%) and the endosome escape only occurred when the LNPs were located in the compartment sharing early and late endosomal characteristics [108]. In addition, luorescence imaging could provide multi-channel imaging functions by labeling RNAi therapeutics with multiple dyes. For instance, the Cy3 and Cy5-labeled siRNAs were covalently linked to PEG to formnanocarrier-like loop, which could generate signals for dual imaging of the products inside cancer cells based on the luorescence resonance energy transfer (FRET) of the luorophore pair [109]. Furthermore, the RNAi therapeutics in blood circulation, leakage from blood to tumor tissues as well as distribution in tumor tissues can be monitored in a real-time manner with intravital confocal laser scanning microscopy (IVRTCLSM). For instance, the Cy5-siRNAincorporated polymeric micelles and naked siRNA (Cy5-siRNA) could be traced in blood circulation to study their pharmacokinetics in vivo, to investigate their entry from blood to tumor tissues, as well as their distribution in tumor tissues by utilizing IVRTCLSM (Figure 4b,c,d) [110][111][112]. Besides organic luorescent dyes, other nanomaterials, such as quantum dots (QDs) [113], carbon dots (C-dots) [114], and up-converting nanoparticles (UCNPs) [115], are also used for optical imaging as well as for siRNA delivery due to their high signal intensity and photostability. Speci ically, tumorspeci ic, multifunctional siRNA loaded QDs were prepared to induce downregulation of the expression of epidermal growth factor receptor variant III (EGFRvIII), which plays an important role in interfering with the proliferation of various types of cancer cells, while the uptake of siRNA-QDs was monitored by luorescence imaging [116]. Compared to QDs, the C-dots show advantages of better biocompatibility, lower cost, and easier preparation and are considered as a potential alternative to QDs [117]. The C-dots surface coated with alkyl-PEI2k could effectively deliver siRNA and transfect against ire ly luciferase (fLuc) with inhibited expression of luciferase gene in 4 T1-luc cells, while maintaining their biocompatibility and luorescence properties [118]. The UCNPs have gained increasing attention in recent years as a new generation of biological luminescent nanoprobes for cell labeling and optical imaging. It offers many advantages, such as deep penetration, low background auto luorescence, and high resistance to photo-bleaching, thus providing a promising optical imaging way for monitoring the siRNA transfection [119,120]. In contrast to luorescence imaging, BLI does not require an external light illumination on the living subjects because the bioluminescence is produced by oxidation reaction of luciferin with catalytic assistance of luciferase enzyme in the body. 
Various luciferase genes can be introduced into biological systems via transfection techniques and expressed with corresponding luciferase enzymes (Figure 5a). In this technique, there is no interference from auto luorescence and endogenous bioluminescence. Thus, BLI can provide a higher signal sensitivity and better signal-to-noise ratio than luorescence imaging. By exposing RNAi therapeutics to cancer cells or injecting to tumor-bearing animals, the expression of bioluminescent reporters and luorescent proteins, such as luciferases, the green luorescent protein (GFP), and red luorescent protein (RFP) [121], could be tested to evaluate the gene-silencing ef icacy of those products. This could be helpful in forecasting their interfering ability for screening the most effective siRNA therapeutics and the best formulations before further test in human. In one study, Kay's group demonstrated the feasibility of monitoring siRNA delivery and assessing silencing effect by in vivo BLI (Figure 5b,c) [122]. In another study, by co-injection of luciferase plasmid and synthetic luciferase siRNA, the silencing effect in a variety of organs was monitored through BLI [123].Moreover, BLI was applied to assess the silencing of the activity of P-glycoprotein (Pgp), a multidrug resistance (MDR1) gene product overexpressed in multidrug-resistant cancer cells by using short hairpin RNA interference (shRNAi) [124]. The shRNAi-mediated downregulation of Pgp activity at cellular level or in animal models could be directly traced by BLI of Renilla luciferase (rLuc) reporter through its substrate, coelenterazine, which is also a known as substrate for Pgp transportation. Furthermore, the in vivo gene-silencing activity of luciferase siRNAs incorporated in calcium phosphate (CaP) nanoparticles was tested in a fLuc-expressing human cervical cancer cell line (HeLa-Luc). These nanoparticles were tested in transgenic mice (FVB/ NJc1 female mice) with spontaneous pancreatic tumors by measuring the bioluminescence intensity with IVIS ® after intraperitoneal injection of luciferin [125]. Recently, the kinetics of siRNA-mediated gene silencing combined with BLI was studied for the assessment of the best approach for gene silencing [126], for simulating/predicting the effective siRNA dose based on luciferase knockdown in vitro, and for studying the kinetics of luciferase knockdown by RNAi therapeutics in subcutaneous tumors and their effects. GFP and its derivatives have also been widely utilized for imaging in vitro gene silencing of siRNA in numerous studies [127,128]. For instance, silica-gold nanoshells were covalently decorated with epilayer of poly(L-lysine) peptide (PLL) on the surface to load single-stranded antisense DNA oligonucleotides or double-stranded short-interfering RNA (siRNA) molecules with NIR laser irradiation-triggered release of gene segments and endosomal escape [128]. The gene-silencing ef icacy was evaluated by measuring the downregulation of GFP in human lung cancer H1299 GFP/ RFP cell line. Cancer cells and tumor-bearing animals were used to study the gene-silencing effects for screening the RNAi products and serve as a real-time tool to investigate the ef icacy of siRNA delivery in preclinical studies. Understandably, the application of these technologies in humans is limited because of need of reporter gene transfection. 
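A minimal sketch of how gene-silencing efficacy is typically quantified from bioluminescence (or fluorescence) readouts such as those described above: reporter signal in treated samples is normalized to untreated controls, optionally after correcting for cell number or a co-expressed second reporter. The photon counts and normalization scheme below are illustrative assumptions, not values from the cited studies.

def percent_knockdown(treated_signal, control_signal, treated_norm=1.0, control_norm=1.0):
    # Knockdown (%) of a luciferase/GFP reporter relative to control.
    # Optional normalization factors (e.g., cell viability or a co-expressed
    # Renilla luciferase) correct for differences in cell number or transfection.
    treated = treated_signal / treated_norm
    control = control_signal / control_norm
    return 100.0 * (1.0 - treated / control)

# Hypothetical photon counts from an IVIS-type readout (arbitrary units).
print(percent_knockdown(treated_signal=2.1e5, control_signal=1.4e6))   # -> 85% knockdown
print(percent_knockdown(treated_signal=2.1e5, control_signal=1.4e6,
                        treated_norm=0.8, control_norm=1.0))            # viability-corrected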
PET and SPECT: PET and SPECT are nuclear imaging techniques that use radioactive tracers and detect gamma rays to provide information of molecular signatures at molecular level within living subjects ( Figure 6). Both imaging systems could afford excellent penetration depth in tissues and high sensitivity for whole-body imaging. For PET imaging, the speci ically radiolabeled imaging agents are required for targeting and visualizing organs or tissues of interest. Imaging of radioactive tracer is achieved by detecting the high-energy photons (gamma ray) emitted from the radioactive isotopes during the spontaneous radioactive decay. More speci ically, the nucleus of speci ic radioactive isotope undergoes a beta plus (β+) decay due to its unstable nuclear system, while an excessive proton is converted into a neutron, a positron, and an electron neutrino [129]. Based on electron-positron annihilation, the collision between electron and positron produces two gamma ray photons, traveling at opposite directions at approximately 180° from each other [130,131]. The gamma ray has ten times higher energy than X-ray, and large number of emitted paired photons from radioactive isotopes detected by gamma cameras provide angular and radial distance information from regional interest [131]. These features enable high signal sensitivity and reconstruction of quantitative tomographic images. Owing to their highly quantitative and sensitive nature, radionuclide imaging techniques have been utilized for analyzing the pharmacokinetics and bio distribution of RNAi therapeutics [132]. Relatively short half-lived isotopes such as 18F (t1/2 = 109.8 min) and 64Cu (t1/2 =12.7 h) are frequently used as radioactive tracers to label siRNA molecules or drug delivery systems for PET imaging. In one recent study, core/ shell-structured hollow gold nanospheres (HAuNS) were developed as a targeted NIR light-inducible delivery system for nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) targeting siRNA [133]. By conjugating 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) derivatives to the surface of nanospheres and labeling with 64Cu, the HAuNS were applied for micro-PET/computed tomography (CT) imaging. The PET/CT images indicated that targeted HAuNS showed higher accumulation in tumors than nontargeted nanocarriers in HeLa cervical cancer bearing nude mice after intravenous injection (Figure 7). 18F-labeled siRNA has also been investigated using PET to measure the pharmacokinetics and bio distribution of siRNA delivery systems [134]. For example, Oku et al. used Nsuccinimidyl 4-18F-luorobenzoate (18F-SFB) to label siRNA for real-time analysis of siRNA delivery [135]. PET images revealed that naked 18F-labeled siRNA was cleared quite rapidly from the blood stream and excreted from the kidneys. However, the cationic liposome/18F-labeled siRNA complexes tended to accumulate in the lung. There is an urgent need to develop facile and ef icient 18F-labeling methods for PET imaging of RNAi because most traditional 18F-labeling strategies are time-consuming with low yield. Single-photon emission computed tomography (SPECT) is similar to PET by utilizing radioactive materials that decay through the emission of single gamma rays ( Figure 6). By comparison, SPECT scans are signi icantly less expensive than PET since the cyclotron is not required to generate short half-life radioisotopes [136]. 
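For the isotope half-lives quoted above, the activity remaining after a delay follows standard exponential decay, A(t) = A0 · (1/2)^(t / t1/2). A small Python sketch using the half-lives given in the text (18F ≈ 109.8 min, 64Cu ≈ 12.7 h); the delay times between injection and imaging are arbitrary examples.

def remaining_fraction(t_elapsed_h, half_life_h):
    # Fraction of the initial radioactivity left after t_elapsed_h hours.
    return 0.5 ** (t_elapsed_h / half_life_h)

isotopes = {"18F": 109.8 / 60.0, "64Cu": 12.7}  # half-lives in hours (from the text)
for name, t_half in isotopes.items():
    for delay in (1.0, 4.0, 24.0):  # example delays between injection and imaging
        print(f"{name}: {remaining_fraction(delay, t_half):.4f} of activity left after {delay} h")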
SPECT uses isotopes with longer half-lives or isotopes obtained from generator elution, such as 111In (t1/2 = 2.8 days), 99mTc (t1/2 = 6 h), 123I (t1/2 = 13.3 h), and 131I (t1/2 = 8 days), to provide information about localized function in internal organs together with a view of the distribution of the radionuclides. However, because the emission of single gamma rays cannot provide sufficient spatial information for tomographic reconstruction, a special instrumental design for data acquisition is required, and the sensitivity of SPECT can be over 1 order of magnitude lower than that of PET [136]. In a recent study, siRNA was modified with hydrazinonicotinamide (HYNIC), a chelator for technetium-99m (99mTc), to monitor siRNA at the cellular level by gamma counting and microautoradiography [137]; in addition, the delivery process and biodistribution in tumor-bearing mice were assessed by whole-body imaging. Merkel et al. also employed SPECT to monitor the biodistribution and pharmacokinetics of siRNA labeled with a gamma emitter (e.g., 111In/99mTc) [138]. In the real-time perfusion investigation, rapid accumulation of the gamma emitter-labeled siRNA in the liver and kidneys was observed, followed by an increasing signal in the bladder.

[Figure 6: (a) Basic principle of in vivo PET imaging. The radioactive isotope-labeled RNAi therapeutic is injected into animals. Positrons emitted from the isotopes associate with electrons, causing annihilation and the production of two gamma (γ) rays. The two high-energy γ rays travel at 180° from each other, are received by the detector array as electrical signals, and are finally converted into tomographic images. (b) Basic principle of in vivo SPECT imaging. The radioactive isotope-labeled RNAi therapeutic is administered into the mouse and emits γ rays. The γ rays produced by isotopes in SPECT do not travel in opposite directions; instead, they are collected by a detector array that rotates around the animal, while any diagonally incident γ rays are filtered out by a collimator. The γ rays received by the detector array are converted and reconstructed into tomographic images.]

Quantification of scintillation counts in the regions of interest (ROI) revealed that the half-life of the siRNA complexes in the blood pool is less than 3 min, suggesting very rapid excretion into the bladder. Once the siRNA is labeled with radioactive probes, it can be used in noninvasive perfusion, kinetics, and biodistribution evaluation. This offers the advantage of real-time live imaging and investigation at various time points in the same animal, reducing the number of animals needed compared with conventional methods.

MRI: MRI is an important and versatile technique that provides noninvasive imaging based on the principle of nuclear magnetic resonance (NMR), using a strong magnetic field and radiofrequency (RF) pulses to generate an RF signal (relying on intrinsic physiological features) for visualization (Figure 8a). Specifically, an atomic nucleus consists of a number of protons and neutrons, each of which has a constant spin and produces angular momentum, which consequently leads to a net angular momentum in the nucleus. If there is an equal number of protons and neutrons in the nucleus, the net angular momentum is zero; if the numbers are unequal, the nucleus has a specific net spin angular momentum.
In the latter circumstances, the nuclear Larmor precession is gained when an external magnetic ield is present, and the resonant absorption of RF pulses by nucleus will occur when the frequency of RF pulses equals to the Larmor precession rate. Finally, the RF signal is generated after the removal of external magnetic ield [139]. In this regard, the 1H nucleus is particularly useful for MRI since it is abundant in aqueous physiological environment and is magnetically active to give a large magnetic moment to generate RF. However, the RF signal can only be detected from an excess of nuclei with spins aligned either parallel or antiparallel direction, an equal number of nuclei spins pointing in opposite direction cannot generate detectable MR signals [129]. Therefore, MRI is limited by low sensitivity with long signal acquisition time. Nonetheless, MRI has a number of unique advantages including high spatial resolution, deep tissue penetration, and excellent soft tissue contrast. MRI has been widely used in the clinic to study the anatomy as well as function of tissues. In addition to the development of high ield scanners, the design of contrast agents (CAs) plays an important role to improve the image quality by enhancing the contrast of diseased regions while sparing normal tissues. Generally, the CAs could be classi ied as T 1 and T 2 CAs due to their magnetic properties and relaxation mechanisms (Figure 8b). Super paramagnetic iron oxide nanoparticles (SPIONs) have the ability to decrease the spin-spin relaxation time for T 2 -weighted imaging of speci ic tissues. There are several types of SPIONs approved as contrast agent's for MRI in the clinic [140]. Recently, Mok, et al. designed and synthesized a pH-sensitive siRNA-loaded nanovector based on SPIONs. The SPIONs were modi ied with PEI, a commonly used gene transfection macromolecule, through acid-cleavable citraconic anhydride bonds, and coated with anti-GFP siRNA and tumor-speci ic ligand, chlorotoxin (CTX) [141]. The nanovectors exhibited excellent magnetic property for MRI with a signi icantly higher r2 (673mM−1 s−1) than the commercial available T 2 contrast agents (e.g., Feridex). More interestingly, the nanovectors did not elicit obvious cytotoxicity at pH 7.4, but exhibited signi icant cytotoxicity at pH 6.2 as a result of acidic environment elicited cytotoxicity, which may be caused by the protonation of the primary amine at low pH. Meanwhile, the gene-silencing effect under acidic pH condition was signi icantly higher than that under physiological pH condition, because the surface of nanoparticles was nearly 3 times more negatively charged at pH 7.4 than that at pH 6.2. In another study, the formulation of polyethylene glycol-graft-polyethylenimine (PEG-g-PEI)coated SPIONs were prepared, which was further modi ied with neuroblastoma cell speci ic disialoganglioside GD2 single-chain antibody fragment [142]. The nanocarriers could deliver Bcl-2 siRNA to cancer cells and knock down the expression of Bcl-2 mRNA. In addition, effective delivery of siRNA was con irmed through the in vitro and in vivo MR imaging studies. Besides T 2 CAs, paramagnetic compounds, such as gadolinium and manganese-based compounds, can elevate the relaxation potential by reducing the T 1 relaxation time, which are widely applied as T 1 contrast agents. 
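The effect of a contrast agent on relaxation is commonly summarized by its relaxivity: the observed relaxation rate is 1/T_i = 1/T_i,0 + r_i · [CA], where r_i is the relaxivity and [CA] the agent concentration. A small Python sketch using the r2 value quoted above for the SPION-based siRNA nanovector (673 mM−1 s−1); the baseline tissue T2 and the concentrations are illustrative assumptions, not figures from the cited study.

def observed_t2_ms(baseline_t2_ms, r2_per_mM_per_s, conc_mM):
    # Observed T2 (ms) in the presence of a T2 contrast agent,
    # using 1/T2 = 1/T2_0 + r2 * [CA] with T2 converted to seconds internally.
    rate = 1.0 / (baseline_t2_ms / 1000.0) + r2_per_mM_per_s * conc_mM
    return 1000.0 / rate

R2_NANOVECTOR = 673.0   # mM^-1 s^-1, value quoted in the text for the SPION-based nanovector
BASELINE_T2_MS = 80.0   # assumed tissue T2; not from the text

for conc in (0.01, 0.05, 0.1):  # assumed iron concentrations in mM
    print(f"[CA] = {conc} mM -> T2 = {observed_t2_ms(BASELINE_T2_MS, R2_NANOVECTOR, conc):.1f} ms")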
Recently, a nanoplex was self-assembled from fluorescein isothiocyanate (FITC)-labeled siRNA-chk duplexes and rhodamine-labeled PEI, in which the PEI segments were linked to poly-L-lysine (PLL) dual-labeled with Cy5.5 and Gd-DOTA, while the PLL end carried the prodrug enzyme bacterial cytosine deaminase (bCD), which converts the nontoxic prodrug 5-fluorocytosine (5-FC) into cytotoxic 5-fluorouracil (5-FU), for imaging-guided RNAi cancer therapy [143]. Labeling the nanoplex with different types of CAs made it possible to follow, by MR and optical imaging, both the delivery of the siRNA and the function of the prodrug enzyme in breast tumors for image-guided and molecularly targeted cancer therapy. For instance, high-resolution T1-weighted MR images and a quantitative T1 map of the tumor region were obtained by MRI, and the contrast in the tumor region was enhanced after administration of the nanoplex (Figure 8c), with a change in T1 value resulting from accumulation and diffusion of the nanoplex; this demonstrated successful delivery of siRNA into tumor tissue, which was also confirmed by optical imaging. The delivered siRNA could downregulate the activity of choline kinase-α (Chk-α), an enzyme associated with aggressive breast cancer cells, while bCD converted 5-FC into 5-FU, a process that could be noninvasively monitored by 1H MR spectroscopic imaging and 19F MR spectroscopy. In another study, hollow manganese oxide nanoparticles (HMONs) were exploited as a theranostic nanoplatform for simultaneous cancer-targeted siRNA delivery and MR imaging [144]. In this study, the HMON nanoparticles were coupled with 3,4-dihydroxy-L-phenylalanine-conjugated branched PEI (PEI-DOPA) through the strong affinity between DOPA and metal oxides, and further modified with Herceptin, a therapeutic monoclonal antibody, to selectively target Her-2-expressing cancer cells.

Although SPIONs have already been widely used as T2 MRI contrast agents, they still have some drawbacks, such as magnetic susceptibility artifacts and negative contrast, which limit their clinical applications [145]. The development of T1-T2 dual-modal contrast agents has therefore attracted considerable interest, because they can provide contrast for T1-weighted imaging with high tissue resolution and for T2-weighted imaging with high feasibility of lesion detection. Recently, Wang, et al. reported low-molecular-weight polyethylenimine (stPEI)-wrapped, gadolinium-embedded iron oxide (GdIO) nanoclusters (GdIO-stPEI) for T1-T2 dual-modal MRI-visible siRNA delivery, which exhibited high relaxivities in MRI measurements and suppressed the expression of luciferase proteins, demonstrating dual-mode MR imaging-guided siRNA delivery [146].

Ultrasound: Ultrasound is a widely available clinical imaging modality for evaluating the structure, function, and blood flow of organs, and it provides images with high spatial and temporal resolution at low cost (Figure 9a). Ultrasound scanners emit sound waves with frequencies between 1 and 20 MHz and receive the waves reflected by tissues, based on density differences, to build images for diagnosis [147]; compared with other imaging modalities, the images are produced in real time without a processing delay after acquisition [40]. In principle, the signal reflected from tissues alone is insufficient for precise diagnosis because of artifacts from normal tissues. Thus, ultrasound contrast agents are essential to increase imaging accuracy.
Some contrast agents have been developed to enhance the positive signal for ultrasound imaging [148-150], such as microbubbles [151-153], nanodroplets [154-157], nanobubbles [158,159], and liposomes [160,161] (Figure 9b). These are usually constructed with shells of proteins, polymers, lipids, or surfactants to maintain stability in the bloodstream and to escape the RES, while loading air or a biologically inert heavy gas such as nitrogen, perfluorocarbons, or sulfur hexafluoride to generate echogenicity (Figure 9b) [162]. In addition, solid nanoparticles with cavities that can trap gas [163] and nanoparticles constructed from gas-generating materials have also been applied to enhance contrast for ultrasound imaging [164].

Ultrasound offers several potential advantages for the development of RNAi therapeutics. First, at low acoustic pressure (<100 kPa), once the ultrasound probe reaches the tumor vasculature, ultrasound can be used to diagnose tumors for imaging-guided delivery of RNAi therapeutics and to monitor the therapeutic effects. Moreover, by applying high acoustic pressure (100 kPa to several MPa), ultrasound can be used to disrupt the probes so that the cargos (drugs or RNAs) are released at the target site, and to increase the permeability of the cell membrane so that more siRNA is delivered intracellularly for gene silencing [152]. As a result, ultrasound can enhance the therapeutic effect of RNAi therapeutics (Figure 9c) [149,165]. The siRNA molecules can be attached to the surface of microbubbles or trapped in the bilayer of liposomes, or siRNA-loaded nanoparticles can be incorporated into ultrasound probes. For instance, epidermal growth factor receptor (EGFR)-directed siRNA (EGFR-siRNA) could be efficiently attached to microbubbles, with around 7 mg siRNA per 10^9 microbubbles, while safely protecting the siRNA from RNase digestion [166]. The EGFR-siRNA-loaded microbubbles reduced EGFR expression in murine squamous carcinoma cells in vitro, and ultrasound-triggered destruction of the microbubbles released EGFR-siRNA specifically in the tumor region, effectively delaying tumor growth, while the tumor volume was monitored by ultrasound. However, microbubbles are confined to the vascular compartment and show poor tumor tissue penetration because of their large size and relatively poor stability. Therefore, much smaller ultrasound probes, such as nanobubbles, nanoparticles, and nanoscale liposomes, with better tumor tissue penetration have been fabricated for ultrasound diagnosis and ultrasound-mediated siRNA delivery with better tumor accumulation [157,167]. For instance, ultrasound-sensitive siRNA nanobubbles made from positively charged liposomes with a gas core and decorated with negatively charged siRNAs on the surface could effectively accumulate in tumor tissue through the EPR effect, demonstrating high potency for tumor imaging and targeted delivery of siRNA for RNAi therapy [168]. Moreover, high-affinity ligands have been introduced onto the surface of ultrasound nanoprobes; for example, aptamer-decorated nanobubbles have been developed to specifically target CCRF-CEM cells (T cells, human acute lymphoblastic leukemia) for ultrasound imaging [159]. Other bioactive compounds, such as anticancer drugs and plasmid DNA, could also be co-loaded into the probes for ultrasound imaging and combination therapy of tumors [160].
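The reported loading of about 7 mg siRNA per 10^9 microbubbles can be translated into copies per bubble with a short calculation. The sketch below assumes a molecular weight of roughly 13,300 g/mol for a 21-mer siRNA duplex, which is an illustrative assumption rather than a value from the cited study.

```python
AVOGADRO = 6.022e23

def sirna_copies_per_bubble(total_mass_mg: float, n_bubbles: float, mw_g_per_mol: float) -> float:
    """Convert a bulk siRNA loading into an estimated number of copies per microbubble."""
    mass_per_bubble_g = (total_mass_mg * 1e-3) / n_bubbles
    return mass_per_bubble_g / mw_g_per_mol * AVOGADRO

if __name__ == "__main__":
    # Loading quoted above: ~7 mg siRNA per 1e9 microbubbles.
    # MW ~13,300 g/mol is an assumed typical value for a 21-mer siRNA duplex.
    copies = sirna_copies_per_bubble(total_mass_mg=7.0, n_bubbles=1e9, mw_g_per_mol=13_300)
    print(f"~{copies:.2e} siRNA copies per microbubble")
```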
Multimodality imaging: Although each imaging modality has its unique advantages, it also has intrinsic limitations, making it difficult to obtain accurate and reliable information on all aspects of the structure and function of the target organs with a single imaging modality [169]. Table 2 summarizes some general features of the classical imaging modalities, including optical imaging, radionuclide imaging, MRI, and ultrasound imaging. To cope with the shortcomings of each modality, multimodality imaging combines different imaging techniques and imaging probes and can provide complementary information about RNAi therapeutics. However, in the context of developing multimodality probes, it should be noted that there are particular challenges for applications in living subjects. Because multimodality probes are primarily nanoparticle-based (e.g., organic dye-labeled iron oxides, gadolinium chelate-functionalized QDs, magnetic microbubbles, radiolabeled C-dots), the major problems are often associated with an insufficient concentration of probes at the target sites due to undesired uptake by the mononuclear phagocyte system (MPS). Other concerns include slow clearance, long-term retention in tissues and organs, and long-term toxicity. These issues are strongly related to the physicochemical properties (e.g., chemical composition, size distribution, final hydrodynamic diameter, shape, surface charge) of the nanoprobes [170]. In addition, different modalities differ in imaging sensitivity by orders of magnitude [171], so the combination of two different probes needs to be carefully designed with an appropriate ratio [172]. Therefore, to reach the full potential of multimodality imaging, the participation of multidisciplinary scientists with solid backgrounds in nanotechnology, materials science, pharmacology, pharmaceutical chemistry, clinical medicine, biomedical engineering, and instrumental techniques is essential from the early development stages [173,174].

A typical example of a multimodality probe-siRNA delivery system for in vivo imaging of RNAi was reported by Medarova, et al. [175]. In this study, a dual-purpose probe was developed, composed of an iron oxide core with Cy5.5 dye on the surface. The probe was further modified with cell membrane translocation peptides to facilitate intracellular delivery of siRNA (Figure 10a). The successful delivery of the GFP siRNA duplex to tumors was assessed by MRI and optical imaging of tumor-bearing animals after intravenous injection of the probes (Figure 10b,c). In another study, both commercial MRI contrast agents (Magnevist/Feridex) and Alexa-647 dye-labeled siRNA targeting cyclooxygenase-2 (COX-2), an important therapeutic target in cancer, were encapsulated in PEGylated polycationic liposomes, which were used to assess the delivery and silencing effects of siRNA in vivo [176]. Feridex-loaded liposomes were found to perform better than Magnevist-loaded ones and were further tested in vivo, and both MRI and optical imaging confirmed successful delivery of siRNA to MDA-MB-231 tumors. Recently, PET/CT combined with BLI was also employed to monitor the whole-body biodistribution of RNAi therapeutics and to assess their silencing of luciferase expression in vivo [177]. The nanoparticles were prepared from cyclodextrin-containing polycations and anti-Luc siRNAs whose 5′-ends were conjugated with 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) for 64Cu labeling.
Micro-PET/CT was carried out to determine the distribution and tumor accumulation of the siRNA-containing nanoparticles. No obvious difference in distribution between the targeted and non-targeted nanoparticles was observed. Meanwhile, BLI revealed that the targeted nanoparticles had a better RNAi effect 1 day after injection, demonstrating the importance of multimodal imaging. Thus, the combination of PET/CT and BLI makes it possible to simultaneously monitor both gene delivery and the silencing effect of RNAi, which is critical for the design of RNAi therapeutics for clinical translation.

Technologies on theranostic nanomedicines: Theranostic nanomedicines contain both a diagnostic agent and one or more therapeutic drugs within one integrated system, enabling noninvasive diagnosis, therapy, and real-time monitoring of the therapeutic response at the same time [178-181]. Among the various imaging techniques, computed tomography (CT) is one of the most commonly used non-invasive clinical imaging modalities because of its wide availability, high spatial resolution, unlimited imaging depth, and accurate anatomical information with reconstructed three-dimensional images [182-184]. Iodixanol (Visipaque) is a small iodinated molecule clinically used as a CT contrast agent that has low osmolality and good tolerability [185]. However, like all low-molecular-weight iodinated CT contrast agents, iodixanol has drawbacks such as non-specific distribution and rapid renal clearance following i.v. injection [186]. In recent years, nanosized CT contrast agents have attracted great interest because they have several advantages over small-molecule contrast agents, such as prolonged circulation time, site-specific accumulation, and suitability for theranostics [187-190]. Recent work has demonstrated promising nanosized CT contrast agents, such as iodinated hyaluronic acid oligomer-based nano-assembled systems, theranostic self-assembled structures of gold nanoparticles, and multifunctional dendrimer-entrapped gold nanoparticles for simultaneous tumor imaging and therapy [191-193]. Among the various types of nanoscale drug delivery systems, nanogels have attracted increasing attention because they have a large surface area for multivalent bioconjugation and a cross-linked three-dimensional network structure that offers great colloidal stability [194-196]. To achieve rapid release of the payload at the target site, pH-, redox potential-, and enzyme-responsive nanogels have been designed [197-204]. Nanogels based on hyaluronic acid (HA) have recently emerged as a unique system because HA is a hydrophilic natural material with excellent biocompatibility and intrinsic targeting ability toward CD44-overexpressing tumor cells [203,205-208]. HA nanoparticles have been used for efficient delivery of chemotherapeutics, proteins, and siRNA in vitro and in vivo [209-212]. Here, we report bioresponsive and fluorescent hyaluronic acid-iodixanol nanogels (HAI-NGs) for targeted CT imaging and chemotherapy of MCF-7 human breast tumors (Scheme 1). HAI-NGs were obtained from hyaluronic acid-cystamine-tetrazole (HA-Cys-Tet) and reductively degradable polyiodixanol-methacrylate (SS-PI-MA) via nanoprecipitation and a photo-click crosslinking reaction.
HAI-NGs were designed with the following unique features: (i) both HA and iodixanol have excellent biocompatibility and are currently used in the clinic; (ii) the "tetrazole-ene" photo-click crosslinking reaction is highly selective, which prevents cross-reactions with most drugs and, furthermore, endows the nanogels with bright green fluorescence [213,214]; (iii) HA can actively target CD44 receptors, which are overexpressed on various malignant tumor cells and stem cells [215-218]; (iv) HAI-NGs can be used for targeted CT imaging in vivo; and (v) the reduction-sensitivity of HAI-NGs allows fast intracellular release of payloads like PTX to achieve efficient and targeted chemotherapy. Tetrazole (Tet) and cystamine diisocyanate (CDI) were synthesized according to previous reports [213,219]. Herein, the stability of HAI-NGs and the reduction-triggered PTX release from PTX-loaded HAI-NGs were investigated. Furthermore, the targetability of HAI-NGs and the antitumor activity of PTX-loaded HAI-NGs toward MCF-7 cells, the pharmacokinetics and biodistribution, NIR and CT imaging, as well as the therapeutic effects in MCF-7 human breast tumor xenografts in mice were evaluated.

Scheme 1: Illustration of bioresponsive and fluorescent hyaluronic acid-iodixanol nanogels for targeted X-ray computed tomography imaging and chemotherapy of breast tumors. (a) PTX-loaded HAI-NGs are prepared via nanoprecipitation followed by crosslinking via UV irradiation; (b) PTX-loaded HAI-NGs actively target and accumulate at MCF-7 tumors, resulting in enhanced CT contrast and targeted therapy; (c) PTX-loaded HAI-NGs are selectively internalized into MCF-7 breast tumor cells via CD44 receptor-mediated endocytosis, the nanogels are de-crosslinked and disassembled in response to GSH in the cytosol, and PTX is quickly released into the cells.

Preparation of nanogels and triggered drug release: Hyaluronic acid-iodixanol nanogels (HAI-NGs) were readily obtained via nanoprecipitation and a photo-click crosslinking reaction from HA-Cys-Tet and SS-PI-MA. Figure 11A shows that HAI-NGs had a small size of about 90 nm with a low polydispersity index (PDI) of 0.11. TEM confirmed that HAI-NGs had a homogeneous size distribution and spherical morphology (Figure 11B). Notably, HAI-NGs emitted bright green fluorescence under UV light (Figure 11C inset), which derives from the pyrazoline cycloadducts produced by the "tetrazole-alkene" photo-click reaction [213,220]. Fluorescence spectroscopy showed that HAI-NGs had a strong emission at ca. 485 nm (Figure 11C). This strong fluorescence can be used to monitor the in vitro and in vivo fate of the nanogels. HAI-NGs displayed excellent stability against extensive dilution as well as 10% serum. However, in the presence of 10 mM glutathione (GSH), HAI-NGs rapidly swelled and agglomerated, supporting their fast redox-responsivity (Figure 11D).
In contrast, nearly complete PTX release was observed in the presence of 10 mM GSH under otherwise identical conditions, probably due to GSH-triggered disulfide bond cleavage and de-crosslinking of the nanogels, corroborating that drug release can be accelerated in an intracellular reductive environment. Nanogels typically suffer from low loading and fast leakage of small-molecule drugs [221]. Paclitaxel (PTX), however, could be easily loaded into HAI-NGs during nanoprecipitation. The high PTX loading and inhibited drug leakage of HAI-NGs are likely due to strong π-π interactions between PTX and the pyrazoline groups and iodixanol moieties in the nanogels [222].

Cellular uptake and cytotoxicity of PTX-loaded HAI-NGs: Given their strong fluorescence, the cellular uptake of HAI-NGs into CD44 receptor-overexpressing MCF-7 breast cancer cells could be conveniently traced by confocal laser scanning microscopy (CLSM). Notably, nanogel fluorescence was clearly observed in MCF-7 cells after 1 h of incubation, and the fluorescence became stronger at prolonged incubation times of 2 or 4 h (Figure 12) [223]. The cellular uptake of nanogels was greatly inhibited, and only weak nanogel fluorescence was discerned at the cell membrane, in MCF-7 cells pre-incubated for 4 h with free HA, demonstrating that HAI-NGs are internalized by MCF-7 cells via a receptor-mediated mechanism. L929 murine fibroblast cells, which express low levels of CD44, were also selected as a negative control; the fluorescence intensity of HAI-NGs in MCF-7 cells was much stronger than that in L929 cells, again supporting cellular uptake of HAI-NGs via a CD44-mediated mechanism. MTT assays showed that blank HAI-NGs were practically non-toxic to MCF-7 cells (>93% cell viability) even at a high nanogel concentration of 1 mg/mL (Figure 13A), indicating that HAI-NGs possess excellent biocompatibility. In contrast, PTX-loaded HAI-NGs exhibited significant, dose-dependent cytotoxicity against MCF-7 cells (Figure 13B). The half-maximal inhibitory concentration (IC50) of PTX-loaded HAI-NGs was 0.52 μg/mL, comparable to that of free PTX (0.35 μg/mL), corroborating their efficient cellular internalization and rapid intracellular PTX release. Pre-treatment of MCF-7 cells with free HA for 4 h largely reduced the cytotoxic effect of PTX-loaded HAI-NGs, in line with the CLSM observation that cellular uptake is inhibited by free HA.

In vivo pharmacokinetics, near-infrared imaging and biodistribution of nanogels: To investigate the in vivo pharmacokinetics, PTX-loaded HAI-NGs were i.v. injected into BALB/c nude mice at 5 mg PTX/kg, and the plasma levels of PTX at different time points were determined by HPLC. Figure 14A shows that PTX-loaded HAI-NGs had a prolonged circulation time, with an elimination half-life of 3.3 h [223], indicating that the nanogels are stable in the circulation and that drug leakage is low as a result of the strong π-π interactions between PTX and the pyrazoline groups and iodixanol moieties in the nanogels [222]. In comparison, free PTX was rapidly eliminated from the blood circulation, with an extremely short half-life of 0.35 h. To visualize their tumor accumulation in vivo, the near-infrared dye Cy5 was loaded into HAI-NGs, and Cy5 release in serum was evaluated. Cy5 release from HAI-NGs was slow: only 4.1% was released within 24 h. It is therefore envisaged that Cy5-loaded HAI-NGs are also relatively stable in the circulation.
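The reported elimination half-lives (3.3 h for PTX-loaded HAI-NGs versus 0.35 h for free PTX) can be put in perspective with a minimal sketch assuming simple mono-exponential elimination; the real pharmacokinetics may be multi-compartmental, so this is only illustrative.

```python
import math

def fraction_remaining(t_half_h: float, t_h: float) -> float:
    """Mono-exponential elimination: fraction of drug left in plasma at time t (hours)."""
    return math.exp(-math.log(2) * t_h / t_half_h)

if __name__ == "__main__":
    for label, t_half in [("PTX-loaded HAI-NGs", 3.3), ("free PTX", 0.35)]:
        for t in (1.0, 6.0, 12.0):
            print(f"{label}: {fraction_remaining(t_half, t):.1%} remaining at {t:g} h")
```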
Figure 14B shows real-time images of Cy5-loaded HAI-NGs in MCF-7 tumor-bearing mice. Notably, tumor accumulation of the nanogels was clearly observed at 2 h post-injection and reached a maximum at 6 h. This high tumor-targeting efficiency of the nanogels is likely due to their small size, high stability, and active targeting effect. Interestingly, the fluorescence at the tumor site at 24 h became stronger again compared with that at 12 h, probably because the Cy5 fluorescence is partly self-quenched when loaded into the HAI-NGs due to homo Förster resonance energy transfer (homo-FRET) [224]. When Cy5 molecules are partly released from the disassembled HAI-NGs in the reductive environment, the fluorescence of Cy5 in the tumor area may increase again. Besides the tumor site, strong fluorescence was also observed in the liver and spleen, probably because after i.v. injection the nanogels are also captured by the RES (mainly liver and spleen). As CD44 receptors are also expressed on liver and spleen cells, uptake of part of the nanogels by these organs may be inevitable.

To profile the in vivo biodistribution of PTX-loaded HAI-NGs in MCF-7 tumor-bearing mice, PTX levels in the tumor and in different organs at 6 and 12 h post-injection were quantitatively determined by HPLC. Notably, PTX-loaded nanogels exhibited a high tumor accumulation of 5.5% ID/g at 6 h (Figure 14C), and the tumor PTX level remained high (3.6% ID/g) at 12 h post-injection. In comparison, free PTX displayed 7- and 15-fold lower tumor accumulation than PTX-loaded HAI-NGs at 6 and 12 h post-injection, respectively (Figure 14D versus Figure 14C). The PTX levels expressed in %ID for the tissues after injection of PTX/HAI-NGs also showed a relatively high retention of PTX in tumor tissue, besides a high level in the liver and lower levels in the spleen; the levels after administration of free PTX were investigated as well [223]. The pharmacokinetics, NIR imaging, and biodistribution studies all indicate that PTX-loaded HAI-NGs have a prolonged circulation time and significantly enhance PTX accumulation in the MCF-7 tumor.

Enhanced CT imaging by HAI-NGs: HAI-NGs can be used for CT imaging owing to their high content of iodixanol. Figure 15A clearly shows that HAI-NGs effectively enhanced the CT contrast in vitro. The corresponding Hounsfield unit (HU) values exhibited a linear correlation with HAI-NG concentration, suggesting that HAI-NGs can be used for quantitative CT studies. The application of HAI-NGs for in vivo CT diagnosis was then evaluated. Interestingly, 5 min after intratumoral (i.t.) injection of 50 μL of HAI-NGs at a concentration of 15 mg/mL (i.e., 60 mg iodine equiv./kg) into MCF-7 tumor-bearing nude mice, remarkably enhanced contrast was observed at the tumor site in the three-dimensional reconstructed images, with a marked increase of the HU value from 37.2 to 182.8 (Figure 15B). It was further investigated whether HAI-NGs can be applied for targeted CT imaging of CD44-overexpressing tumors. Enhanced contrast was discerned in the MCF-7 tumor in both axial and coronal CT images, with HU values increasing from 37.0 to 82.6, at 7 h following intravenous injection of HAI-NGs into MCF-7 tumor-bearing nude mice (Figure 16). The enhanced tumor contrast further confirms that HAI-NGs can target and accumulate in the MCF-7 tumor.
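A short back-of-the-envelope check of the biodistribution and CT numbers reported above; note that the free-PTX uptake values printed here are inferred from the stated fold-differences rather than measured values.

```python
def implied_free_ptx_uptake(nanogel_id_g: float, fold_lower: float) -> float:
    """%ID/g implied for free PTX given the reported fold-difference versus the nanogels."""
    return nanogel_id_g / fold_lower

if __name__ == "__main__":
    # Values quoted above for PTX-loaded HAI-NGs vs. free PTX.
    for t_h, ng_uptake, fold in [(6, 5.5, 7), (12, 3.6, 15)]:
        free = implied_free_ptx_uptake(ng_uptake, fold)
        print(f"t = {t_h} h: HAI-NGs {ng_uptake} %ID/g -> free PTX ~{free:.2f} %ID/g")
    # CT enhancement after intratumoral injection (HU values quoted above).
    print(f"HU enhancement: {182.8 / 37.2:.1f}-fold (37.2 -> 182.8)")
```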
Notably, the high contrast signal at the tumor site lasted for a long time, which is advantageous for clinical diagnosis. In sharp contrast, little enhancement of the HU value was observed at the tumor site for iodixanol (the small-molecule contrast agent) at the same iodine dose; iodixanol was rapidly cleared from the body to the bladder. It is clear, therefore, that HAI-NGs are superior to iodixanol for targeted CT imaging of CD44-positive tumors.

In vivo tumor penetration and therapeutic efficacy of PTX-loaded HAI-NGs: In the process of tumor-targeted drug delivery, a series of biological barriers may influence the final therapeutic efficacy, among which are interstitial hindrance and tumor penetration [225,226]. The strong intrinsic fluorescence of HAI-NGs was utilized to track their distribution; the blood vessels and cell nuclei were stained with CD31 antibody and DAPI, respectively. Figure 17 reveals that HAI-NGs were located in the blood vessels at 2 h post i.v. injection. At 6 h post-injection, HAI-NGs had extravasated from the blood vessels into the interstitial space, displaying green fluorescence throughout the whole tumor. At 12 h post-injection, HAI-NGs had penetrated further into the tumor and were actively endocytosed by tumor cells, presenting bright green fluorescence around the cell nuclei. These observations suggest that HAI-NGs possess good tumor penetration ability.

The antitumor efficacy of PTX-loaded HAI-NGs was evaluated in MCF-7 tumor-bearing mice at a dose of 5 mg PTX equiv./kg. The results [223] showed that PTX-loaded HAI-NGs effectively inhibited tumor growth, significantly better than free PTX (Figure 18A). Photographs of the tumor blocks excised on day 24 further confirmed that mice treated with PTX-loaded HAI-NGs had the smallest tumor size (Figure 18B). Neither PTX-loaded HAI-NGs nor free PTX caused any change in mouse body weight (Figure 18C). Importantly, the survival curves showed that PTX-loaded HAI-NGs effectively prolonged the survival time of the MCF-7 breast tumor-bearing mice, with all mice surviving over the experimental period of 65 d (Figure 18D). In contrast, mice treated with free PTX and PBS had median survival times of 40 and 28 d, respectively. Histological analysis by H&E staining showed that PTX-loaded HAI-NGs induced widespread necrosis of tumor tissue with little damage to the healthy organs, including heart, liver and kidney (Figure 18E), supporting that PTX-loaded HAI-NGs cause low systemic side effects. In comparison, hepatocellular necrosis (red arrows) was observed in free PTX-treated mice, similar to previous reports [227,228]. It is evident that PTX-loaded HAI-NGs have low systemic toxicity and mediate efficient and targeted delivery of PTX to human breast tumors in vivo, resulting in effective suppression of tumor growth and markedly prolonged survival time.

Conclusions and perspectives: This chapter summarized the application of various imaging modalities for qualitative and quantitative assessment of RNAi therapeutics, with the aim of promoting their application in cancer therapy and translating them into the clinic. Overall, a variety of RNAi therapeutics has been developed and some of them are under clinical evaluation. Diverse imaging techniques have been applied to study the mechanism of gene transfection, the route of delivery, the systemic distribution in the body, the gene-silencing effect, and the therapeutic outcomes.
RNAi has demonstrated great potential for treating a wide range of diseases, especially multidrug-resistant cancers, which are difficult to treat with conventional chemotherapy and radiotherapy. Despite the positive progress in translating some RNAi formulations into clinical trials for cancer therapy, several issues must be kept in mind for effective translation of RNAi therapeutics from bench to bedside: (1) how to optimize the parameters and conditions so that a sufficient amount of the RNAi therapeutic accumulates in tumor tissue and is translocated into cancer cells; (2) how to select the proper target to maximize the therapeutic effect of the siRNA; and (3) how to develop new, effective gene carriers with high stability during circulation as well as controlled release of the payload nucleic acids inside cancer cells with high gene-transfection efficiency. Finally, the ideal RNAi formulation should deliver the RNAi agent specifically to tumors and transfer it efficiently into cancer cells to silence target genes with high performance for tumor suppression, while avoiding potential toxicity, side effects, and off-target silencing.

[Figure 17: Tumor penetration of HAI-NGs observed by confocal microscopy. Tumor sections were obtained from MCF-7 tumor-bearing mice at 2, 6 and 12 h following tail vein injection of HAI-NGs (10 mg/mL). The nuclei were stained with DAPI (blue) and blood vessels were stained with CD31 (red). HAI-NGs have an intrinsic green fluorescence. The scale bar represents 50 μm.]

The advances in noninvasive imaging techniques [229-234] provide new approaches to visualize and quantify RNAi in cancer therapy and to further understand the mechanisms of RNAi inside cells and in the body. So far, several imaging modalities, such as optical imaging, MRI, ultrasound, and SPECT/PET, have been applied to assess the delivery of siRNA, determine its biodistribution, and monitor the therapeutic effects, contributing significantly to the progress of RNAi therapeutics, especially in preclinical studies [235,236]. It is very important to establish proper evaluation models and systems with the aid of molecular imaging in preclinical studies and to translate the animal results into useful references for clinical trials in humans. As each imaging modality has its intrinsic advantages and limitations, there is a trend to combine different imaging modalities for better tracing of the fate of the siRNA, the delivery vehicles, and the therapeutic effects, which is expected to maximize the potential of cancer therapy with RNAi therapeutics and help overcome the barriers that block the road to clinical translation.

The rational design of molecular imaging probes is essential to accurately monitor the biological processes of RNAi therapeutics and fully realize their potency in cancer therapy. Molecular imaging probes for RNAi-based cancer therapeutics, consisting of a variety of functional nanomaterials including lipids, metals, carbons, polymers, and biologics-based nanoparticles, should be relatively safe for clinical use, with low toxicity and biodegradability in the body. Further development of new imaging contrast agents that increase the signal intensity for more reliable image analysis and carry cancer cell-specific targeting ability would improve the diagnostic selectivity and provide more detailed and accurate pathological information about the biological processes of RNAi therapeutics in cancer treatment. Furthermore, technical advances in imaging devices would enable proper patient selection and rapid translation of RNAi therapeutics into the clinic.

[Figure 18: In vivo antitumor efficacy of PTX/HAI-NGs in MCF-7 tumor-bearing nude mice. Free PTX and PBS were used as controls. The drug was given on days 0, 3, 6, 9 and 12 (dosage: 5 mg PTX equiv./kg). (A) Tumor volume changes over time; data are presented as mean ± SD (n = 6). (B) Photographs of tumor blocks collected from the different treatment groups on day 24. (C) Body weight changes of nude mice following the different treatments within 24 days. (D) Survival rates of mice in the different treatment groups within 60 d; data are presented as means ± SD (n = 5). *p < 0.05, ***p < 0.001. (E) H&E-stained heart, liver, kidney and tumor sections excised from MCF-7 human breast tumor-bearing mice following 24 d of treatment with PTX/HAI-NGs, free PTX or PBS. The images were obtained with a Leica microscope at 200× magnification. Red arrows indicate hepatocellular necrosis.]

We have demonstrated for the first time that bioresponsive and fluorescent hyaluronic acid-iodixanol nanogels (HAI-NGs) mediate targeted X-ray computed tomography (CT) imaging and chemotherapy of MCF-7 human breast tumors in vivo. Notably, HAI-NGs integrate multiple functions, including excellent biocompatibility, bright green fluorescence, high stability, superior targeting of CD44-overexpressing cells, and fast glutathione-responsive drug release. The in vivo studies clearly show that PTX-loaded HAI-NGs have a prolonged circulation time, high tumor accumulation, and enhanced tumor penetration in MCF-7 breast tumor-bearing nude mice, resulting in effective tumor growth inhibition, markedly improved survival, and reduced systemic toxicity compared with free PTX. Furthermore, HAI-NGs given by either intratumoral or intravenous injection lead to significantly enhanced CT imaging of MCF-7 breast tumors in nude mice compared with iodixanol. HAI-NGs thus provide a highly versatile and targeted theranostic nanoplatform that elegantly combines CT imaging with targeted chemotherapy toward CD44-overexpressing tumors.
2021-04-25T15:44:09.723Z
2021-03-04T00:00:00.000
{ "year": 2021, "sha1": "197e55f7dd1f698ce3d96ba849d2d26c4c44aece", "oa_license": "CCBY", "oa_url": "https://www.heighpubs.org/jro/pdf/jro-aid1035.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "197e55f7dd1f698ce3d96ba849d2d26c4c44aece", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237406637
pes2o/s2orc
v3-fos-license
Analysis of Heritability Across the Clinical Phenotypes of Frontotemporal Dementia and the Frequency of the C9ORF72 in a Colombian Population Frontotemporal dementia (FTD) is a highly heritable condition. Up to 40% of FTD is familial and an estimated 15% to 40% is due to single-gene mutations. It has been estimated that the G4C2 hexanucleotide repeat expansions in the C9ORF72 gene can explain up to 37.5% of the familial cases of FTD, especially in populations of Caucasian origin. The purpose of this paper is to evaluate hereditary risk across the clinical phenotypes of FTD and the frequency of the G4C2 expansion in a Colombian cohort diagnosed with FTD. Methods: A total of 132 FTD patients were diagnosed according to established criteria in the behavioral variant FTD, logopenic variant PPA, non-fluent agrammatic PPA, and semantic variant PPA. Hereditary risk across the clinical phenotypes was established in four categories that indicate the pathogenic relationship of the mutation: high, medium, low, and apparently sporadic, based on those proposed by Wood and collaborators. All subjects were also examined for C9ORF72 hexanucleotide expansion (defined as >30 repetitions). Results: There were no significant differences in the demographic characteristics of the patients between the clinical phenotypes of FTD. The higher rate phenotype was bvFTD (62.12%). In accordance with the risk classification, we found that 72 (54.4%) complied with the criteria for the sporadic cases; for the familial cases, 23 (17.4%) fulfilled the high-risk criteria, 23 (17.4%) fulfilled the low risk criteria, and 14 (10.6%) fulfilled the criteria to be classified as subject to medium risk. C9ORF72 expansion frequency was 0.76% (1/132). Conclusion: The FTD heritability presented in this research was very similar to the results reported in the literature. The C9ORF72 expansion frequency was low. Colombia is a triethnic country, with a high frequency of genetic Amerindian markers; this shows consistency with the present results of a low repetition frequency. This study provides an initial report of the frequency for the hexanucleotide repeat expansions in C9ORF72 in patients with FTD in a Colombian population and paves the way for further study of the possible genetic causes of FTD in Colombia. INTRODUCTION Frontotemporal dementia (FTD), a heterogeneous neurodegenerative disorder, is a highly heritable condition with reports of a positive family history in as many as 60% of cases (1,2). In order to estimate the heritability of the family history, some criteria have been standardizedfollowing the Goldman score and the one proposed by Wood and collaborators-according to the number of first-and second-degree relatives affected by FTD (3,4). These efforts suggest a disease mechanism regarding the likelihood of an identifiable genetic cause and variability across clinical phenotypes (4,5). A strong family history and higher frequency has been found in the behavioral variant of FTD (bvFTD), but less so in the semantic variant PPA (svPPA), the logopenic variant PPA (lvPPA), and the nonfluent agrammatic PPA (nfaPPA) (5)(6)(7)(8)(9). The heritability of FTD with motor neuron disease (FTD-MND), and atypical parkinsonian disorders are less clear, possibly due to the number of studies reported until today (5,10). 
However, the G4C2 (GGGGCC) hexanucleotide repeat expansions in the C9ORF72 gene is the most common genetic cause of ALS and FTD (11,12), and although the expansion mechanism is uncertain, it is suggested that the cause of disease in FTD includes "gain-of-toxicity" or reduction in function of the C9ORF72 protein (13). It has been estimated that G4C2 can explain up to 37.5% of the familial cases of FTD, in particular, in populations of Caucasian origin (14). G4C2 has also been reported as a major cause of the disease in northern Europe, mainly Finland, and in North American FTD and ALS cohorts (11,15). C9ORF72 also accounts for a significant proportion of Australian and Spanish FTD cases (16). By contrast, the C9ORF72 repeat expansion was not present or extremely rare in patients of Native American, Pacific Islander (11), Asian (17,18), and Middle Eastern countries (19), and China (20,21). Very few studies on the frequency of C9ORF2 have been carried out in Latin America. The first report was in an Argentinian population, where the expansion frequency in a FTD group was similar to that reported for patients in Europe and North America (14). In a Brazilian population (22,23), the frequencies of the mutation in pure ALS and pure FTD cases were much lower than those observed in Finnish patients (11,24), but similar to what was found for Germany (11) and Flanders-Belgium (25). There are no data as yet on the frequency and heritability of this expansion in an FTD population in Colombia (26). As such, in this study, we expect to estimate the frequency and heritability of C9ORF72 hexanucleotide repeat expansion in a group of patients with FTD diagnosis in Colombia. Population A total of 132 patients were diagnosed with FTD according to consensus criteria for bvFTD, PPA: lvPPA, nfaPPA, and svPPA (27)(28)(29), at the Memory and Aging Clinic at the Hospital Universitario San Ignacio and Pontificia Universidad Javeriana in Bogotá, Colombia. The ethnicity of our sample could not be directly verified, but all patients are Colombian, and reported to be of Hispanic origin. This study was approved by the Ethics Committee at the same institution, and written consent was obtained from all participants and their legal representatives. Pedigree Family trees of the patients with FTD diagnosis were drawn up using information provided by the patients' families and caregivers. Pedigree information was obtained using the Proband application, where at least three generations of each of the subjects were described. The heritability of the disorder was classified by a geneticist with experience in the field of neurodegenerative diseases. The classification criteria were based on those proposed by Wood and collaborators. This classification method has four categories that indicate the pathogenic relationship of the mutation: high, medium, low, and apparently sporadic. These criteria are based on the number of first-and second-degree relatives affected with the spectrum of FTD disorders or other neurodegenerative diseases (4). Gene Sequencing and Genotyping Genomic: All evaluated patients had a 3-cc blood sample taken in EDTA (ethylenediaminetetraacetic acid) tubes from which the genomic DNA was extracted using the Salting Out protocol. The DNA was then quantified using a NanoDrop R ND-1000 spectrophotometer. C9ORF72 hexanucleotide expansion (defined as >30 repetitions) was analyzed and tested with repeatprimed PCR and capillary electrophoresis as previously described (30). 
The sizes of the PCR fragments were analyzed using GeneMapper software (Applied Biosystems, Foster City, CA).

Statistics: A frequency distribution was performed taking into account the risk classification of the pedigrees together with phenotypic (sex, age, and diagnosis) and genotypic (presence of the C9ORF72 expansion) characteristics. For the statistical analysis, absolute and relative measures were obtained for qualitative data, and central tendency and dispersion measures were evaluated for quantitative data.

RESULTS: Of the 132 patients, 51.52% were males and 48.48% were females; the latter showed a lower prevalence in the low-risk group than males. The age of onset was 59 years (IQR = 12) (Table 1). The most frequent phenotype was bvFTD (62.12%), followed by non-specific PPA (18.18%), svPPA (15.90%), lvPPA (3.03%), and nfaPPA (0.75%). When categorizing by genetic risk based on the Wood pedigree classification, we found that 72 patients (54.4%) met the criteria for sporadic cases; among the familial cases, 23 (17.4%) fulfilled the criteria for high risk, 23 (17.4%) for low risk, and 14 (10.6%) for medium risk. Females and males were similarly distributed in three of the risk classification groups: apparent sporadic (40/32), medium risk (8/6), and high risk (12/11). The low-risk group included more men than women (4/19). The C9ORF72 expansion was observed in 0.76% (1/132) of the sample. The positive case is a female patient diagnosed with bvFTD. Her family pedigree was classified as a high-risk familial case (Figure 1), and the simple brain MRI with contrast revealed moderate supratentorial cortical atrophy predominantly in frontal and temporal regions.

DISCUSSION: The present results show that the Colombian FTD sample is similar to what is described in the literature regarding heritability, age of onset, and time of evolution of the disorder (31). Most of our patients exhibited bvFTD, followed by the language variants (11,32). One previous study demonstrated that bvFTD and the non-fluent/agrammatic variant of primary progressive aphasia (nfv-PPA) appear to be more heritable than the semantic variant of primary progressive aphasia (sv-PPA) (33). We observed no difference in the overall percentages of men and women in the study population, in contrast to studies of populations in Argentina, southern Italy, and Brazil, where the percentage of female patients was higher (14,23,34). However, we note that our only case with the G4C2 expansion was a woman and that our percentage of women classified as being of low heritable risk was much lower than in the other risk groups, which could support the hypothesis that female G4C2 repeat mutation carriers are more likely to develop cognitive or behavioral impairment (35). Although previous reports have found C9ORF72 expansions in non-familial cases (11), we found the expansion in only one patient, a bvFTD case from the high-risk group, for a total frequency of 0.76% (1/132). The C9ORF72 repeat expansion is therefore responsible for one FTD case, but not for all FTD diagnoses, in this Colombian cohort, indicating that there may be causes other than C9ORF72 accounting for FTD cases in Colombia. Wood and collaborators found the C9ORF72 expansion in 25/306 (8.2%) of FTD patients, with the mutation-detection rate being highest in the low-risk category and apparently sporadic cases (12,24).
This finding is consistent with prior reports of C9ORF72 expansion in sporadic families, and it coincides with findings from other studies (11,36). Although we found C9ORF72 expansion in the high-risk group, we found no other patients that fulfilled the high-risk criteria and presented the expansion, supporting the importance of performing molecular analysis of this expansion in the idiopathic forms (11,(37)(38)(39). The low frequency of the G4C2 expansion in the patient group with FTD 0.76% (1/132) is similar to what has been reported for Asian and Amerindian populations (17)(18)(19)(20)(21). There are even studies where no cases with this expansion 0/52 were identified (40). In Europe and North America, much higher frequencies have been established for the G4C2 expansion, with Finland and Sweden with overall frequencies of 29.33 and 20.73%, respectively, and Spain with 25.49%. Lower frequencies have been observed in Germany with 4.82% (41). In North America, C9ORF72 expansion accounted for almost 25% of familial FTD cases and 6% of sporadic cases (11). So far, only two studies have been conducted for the Latin American population, one in Argentina (14) where a frequency of expansion of 18.2% (6/33 cases) of patients with FTD was observed (14), and the other in Brazil, where a frequency of 7.1% (n = 67) for patients with pure familial FTD was found (23). As it was shown before, the high frequency of the C9ORF72 expansion is associated with populations of European origin (11,14). According to the human settlement hypothesis, Asian populations arriving through the Bering strait settled in North and South America, making the Amerindian populations very similar to the original ones and homogeneous with each other. This would support the absence of the C9ORF72 repetition in populations of Amerindian origin and this coincides with the results found for Amerindian groups in North America (11). The populations of European ancestry with high frequencies present similar frequencies. An example of this is the Argentine population among which frequencies similar to those of European countries have been found, corroborating the Caucasian origin of this repetition (14,42,43). Colombia is a triethnic country, made up of a population of Native American, African, and European origin. Bogotá, the capital of Colombia, has a typical multiple ancestry population, showing a high proportion of people of European ancestry, followed by Native American and African (42). The higher frequency of Amerindian genetic markers presents a coherent result with a low frequency of repetition. This study provides an initial report of the frequency of expansions of hexanucleotide repeats in C9ORF72 in patients with FTD in the Colombian population and paves the way for further study of the possible genetic causes of FTD in Colombia. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Pontificia Universidad Javeriana, Facultad de Medicina. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS AL-C and MV-R: study concept development and study design. AL-C, MV-R, and DM: testing and data collection. AL-C, MV-R, EG-C, and IZ: data analysis and interpretation. AL-C, MV-R, and IZ: manuscript drafting and provision of critical reviews. 
All authors have participated in the work and approve the final version of the manuscript for submission.
2021-09-04T13:49:24.258Z
2021-08-30T00:00:00.000
{ "year": 2021, "sha1": "12b411624ce56b06d97bb28fffc7dd6972d5cb3e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.681595/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "12b411624ce56b06d97bb28fffc7dd6972d5cb3e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
209524655
pes2o/s2orc
v3-fos-license
Sensor Classification Using Convolutional Neural Network by Encoding Multivariate Time Series as Two-Dimensional Colored Images

This paper proposes a framework to perform sensor classification using multivariate time series sensor data as inputs. The framework encodes multivariate time series data into two-dimensional colored images and concatenates the images into one bigger image for classification through a Convolutional Neural Network (ConvNet). This study applied three transformation methods to encode time series into images: Gramian Angular Summation Field (GASF), Gramian Angular Difference Field (GADF), and Markov Transition Field (MTF). Two open multivariate datasets were used to evaluate the impact of the different transformation methods, the sequences of concatenating images, and the complexity of the ConvNet architectures on classification accuracy. The results show that the selection of the transformation method and the sequence of concatenation do not affect the prediction outcome significantly. Surprisingly, a simple ConvNet structure is sufficient for classification, as it performed as well as the complex structure of VGGNet. The results were also compared with those of other classification methods, and the proposed framework outperformed them in terms of classification accuracy.

Introduction
In the era of data explosion, time series data, which is a series of data points indexed in time order, is one of the most common types of data collected. A variety of time series data can be collected from the internet, machines, devices, and sensors for all kinds of applications such as monitoring, tracking, and pattern classification. Multivariate time series (MTS) data from multiple sources can be used to represent the operating statuses of machines, or human health conditions such as electrocardiography. In smart manufacturing, building a binary classification model by a machine learning algorithm to identify defects or tool wear (normal or abnormal) from the collected time series data is also a popular approach to improve production quality [1]. Assume a time series x is a set of data points indexed in time order, x = {x(t) ∈ R : t = 1, 2, ..., T}, where T represents the length of the time series data [2]. An MTS can then be considered an m × n matrix. Generally, MTS data mining research can be categorized into: (1) representation and indexing, (2) similarity measure, (3) segmentation, (4) visualization, and (5) mining [3]. Essentially, MTS classification belongs to the "mining" category. This study proposes a framework which transforms one batch of MTS data into multiple images and concatenates them into a bigger two-dimensional image as the input of a ConvNet. The deep learning architecture of the ConvNet was then applied to extract and learn features from these images for classification purposes. Three typical methods of encoding MTS data into images, the sequences of image concatenation, and two kinds of ConvNet architectures were investigated. Two open multivariate benchmark datasets were used to evaluate the experimental results. The results show that the proposed framework can enhance the accuracy of MTS classification using a relatively simple network.
In short, we summarize the contributions of this work as follows:
• This work extends the 2-D image transformation method for MTS classification from univariate time series input to MTS inputs;
• The proposed innovative image concatenation can combine MTS data as multiple color channels as inputs of a ConvNet;
• The proposed framework can enhance the accuracy of MTS classification using a relatively simple network;
• The results show that the selection of the image transformation method and the sequence of image concatenation are not significant for classification accuracy.

The rest of the paper is organized as follows: Section 2 provides a review of MTS, data encoding methods, and ConvNet; Section 3 describes the methodologies of data transformation, image aggregation, and ConvNet hyperparameter setting; Section 4 explains the experiments and results; and Section 5 presents the conclusion and suggestions for future research.

Convolutional Neural Network (ConvNet)
In recent years, ConvNet has been widely used as the deep learning algorithm for computer vision to detect meaningful features and patterns. The concept of this framework was introduced by two neurophysiologists, Hubel and Wiesel, who were inspired by the visual cortical neurons of cats and monkeys. However, the first researchers to use backpropagation in a ConvNet were LeCun et al., who also started the new era of ConvNet [18]. Over time, many outstanding architectures were developed, such as AlexNet [19], VGGNet [20], ResNet [21], and Inception v3 [22], which achieved good results in the ImageNet Large Scale Visual Recognition Competition (ILSVRC) each year. A typical ConvNet consists of a convolutional layer, an activation function, a pooling layer, a fully connected layer, and an output layer. The convolutional layer extracts meaningful features, such as edges, color, and gradient orientation, from the input image by using a linear function; the output matrix is the result of computing the dot product as a filter slides over the input image. Activation functions play a non-linear role between the convolutional layer and the pooling layer in a ConvNet model. The Rectified Linear Unit (ReLU) has been the most popular activation function in the deep learning field in the last few years [23], although the concept was proposed as early as the year 2000 [24]. The advantages of ReLU are that it reduces the vanishing gradient problem and allows models to learn faster and perform better. The main purpose of the pooling layer is to reduce the spatial dimensions of the feature map while preserving the important information; generally, the feature map is shrunk by a factor of two or more. The max pooling method [25], which simply takes the maximum value of each patch in the feature map, is often used in the pooling layer. After passing through multiple convolutional and pooling layers, the output is flattened from a two-dimensional pooled feature map into a one-dimensional dense vector. Lastly, a ConvNet uses a feedforward neural network to compute the weights between nodes and obtain the probabilities of the different classes.
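As a concrete reference for the layer stack described above, a minimal Keras (tf.keras) sketch of a small ConvNet is given below; the input size, filter counts, and number of classes are illustrative assumptions and not the exact configuration evaluated in this study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_simple_convnet(input_shape=(128, 128, 3), n_classes=2):
    """A small ConvNet: two conv/pool stages, then a fully connected classifier."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),   # feature extraction
        layers.MaxPooling2D((2, 2)),                    # spatial downsampling
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                               # 2-D feature map -> 1-D vector
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_simple_convnet().summary()
```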
Image-Based Time Series Data
Along with the rapid development of computer vision, the idea of classifying time series data using computer vision technology emerged. Various transformation methods have been proposed to encode time series as input images for computer vision, in the hope that the two-dimensional images can reveal features and patterns not found in the one-dimensional sequence of the original time series. Two of the popular data transformation methods are the Gramian Angular Field (GAF) and the Markov Transition Field (MTF) [15].

GAF encodes a time series into an image via a polar coordinate-based matrix, and it can preserve absolute temporal correlation [26]. The original time series x is first normalized to between 0 and 1, as defined in Equation (1). Then, the angular cosine and the time stamp are used to encode the rescaled data into polar coordinates. From top-left to bottom-right, the image position corresponds to the raw time series, and the image is symmetric about the main diagonal. Owing to this characteristic, the polar coordinates can be reverted back to the raw time series through the transformation principle. GAF can generate two images using different equations: the Gramian Angular Summation Field (GASF), defined in Equations (2) and (3), and the Gramian Angular Difference Field (GADF), defined in Equations (4) and (5). The difference lies in the trigonometric functions used: GASF is based on cosine functions and GADF is based on sine functions.

MTF uses Markov transition probabilities to preserve details in the time domain [15]. MTF is composed of the Markov transition probabilities Mij of quantile bin qi moving to qj, at time stamps i and j, respectively. Suppose a time series x = {x(1), x(2), ..., x(T)} and quantiles Q = {q1, q2, ..., qj}. The size of Q determines the size of the Markov transition matrix (w). MTF is defined in Equation (6). MTF can preserve details over the temporal range. However, as the transformed matrix is formed from the probabilities of element transitions, the MTF method cannot be reverted to the raw time series data like GAF, and for the same reason it is not as symmetric as the GAF method. For both GAF and MTF, the transformed values can be represented as colors via a colormap containing the colors of a rainbow: redder colors correspond to larger values and bluer colors correspond to smaller values.

Both GAF and MTF have been applied in many studies. For example, Mitiche, et al. utilized GAF in an Electromagnetic Interference (EMI) image study to extract significant information [27]. In their work, the GAF method was combined with two feature reduction methods, the Local Binary Pattern and the Local Phase Quantization, to remove redundancy, and the Random Forest method was implemented to classify the images with promising outcomes. In addition, Sánchez and Cervera used electrocardiogram (ECG) data from the PhysioNet/CinC Challenge 2017 to detect atrial fibrillation [28]; the data were encoded into GASF images and fed into a feed-forward neural network and a ConvNet for classification. Similarly, Nagem et al. encoded the American Geostationary Operational Environmental Satellite (GOES) data into MTF images and applied a ConvNet to predict the status of solar flares [29]. In the field of financial technology, Chen et al. proposed the mean average mapping method and the double moving average mapping method to encode time series into two-dimensional images and compared them with the GAF method [30]. The images from these methods were fed into a ConvNet, and the results showed that GAF outperforms the others.
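For reference, a minimal NumPy sketch of the standard GASF, GADF, and MTF constructions follows. It uses the usual formulation (rescaling to [−1, 1], φ = arccos(x̃), GASF = cos(φi + φj), GADF = sin(φi − φj)), and the number of quantile bins is an illustrative choice rather than the setting used in the experiments.

```python
import numpy as np

def rescale(x, lo=-1.0, hi=1.0):
    """Min-max rescale a 1-D series into [lo, hi] (GAF needs values within [-1, 1])."""
    x = np.asarray(x, dtype=float)
    return (hi - lo) * (x - x.min()) / (x.max() - x.min()) + lo

def gramian_angular_field(x, method="summation"):
    """GASF = cos(phi_i + phi_j); GADF = sin(phi_i - phi_j), with phi = arccos(rescaled x)."""
    phi = np.arccos(np.clip(rescale(x), -1.0, 1.0))
    if method == "summation":
        return np.cos(phi[:, None] + phi[None, :])
    return np.sin(phi[:, None] - phi[None, :])

def markov_transition_field(x, n_bins=8):
    """MTF[i, j] = transition probability between the quantile bins of x(i) and x(j)."""
    x = np.asarray(x, dtype=float)
    cuts = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(x, cuts)                       # quantile bin index of each point
    W = np.zeros((n_bins, n_bins))
    for s, t in zip(states[:-1], states[1:]):           # first-order Markov transition counts
        W[s, t] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)    # row-normalize to probabilities
    return W[states[:, None], states[None, :]]

if __name__ == "__main__":
    series = np.sin(np.linspace(0, 4 * np.pi, 64))
    print(gramian_angular_field(series, "summation").shape,
          gramian_angular_field(series, "difference").shape,
          markov_transition_field(series).shape)
```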
To illustrate the advantage of transforming time series data into two-dimensional images, Figure 1 shows a comparison between normal and abnormal sensor data under the GADF transformation from the Wafer dataset [31]. In the Wafer dataset, each time series is labelled as normal or abnormal to identify whether the wafer process has a defect. The left side of Figure 1 shows the normal time series' sensor data and the corresponding GADF image, while the right side of Figure 1 shows the abnormal case. As can be seen, the abnormal time series has relatively low values and two obvious spikes compared with the normal one. The corresponding GADF image of the abnormal case can easily be recognized: it has a relatively lighter color with two distinct crossing lines (marked by the white circles) representing the two spikes. Therefore, the characteristics of time series data can be identified in the two-dimensional image from features such as color, points, and lines at the corresponding locations in the image. Similarly, Figure 2 shows an MTF comparison between normal and abnormal sensor data (the same time series data as in Figure 1) from the Wafer dataset. Again, the abnormal case shown on the right-hand side can be recognized by its different color mapping and the unique crossing lines caused by the relatively high values (marked by the white circles) representing the two spikes. Although GADF and MTF share this similarity, it is interesting to evaluate which transformation performs better in terms of classification accuracy.
Methodology This research proposes a framework to classify MTS data using deep learning technology. The study first applies MTF, GASF, and GADF to transform MTS data into images. Then, the transformed images are concatenated and processed by a ConvNet that identifies features in the images for classification. Basically, this framework consists of four steps: (1) dimension reduction of the time series, (2) image encoding, (3) image concatenation, and (4) ConvNet classification model training. Figure 3 shows the workflow of the proposed framework for MTS classification by ConvNet. The details of this framework are introduced in the following sub-sections.
Dimensionality Reduction Using Piecewise Aggregate Approximation (PAA) An image is composed of pixels, so it can be considered an n × n matrix, where n defines the image size. When the length of the time series data is n, the image size of any of the transformation methods is n × n [26]. As each batch of time series data can vary in length, a straight transformation of the original data into images would result in images of different sizes. Therefore, to obtain images of the same size for the ConvNet, this research applies the Piecewise Aggregate Approximation (PAA) method to reduce the dimension of the original time series data before transforming it into images [32]. Please note that applying PAA is also the conventional data preprocessing step before transforming time series into images [17]. PAA divides the original time series into N equal-length segments, where N is the length of the reduced time series and should satisfy the constraint 1 ≤ N ≤ T. Then, the mean value of each segment substitutes for the original values to reduce the dimensionality from T to N. Suppose a time series x = x(1), x(2), . . . , x(T), where T is the length of the original time series. T/N denotes the length of each segment; that is, the original time series x is divided into N segments, and the reduced time series can be denoted as x̄ = x̄(l) ∈ R : l = 1, 2, . . . , N based on Equation (7), where l is the index of the reduced time series. If N = 1, x̄ is the mean of the original time series; if N = T, x̄ is the original time series. In this research, in order to synchronize the image size, N is determined by the shortest length among the MTS. Inevitably, some information is lost on the longer time series. Although PAA reduces the dimensionality of some time series, the results show that classification can still be improved by concatenating multiple time series; more detailed information can be found in Section 4.
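A minimal NumPy sketch of the PAA step described above is given below; the segment-mean computation follows Equation (7), while the example series length and the target length N are arbitrary choices for illustration.

```python
import numpy as np

def paa(x, n_segments):
    """Reduce a 1-D time series to n_segments values by segment means (PAA)."""
    T = len(x)
    # Assign each time stamp to one of n_segments roughly equal-length segments.
    edges = np.linspace(0, T, n_segments + 1).astype(int)
    return np.array([x[edges[i]:edges[i + 1]].mean() for i in range(n_segments)])

# Example: reduce a series of length 500 to N = 128 so that every
# transformed image has the same 128 x 128 size.
x = np.sin(np.linspace(0, 20, 500))
x_reduced = paa(x, n_segments=128)
print(x_reduced.shape)  # (128,)
```

Note that when pyts' image transformers are given an image_size smaller than the series length, they perform a comparable reduction internally; the explicit step above mainly makes the preprocessing visible and lets all sensor variables share the same N.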
Time Series Data Encoding As Images In this study, a three-dimensional matrix is formed to contain the MTS. First, each time series is encoded as a color image, which has two dimensions, using the GAF or MTF method. As the image can be of any color, one more dimension is required to represent the color. For example, the image can be represented with three color channels, red, green, and blue (RGB), in which case the first dimension has three elements. Please note that more colors can be used to represent more color channels; in this work, only the RGB channels were used to evaluate the concept of the framework.

Image Concatenation MTS data transformation produces multiple images (one image for each univariate time series). These images have to be combined before being fed to the ConvNet. This study adopted the concatenating method proposed by Yang et al. [33]. For RGB image aggregation, each colored image is first separated into three monochrome images, red, green, and blue in this case, and these mono-color images are then concatenated together into a bigger image. Figure 4 illustrates the framework of concatenating RGB images. Please note that if more time series are used as inputs for classification, more 2D images will be generated accordingly. However, only three RGB channels will be constructed in this case. Basically, this design maintains the same number of input channels of the network structure, which helps keep the ConvNet network structure simple. This design is particularly convenient in domains such as anomaly detection, where the time series data can be processed on edge devices attached to a variety of sensors and the image files can then be uploaded as inputs to a ConvNet that may run in a different location, such as a cloud computing environment. There is an interesting issue regarding the "spurious edges" created by concatenating 2D images: do these spurious edges influence the classification? In order to study this issue, an experiment was designed to evaluate the sequence of concatenating the 2D images. Concatenated images with different sequences of the 2D images (i.e., different patterns of "spurious edges") are compared in terms of their classification performance. The experimental results show that the patterns of "spurious edges" do not significantly influence the classification result; the details of these experimental results can be found in Section 4.
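The following sketch shows one way the channel-splitting-and-stacking idea described above could be implemented with NumPy; the use of matplotlib's "rainbow" colormap to colorize the encoded matrices and the vertical stacking order are illustrative assumptions rather than details fixed by the original framework.

```python
import numpy as np
from matplotlib import cm

def colorize(field):
    """Map a single encoded field (e.g., a GADF or MTF matrix) to an RGB image."""
    normalized = (field - field.min()) / (field.max() - field.min() + 1e-12)
    return cm.rainbow(normalized)[..., :3]          # drop the alpha channel -> (H, W, 3)

def concatenate_rgb(fields):
    """Colorize each encoded field and stack the images vertically, so m series
    of size 128 x 128 become one (128*m) x 128 image with three RGB channels."""
    colored = [colorize(f) for f in fields]
    return np.concatenate(colored, axis=0)          # (128*m, 128, 3)

# Example with three illustrative 128 x 128 encoded fields (one per sensor).
rng = np.random.default_rng(1)
fields = [rng.uniform(-1, 1, size=(128, 128)) for _ in range(3)]
image = concatenate_rgb(fields)
print(image.shape)  # (384, 128, 3)
```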
The Architecture of a ConvNet In this study, for each time series, the size of the 2D transformed image is fixed at 128 × 128 pixels. Due to the nature of the proposed concatenation method, if m time series exist, the size of the input image for the ConvNet is fixed at (128 × m) × 128 for each monochrome channel; for RGB images, three channels are allocated. In order to assess whether the complexity of the ConvNet architecture affects the classification accuracy, two kinds of ConvNet, referred to as the simple ConvNet and VGG16, are studied in this research. VGG16, proposed by Simonyan and Zisserman, is the model that won the ImageNet Large Scale Visual Recognition Competition (ILSVRC) in 2014 [20]. For the simple ConvNet, we adopted the popular model devised by Palm [34]: two convolutional layers with a kernel size of 5 × 5, two max pooling layers with a 2 × 2 pixel window and stride of 2, and one fully-connected layer. After each max pooling, the height and width of the input are halved. The learning rate was set to 0.0023, and the rectification non-linearity was applied to all hidden layers as the activation function, based on the setting suggested in [19]. To prevent overfitting, the early stopping method was implemented following the suggestion in [35]; this method also reduces memory usage and computation time. Because VGGNet uses more layers and smaller convolutional filters to construct a deeper network structure, in this work we consider VGGNet a larger network that could be expected to classify images more accurately. This research adopted the typical VGG16, which has 13 convolutional layers with a kernel size of 3 × 3, 5 max pooling layers with a 2 × 2 pixel window, and 3 fully-connected layers. The learning rate was set to 0.00023 based on [20]. Most of the learnable parameters are used in the first fully-connected layer. The number of learnable parameters in VGG16 is 201,330,688, which is about 800 times larger than that of the simple ConvNet (251,542). Obviously, VGG16 can be expected to require more execution time and memory than the simple ConvNet.

Experiments and Results In this work, three series of experiments were conducted to evaluate the impact of: (1) the image transformation methods, (2) the sequences of concatenating images, and (3) the structural complexity of the network. As mentioned earlier, the first experiment evaluated the significance of utilizing the image transformation methods GASF, GADF, and MTF as inputs to the ConvNet. The second experiment studied the impact of the "spurious edges" generated by concatenating images: different sequences of concatenating the 2D images were evaluated to check whether the classification performance is affected by the sequence, or "spurious edges", of the concatenated images, and the performances of different random sequences were compared with each other. The third experiment focused on evaluating whether a more complicated network structure is able to further improve the classification accuracy. The MTS data were transformed by the three methods (GASF, GADF, and MTF) using the pyts package [36]. All experiments were carried out in a Python 3.6 coding environment. The deep learning frameworks were built in PyTorch 1.1. The tests were conducted on a computer with an Intel® Core i7-8700K CPU at 3.7 GHz, 64 GB RAM, a GeForce GTX Titan Xp video card, and Windows 10.
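For orientation, the sketch below shows how a simple ConvNet of the kind described above (two 5 × 5 convolutions, two 2 × 2 max-pooling layers, and one fully-connected layer) could be written in PyTorch. The channel counts and the size of the classification head are our own illustrative choices, since the exact values are not listed here, and the input is assumed to be the concatenated (128·m) × 128 RGB image with m = 6 sensors.

```python
import torch
import torch.nn as nn

class SimpleConvNet(nn.Module):
    """Two conv + max-pool blocks followed by a single fully-connected layer."""
    def __init__(self, m_series: int = 6, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),   # RGB input
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        # After two poolings the (128*m) x 128 input becomes (32*m) x 32.
        self.classifier = nn.Linear(32 * (32 * m_series) * 32, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on a dummy batch: 4 samples, RGB, (128*6) x 128 pixels.
model = SimpleConvNet(m_series=6, n_classes=2)
out = model(torch.randn(4, 3, 128 * 6, 128))
print(out.shape)  # torch.Size([4, 2])
```

Training such a model with, for example, torch.optim.Adam and the learning rate and early stopping settings reported above would complete the setup; the parameter count of this sketch differs from the 251,542 quoted for the original simple ConvNet because the channel widths here are illustrative.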
Introduction of Data Set In this study, two popular MTS datasets, which serve as benchmark datasets for binary classification of MTS data, were used to evaluate the performance of the proposed framework. The Wafer dataset was collected from six vacuum chamber sensors that monitored the manufacture of semiconductor microelectronics. The ECG dataset, in which exactly one heartbeat exists per series, was collected from two electrodes that recorded heartbeats as normal or abnormal. Both datasets were provided by Olszewski [31], and the classes of both datasets are binary (normal or abnormal). The details of these two datasets are described in Table 1. The data length can differ between batches, but within the same batch, the data length is the same for all sensor variables. As the ranges of the values collected by the multiple sensors are different, the data were normalized to between 0 and 1. The data were then smoothed using the PAA method mentioned in Section 3 before being transformed into images.

Performance Evaluation Five-fold cross validation was applied to avoid the overfitting problem; that is, for each fold, 80% of the data was used for training the simple ConvNet and VGG16 while the remaining 20% was used to test the deep learning tools. The accuracy rate and the error rate are common measures to evaluate the performance of a classification tool. Equation (8) gives the error rate as one minus the fraction of correctly classified test samples, i.e., error rate = 1 − (1/N) Σ correct_i, where correct_i is 1 when the predicted class is the same as the actual class and 0 otherwise, and N is the total number of testing samples in each dataset.

Experimental Results In this research, three experiments were conducted. Each experiment used five-fold cross validation and was run 20 times to obtain the mean error rate. The first experiment investigated the impact of the image transformation methods GASF, GADF, and MTF under the proposed RGB image concatenation using the simple ConvNet. The second experiment evaluated the impact of the sequence of concatenating images. The third experiment explored whether a more complex ConvNet architecture can produce better classification results.

Experiment #1: Comparison of Image Transformation Method Figure 5 shows the boxplot of the average error rates for classifying the Wafer dataset with RGB images as ConvNet inputs. As mentioned earlier, three image encoding methods were used: GADF, GASF, and MTF. As can be seen, the mean error rates, indicated in blue at the center of the plot, are between 0.4% and 0.57% for the Wafer dataset. Similarly, the average error rates for the ECG dataset are between 5.72% and 6.15%. Further statistical analysis, through Dunn tests, was conducted to determine whether the different image transformation methods affect the error rates. Based on the results presented in Table 2, the error rates are not significantly different in any pairwise comparison of the three methods on the ECG dataset at the 95% confidence level. Although the mean error rates of GASF and MTF, which are the largest and lowest in the Wafer dataset, respectively, are significantly different, the pairwise comparisons between GASF and GADF, and between GADF and MTF, are not significant. In short, the selection of the image transformation method does not seem to affect the classification result in terms of error rates.
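To make the evaluation protocol used in these experiments concrete, the following sketch shows a five-fold cross-validation loop that computes the error rate defined above with scikit-learn's KFold; the train_and_predict function is a placeholder standing in for training either ConvNet and is not part of the original code.

```python
import numpy as np
from sklearn.model_selection import KFold

def error_rate(y_true, y_pred):
    """Error rate of Equation (8): fraction of misclassified test samples."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 1.0 - np.mean(y_true == y_pred)

def cross_validated_error(images, labels, train_and_predict, n_splits=5, seed=0):
    """Average error rate over a five-fold split (80% train / 20% test per fold)."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    errors = []
    for train_idx, test_idx in kf.split(images):
        y_pred = train_and_predict(images[train_idx], labels[train_idx], images[test_idx])
        errors.append(error_rate(labels[test_idx], y_pred))
    return float(np.mean(errors))
```

Here train_and_predict(train_X, train_y, test_X) is assumed to fit a fresh model and return predicted labels for the test fold; repeating the whole procedure 20 times with different seeds and averaging reproduces the reporting scheme described above.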
Experiment #2: Comparison of the Sequence of Image Concatenation In this experiment, only the Wafer dataset was used, because ECG has only two time series, which cannot represent the complexity of different image concatenations. In the Wafer dataset, each batch contains data collected from six sensors; hence, the transformed images from the sensors can be arranged in various sequences, and the concatenation can be arranged with different random orders. Different sequences generate different concatenated images. Without loss of generality, the concatenation of RGB images was conducted with the MTF transformation, which showed the better result on the Wafer dataset, to clearly show the "spurious edges". By following the same framework as in Experiment #1, Figure 6 shows the box plot of 20 classification results under three different sequences, which are based on different random number seeds. No matter which sequence was applied, the mean classification errors are around 0.4-0.45%. The Wilcoxon Signed Rank Test was applied to check the pairwise comparisons among these three random sequences; the statistical test confirmed no significant difference in classification performance in any of the pairwise comparisons. This means the sequence of concatenating the images does not significantly influence the classification. This test also demonstrated that the ConvNet is able to learn image features regardless of the sequence of concatenation (or the patterns of edges).

Experiment #3: Comparison of Different Architectures of ConvNet In the third experiment, two architectures of ConvNet, the simple ConvNet and VGG16, represented the simple and complicated network structures, respectively. It is worth noting that VGG16 has a more complicated (deeper) network than the simple ConvNet. Figure 7 shows that in the Wafer dataset, the average error rates under the simple ConvNet and VGG16 fall between 0.4% and 0.57%. The average error rates range from 5.35% to 6.47% in the ECG dataset, as shown in Figure 8. It can be seen that, for each network structure, there is no significant difference among the different transformation methods.
Further statistical analysis through the Kruskal-Wallis analysis of variance (Kruskal-Wallis ANOVA) shows that the error rates of these two ConvNet architectures are not significantly different (p-value = 0.87 in the Wafer dataset and p-value > 0.999 in the ECG dataset). This simply means that a more complicated network structure does not necessarily guarantee better classification results. Table 3 shows the execution times of the simple ConvNet and VGG16 in processing the Wafer and ECG datasets. VGG16 took more than ten times longer than the simple ConvNet in processing time, but the prediction accuracy improvement was insignificant. In short, the results of the experiments reveal an interesting insight: encoding MTS data into colored concatenated images as inputs to a simple ConvNet can significantly improve the classification, whereas a more complicated network might not further improve it.

Comparison of Different Classification Tools In the literature, many methods have been proposed to classify the binary classes in the Wafer and ECG MTS data. Table 4 enumerates the error rates obtained by different methods [4,37-39]. Please note that the listed methods are all limited to one-dimensional data transformation except our proposed methods starting with "concat". As shown in this table, the proposed framework, which uses three encoding methods with RGB images fed to a ConvNet, produces better prediction accuracy in classifying the Wafer and ECG datasets, as indicated in bold face.
In fact, the proposed concat-MTF-RGB achieves the best result (error rate = 0.4%) on the Wafer dataset, while concat-GADF-RGB obtains the best result (error rate = 5.35%) on the ECG dataset, compared with previous works in the literature. Therefore, once again, we can conclude that concatenating the encoded RGB images from multivariate time series data as the inputs of a ConvNet, following the proposed framework, can significantly improve the classification accuracy, especially for binary classification problems.

Table 4 (error rates in %; method, Wafer, ECG):
1.92, 14.5
STKG-SVM-K3 [37], 1.23, 14.7
STKG-NB-K5 [37], 3.69, 13.01
STKG-IF-PSVM-DT+M [37], 0.84, 21.77
STKG-IF-NB-SVM+M [37], 2.23, 9.71
normDTW [38], 3.85, 16
combDTW [38], 2.01, 16
LSTM-FCN [39], 1, 15
MLSTM-FCN [39], 1, 14
ALSTM-FCN [39], 1, 14
MALSTM-FCN [39], 1, 14

Conclusions MTS classification tries to classify multiple univariate time series and predict a class based on the learned patterns. This study proposed a framework that concatenates 2D images transformed from time series data into RGB input channels for ConvNet training. In this work, following convention, three image encoding methods, GASF, GADF, and MTF, were used to encode MTS data into two-dimensional images after PAA dimension reduction. The MTS 2D images were then concatenated into a big image separated into RGB channels and fed into a ConvNet for binary classification. In order to investigate the impacts of (1) the transformation methods, (2) the sequence of concatenation, and (3) the complexity of the network structure on classification performance, a series of experiments was conducted. Three transformation methods, three different random sequences of concatenation (only for the Wafer dataset), and two kinds of ConvNet architectures (simple ConvNet vs. VGG16) were used to assess the effects of these adjustments on the prediction accuracy. Based on the experimental results, the proposed framework, applying the concatenated RGB images with a simple ConvNet architecture, can significantly improve the classification results. Interestingly, the selection of the encoding method does not significantly affect the prediction outcome, and the sequence of image concatenation is also not significant for classification accuracy. These findings remove the burden of choosing the image transformation method and the order of image concatenation. In addition, the experiments with the two ConvNets (the simple ConvNet and the more complicated VGG16) show that they produce results that are not significantly different when colored concatenated images are used as inputs. This "simple is enough" finding suggests that MTS classification practitioners should start with a simple network rather than a complicated one when applying deep learning methods to MTS classification problems. Again, the proposed framework with encoded images and a simple ConvNet architecture was compared with other methods published in the literature; the proposed framework produced the lowest error rates in both the Wafer and ECG datasets, where multivariate variables are the inputs used to classify a binary class (normal vs. abnormal). There are several future directions for further study of the model. First, in this work, only one ConvNet was used for training. Another framework, which utilizes parallel ConvNets for each time series and joins them in the last layer for prediction, could be constructed; it would be worth evaluating whether such a parallel network improves the accuracy.
Second, developing a transformation method that can preserve both the dynamic and static information in the temporal range at the same time, or that can filter out irrelevant noise in the time series, may help increase the distinctiveness of the features in the images. Third, it would be interesting to check whether using more monochrome channels than the three RGB channels can improve the classification further. Last but not least, as the current framework was applied only to binary classification datasets, multiclass classification can be explored to assess the performance of the proposed framework.
Impact of Ge4+ Ion as Structural Dopant of Ti4+ in Anatase: Crystallographic Translation, Photocatalytic Behavior, and Efficiency under UV and VIS Irradiation Nanometric particles of germanium-doped TiO2 were prepared by homogeneous hydrolysis of TiOSO4 and GeCl4 in an aqueous solution using urea as the precipitation agent. Structural evolution during heating of these starting Ge-Ti oxide powders was studied by X-ray diffraction (XRD) and high-temperature X-ray powder diffraction (HTXRD). The morphology and microstructure changes were monitored by means of scanning electron microscopy (SEM), Raman and infrared spectroscopy (IR), specific surface area (BET), and porosity determination (BJH). The photocatalytic activity of all samples was determined by decomposition of Orange II dye under irradiation at 365 nm and 400 nm. Moderate doping with Ge concentrations up to 2.05 wt.% positively influences azo dye degradation under UV and Vis light; further improvement cannot be achieved by higher Ge doping. The effect of annealing (200, 400, and 700 °C) on photocatalysis and other properties has also been assessed.

Introduction The TiO2 photocatalyst is the focus of numerous studies due to its attractive characteristics and its application in the remediation of environmental contaminants, that is, water disinfection [1]. Despite its potential application, the fast recombination of the photo-generated electron-hole pair on the surface or in the lattice of TiO2 limits its practical use. A couple of strategies have been proposed to enhance the effectiveness of the photocatalysis. Recent techniques include, for instance, controlled structures with exposed active facets [2,3]. However, doping of metal ions in TiO2 is the widespread, quite common technique to suppress the rate of that recombination. Doping of a metal ion in a semiconductor is known to affect both photophysical behavior and photochemical activity. Enhanced photocatalytic activity over binary metal oxides and transition metal, noble metal, or non-metal-doped TiO2 has been widely reported [4-8]. Transition metal ion doping may improve the trapping of electrons and inhibit electron-hole recombination [9]. The incorporation of germanium (Ge) into titania (TiO2) creates an attractive semiconductor. TiO2-Ge nanocomposite thin films have been synthesized by RF magnetron sputtering from targets fabricated from a mixture of TiO2 (P25 Degussa) and Ge powders [10]. Titania-germanium nanocomposite, which comprises Ge nanodots in a TiO2 matrix, is an interesting thermoelectric [11], optoelectronic [12], and photovoltaic material [13,14]. The preparation of titania (TiO2), germania (GeO2), and binary TiO2-GeO2 oxide gels with different Ti/Ge ratios, based on the sol-gel method with a surfactant-assisted mechanism, and their application in dye-sensitized solar cells, has been reported [15]. Adding a small amount of commercial GeO2 into an aqueous suspension significantly enhanced the photocatalytic activity of titania-based photocatalysts (P25) for the degradation of dyes [16]. In the present study, a more cost-effective and environmentally friendly method is used for the photocatalyst preparation: homogeneous precipitation from acid aqueous solution.

Experimental 2.1. Preparation of Samples. All chemical reagents used in the present experiments were obtained from commercial sources. TiOSO4, GeCl4, and urea were supplied by Sigma-Aldrich, Czech Republic.
Nanometric particles of doped titania were prepared by homogeneous hydrolysis of TiOSO4 and GeCl4 in aqueous solutions using urea as the precipitation agent. In a typical process, 100 g TiOSO4 was dissolved in 100 mL hot distilled water acidified with 98% H2SO4. The pellucid liquid was diluted with 4 L distilled water and a defined amount of GeCl4 was added using a microsyringe (0 to 1 mL, Hamilton). The resulting solution was mixed with 300 g of urea (see Table 1) and heated at 100 °C for 6 h under stirring until the pH reached 7.2; at this pH gaseous ammonia is released from the mixture. The formed precipitates were decanted, filtered, and dried at 105 °C. Eight new doped titania samples denoted as TiGeXX (where XX = 05, 07, 10, 15, 25, 30, 60, and 90 is the amount of GeCl4 in mL) were prepared (see Table 1). In order to see how the Ge4+ ion influences the structural alteration of anatase TiO2 during heating, the whole set of samples was annealed at 200, 400, and 700 °C in a muffle furnace at a heating rate of 10 °C min−1 for 2 hours.

Characterization Methods. Diffraction patterns were collected with a PANalytical X'Pert PRO diffractometer equipped with a conventional X-ray tube (Cu Kα radiation, 40 kV, 30 mA) and a linear position-sensitive PIXcel detector with an anti-scatter shield. A programmable divergence slit set to a fixed value of 0.5 deg, Soller slits of 0.02 rad, and a mask of 15 mm were used in the primary beam. A programmable anti-scatter slit set to a fixed value of 0.5 deg, a Soller slit of 0.02 rad, and a Ni beta-filter were used in the diffracted beam. Qualitative analysis was performed with the DiffracPlus Eva software package (Bruker AXS, Germany) using the JCPDS PDF-2 database [17]. For the quantitative analysis of the XRD patterns we used DiffracPlus Topas (Bruker AXS, Germany, version 4.1) with structural models based on the ICSD database [18]. This program permits estimation of the weight fractions of crystalline phases and the mean coherence length by means of the Rietveld refinement procedure. The internal standard addition method with rutile (10 wt.%) was used for amorphous phase determination [19]. The sample TiGe90 was studied by in situ high-temperature XRD in air with a PANalytical X'Pert PRO diffractometer using Co Kα radiation (40 kV, 30 mA) and a multichannel X'Celerator detector with an anti-scatter shield, equipped with a high-temperature chamber (HTK 16, Anton Paar, Graz, Austria). The measurements started at room temperature and finished at 1200 °C. The XRD measurements using rutile as an internal standard (10 wt.%) were performed to evaluate the content of amorphous phase in the doped titania samples. Scanning electron microscopy was performed with a Quanta 200 FEG (high-resolution field emission gun SEM microscope, FEI Czech Republic) equipped with an energy-dispersive X-ray spectrometer (EDS). Specimens for morphological investigations were prepared by droplet evaporation of sample dispersions on a carbon-supported SEM target. The specimens were imaged in the low-vacuum mode using an accelerating voltage of 30 kV. The specific surface areas of the samples were determined from nitrogen adsorption-desorption isotherms at liquid nitrogen temperature using a Coulter SA3100 instrument with outgassing for 15 min at 150 °C. The Brunauer-Emmett-Teller (BET) method was used for surface area calculation [20]. The pore size distribution (pore diameter, pore volume, and micropore surface area of the samples) was determined by the Barrett-Joyner-Halenda (BJH) method [21].
Diffuse reflectance UV/VIS spectra for evaluation of the photophysical properties were recorded in the diffuse reflectance mode (R) and transformed to absorption spectra using the Kubelka-Munk function [22]. A Perkin Elmer Lambda 35 spectrometer equipped with a Labsphere RSA-PE-20 integration sphere with BaSO4 as a standard was used. Raman spectra were obtained with a DXR Raman microscope (Thermo Scientific, USA). A 532 nm laser was used at a power set from 0.2 to 3 mW in order to obtain an optimal Raman signal. Powdered samples were scanned in a 15-point mapping mode under a 10x objective in an automated autofocus mode. DTA-TG-MS measurements were carried out using a simultaneous Netzsch STA 409 instrument coupled to a Balzers QMS-420 quadrupole mass spectrometer under dynamic conditions in air (flow rate 75 mL min−1). The samples were heated at a rate of 3 °C min−1. Photocatalytic activity of the samples was assessed from the kinetics of the photocatalytic degradation of 0.02 M Orange II dye (sodium salt of 4-[(2-hydroxy-1-naphtenyl)azo]benzene-sulfonic acid) in aqueous slurries. The amount of the dye in the experiment by far exceeds the sorption capacity of the titania. For azo dye degradation, the complete mass balance in nitrogen indicated that the central -N=N- azo group was converted into gaseous dinitrogen, which is ideal for the elimination of nitrogen-containing pollutants, not only for environmental photocatalysis but also for any physicochemical method [23]. Direct photolysis employing an artificial UV light or solar energy source cannot mineralize Orange II [24]. Kinetics of the photocatalytic degradation of the aqueous Orange II dye solution was measured using a self-constructed photoreactor [25]. The photoreactor consists of a stainless steel cover and a quartz tube with a fluorescent Narva lamp with a power of 13 W and a light intensity of ∼3.5 mW cm−2. A black light lamp (365 nm) was used for UV irradiation and a warm white lamp (over 400 nm) for visible light irradiation. The Orange II dye solution was circulated through a cell by means of a membrane pump. The concentration of Orange II dye was determined by measuring the absorbance at 480 nm with a ColorQuest XE VIS spectrophotometer. The 0.5 g titania sample was sonicated for 10 min in an ultrasonic bath (300 W, 35 kHz) before use. The pH of the resulting suspension was taken as the initial value for neutral conditions and was kept at 7.0 during the experiment.

X-Ray Diffraction Analysis. The XRD patterns of the doped titania samples are shown in Figure 1. Only the diffraction lines of anatase (ICDD PDF 21-1272) can be seen. The ionic radius of Ge4+ (0.054 nm) [26] is smaller than the ionic radius of Ti4+ (0.0605 nm) [27], and germanium can therefore easily substitute for Ti4+ in the TiO2 lattice. Increasing germanium content leads to a growing prevalence of the amorphous phase. This phenomenon can influence the photocatalytic activity of the as-prepared samples. The crystallite size, anatase and amorphous phase contents, and cell parameters a and c of anatase (calculated by Rietveld refinement) are presented in Table 1. The high-temperature XRD pattern of the sample TiGe90 (with the highest Ge4+ content) is presented in Supplement Figure S1
of the Supplementary Material available online at doi:10.1155/2012/252894. The measurement started at room temperature and was completed at 1200 °C, with 50 °C steps in the temperature range 25-550 °C and 25 °C steps in the second temperature interval 550-1200 °C. Diffraction lines of anatase (ICDD PDF 21-1272), rutile (ICDD PDF 21-1276), and GeO (ICDD PDF 30-0590) were observed during the heating. No diffraction lines of other phases were observed at temperatures up to 850 °C (except for the Pt sample holder diffraction lines), which means that Ge4+ is either noncrystalline (amorphous) or fully incorporated into the anatase structure. The temperatures above 850 °C lead to the formation of crystalline GeO. The anatase-to-rutile transition begins at a higher temperature than in non-doped anatase; it starts at 900 °C and continues up to 1000 °C, which is about 100 °C higher than for non-doped titania [28] and about 500 °C higher than for Ru-doped titania [29]. As already mentioned, the thermodynamic stability of the polymorphs depends on the crystallite size [30]. The temperature of the phase transition can also be understood as the temperature at which a critical crystallite size is achieved. The dependence of the crystallite size on the temperature (see Supplement Figure S2a) shows that the phase transition of anatase begins when the crystallite size is ∼68 nm. Compared with the results of [28], the temperature shift by ∼100 °C for the anatase-to-rutile phase transition is associated with faster growth of the anatase crystallite size (see Supplement Figure S2b). The unit-cell parameters of the sample TiGe90 obtained in the in-situ heating experiments increase almost linearly, which reflects the thermal expansion of the structures (see Supplement Figure S2c-d). An exception is the unit-cell parameter a of anatase, which decreases with increasing temperature up to 600 °C. This reduction is probably caused by the evolution of physically and chemically bound water or other volatile structural impurities.

Stoichiometry. The doped titania samples were studied up to 800 °C by simultaneous thermogravimetric and differential thermal analysis (TG/DTA). For the results of the analysis of the representative sample TiGe07, see Supplement Figure S5. Below 200 °C, the weight loss is considered to be due to the escape of surface-bound water and/or carbon dioxide. The subsequent weight loss, with the maximum rate at a temperature of 300 °C, is due to the escape of CO2, which was possibly enclosed in the titania structure. Hydrolysis of urea leads to its degradation and the in situ evolution of a large excess of CO2 and NH3. Ammonia reacts with water to form NH4OH, and CO2 effervesces from the reaction solution. The presence of Ge4+ ions leads to preferred sorption of CO2 on the titania surface, in contrast to Ru3+ doping [31]. Because the DTA-TG results indicated some thermal changes of the titania specimens, the prepared samples were heated at temperatures of 200, 400, and 700 °C to check a possible influence of the volatile admixtures on photocatalysis. The calcination was carried out in a laboratory muffle furnace with a heating rate of 10 °C min−1; the desired temperature was kept for 2 hours. The BET surface area and total pore volume of the calcines are listed in Supplement Table 2 and their photocatalytic activity in Table 1. At 200 °C, the release of surface-bound water increases the specific surface area, but further annealing leads to its decrease as a consequence of growing particle size. In a dynamic experiment at a temperature around 300 °C, CO2 is released from the titania structure (see Supplement Figure S5), which is accompanied by a modification of the mesoporous texture; the pore size grows from 3 to ∼5 nm (see Supplement Figure S6) for the sample denoted TiGe07.
In order to see how Ge4+ incorporation influences the structural changes of the anatase lattice and the anatase-to-rutile phase transition temperature, all doped samples (with Ge content varying between 0.84 and 12 wt.%) were heated at three different temperatures, 200, 400, and 700 °C, and three new sets of thermally treated samples were obtained (see Table 1). The calcination was carried out in a laboratory muffle furnace with a heating rate of 10 °C min−1, and the desired temperature was kept for 2 h.

Surface Areas and Porosity. The specific surface area of the as-prepared samples, calculated by the multipoint Brunauer-Emmett-Teller (BET) method, the total pore volume, the micropore surface area, and the micropore volume are listed in Table 1. The Barrett-Joyner-Halenda (BJH) pore-size distribution plot and the nitrogen adsorption/desorption isotherms (inset) are shown in Figure 2 and are characteristic of all prepared samples. According to IUPAC notation [32], microporous materials have pore diameters < 2 nm and macroporous materials have pore diameters > 50 nm; the mesoporous category is in the middle. The mean pore size in the prepared photocatalysts is around ∼3 nm and the pore size distribution is relatively narrow. All samples have a type IV isotherm, which is characteristic of mesoporous materials, with a hysteresis typical for large-pore mesoporous materials that can be ascribed to capillary condensation in mesopores. The high steepness of the hysteresis indicates a high order of mesoporosity. All samples have a type A hysteresis loop according to de Boer's characterization [33]. This hysteresis type is connected with pores in the form of capillary tubes open at both ends, wide ink-bottle pores, and wedge-shaped capillaries. The thermal treatment of the samples leads to changes of the BET surface area, pore volume, and pore-size distribution. Although the samples calcined at various temperatures showed the same type of isotherm, the quantity of adsorbed nitrogen was found to differ between temperatures. The nitrogen adsorption ability of the samples heated to 200 °C showed an increasing trend in comparison with the non-thermally-treated samples (see Supplement Table 2). In addition, the adsorption and desorption were found to cause a growth of the large-pore volume at 200 °C (except for the sample TiGe15) in relation to the non-calcined series (see Table 1). The best microstructural properties at 200 °C were achieved with the sample TiGe25 (2.86 wt.% Ge): this is attributed to the extremely high surface area (387.3 m2 g−1) and pore volume (0.4298 cm3 g−1) of that specimen. Further heat treatment (400 °C) caused a moderate reduction of both surface area and pore volume. Calcination performed at 700 °C for 2 hours leads to an inevitable decrease of surface area and pore volume: for instance, the BET surface area of the TiGe05 sample (0.84 wt.% Ge) heated at that temperature is only 13.2 m2 g−1 and the BJH pore volume is 0.0677 cm3 g−1. The surface area and pore volume data obtained for the samples calcined at various temperatures reveal that all samples can be used for photocatalysis except those treated at the highest tested temperature (700 °C). It is worth mentioning that the pore size showed an increasing trend with the rise of the calcination temperature. The TiGe05 and TiGe07 samples heated at 200, 400, and 700 °C showed a mesopore range of 5 nm and match well the characteristics of meso-structured materials. The mesoporous character of the heated TiGe05-90 samples can be assigned to the synthetic parameters such as precipitation time, pH, temperature, and the annealing
temperature. During the thermal treatment, chemically and physically bound water and CO2 are released and pores of larger radii are formed. The SEM images of the germanium-doped titania samples are presented in Figure 3. The titania powder consists of 2-3 μm spherical particles with a narrow particle size distribution. With increasing Ge content, the size and shape of these spherical clusters remain unchanged, confirming that the structural alterations take place inside the spheres (see Table 1).

Raman and IR Spectroscopy. The Raman spectra of the samples are presented in Figure 4. The specific vibration modes are located around 151 cm−1 (Eg), 399 cm−1 (B1g), 515 cm−1 (B1g + A1g), and 638 cm−1 (Eg), indicating the presence of the anatase phase in all of these samples. The measured peak positions vary between the samples; with increasing Ge content, the low-frequency Eg mode shifts from 150.1 to 151.1 cm−1 (see Supplement Table S1). Although this could theoretically be attributed to a change in the particle size, the shift of the Eg band position of anatase should rather be attributed to some specific effect of the Ge insertion into the TiO2 framework or to nonstoichiometry of the titania formed by the soft synthesis route. The maximum of the Eg Raman band of the annealed doped anatase specimens prepared by the urea route is shifted by more than 6 cm−1 with respect to pure anatase (144 cm−1) [34,35]. This shift further increases continuously, by about 2 cm−1, with growing Ge content up to the 6 mL specimen. The FWHM (the full width at half maximum) of pure and Ge-doped anatase is as large as 23-28 cm−1, with no obvious Ge concentration trend up to 6 mL GeCl4 addition. Because the XRD-estimated mean coherence length of the anatase is >10 nm, the shift of the non-doped TiO2 and the large FWHM cannot be explained by phonon confinement due to limited particle size, for which the particle size must have been <6 nm [34,36]. The departure of the Raman band characteristics with respect to pure anatase must hence be attributed to nonstoichiometry, analogously as reported in the case of CeO2 [37] and anatase [35], or to other phenomena related to a non-ideal crystal lattice of anatase [38]. All prepared specimens can hence be considered defective anatase, with a minor but clear influence of Ge doping on the Eg band maximum. Figure 5 shows the IR spectrum of the doped titania prepared by homogeneous hydrolysis with urea. The broad absorption peak at about 3400 cm−1 and the band at 1623 cm−1 correspond to surface water and hydroxyl groups [39]. The small band at ∼1395 cm−1 can be assigned to carbonates adsorbed on the surface of TiO2; the CO2 could have originated from the urea decomposition [40]. Low-frequency bands in the range <500 cm−1 correspond to the Ti-O-Ti vibrations of the network [41]. No other absorption bands were identified; therefore, the Ge atoms substitute Ti atoms in all samples of the TiGe series.

Diffuse Reflectance UV/Vis Spectroscopy. Supplement Figure S3 presents the UV/Vis absorption spectra of the doped TiO2, non-doped TiO2, and GeO2. The anatase has a wide absorption band in the range from 200 to 385 nm and the GeO2 has a UV absorption band close to 325 nm. The diffuse reflectance spectra were transformed by performing the Kubelka-Munk transformation of the measured reflectance according to the following equation: A = (1 − R)2/(2R), where R is the reflectance of an "infinitely thick" layer of the solid [42].
In comparison with the pure titania, a slight red shift of the absorption edge of the doped samples is observed up to ∼2.6 wt.% of Ge. Conversely, with a further increase of the Ge content the absorption edges are blue shifted; this could perhaps be caused by the presence of amorphous GeO2 (see below). The method of UV/Vis diffuse reflectance spectroscopy was employed to estimate the band-gap energies of the prepared germanium-doped TiO2 samples. Firstly, to establish the type of band-to-band transition in these synthesized particles, the absorption data were fitted to equations for direct band-gap transitions. The minimum wavelength required to promote an electron depends upon the band-gap energy E_bg, which is commonly estimated from UV/Vis absorption spectra by the linear extrapolation of the absorption coefficient to zero using the relation Ahν = B(hν − E_bg)^n, where A is the absorption according to (1), B is the absorption coefficient, and hν is the photon energy in eV calculated from the wavelength λ in nm as hν = 1240/λ [43,44]. In the case that the fundamental absorption of the titania crystal possesses indirect transitions between bands, then n = 2; for direct transitions between bands, n = 1/2 [45,46]. The energy of the band gap is obtained by extrapolating a straight line to the abscissa axis; when the absorption is zero, E_bg = hν [47]. Supplement Figure S4 shows (Ahν)2 versus photon energy for a direct band-gap transition. The value of 3.20 eV obtained for the sample denoted TiGe0 is the value reported in the literature for pure anatase nanoparticles [46,48]. The band-gap energy decreases only for the samples denoted TiGe10 and TiGe15 (3.05 eV); in the other samples E_bg is in the range 3.1-3.2 eV. The band gap of bulk GeO2 is 3.7 eV, which corresponds to absorption above approximately 350 nm. The decrease in the band gap of TiGe25, TiGe30, TiGe60, and TiGe90 is probably due to an increased amount of the amorphous GeO2 phase.
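As a rough numerical illustration of the band-gap estimation described above, the sketch below converts a diffuse reflectance curve to Kubelka-Munk absorption, builds a Tauc-type plot for a direct transition, and extrapolates the linear region to the energy axis. The synthetic reflectance curve and the chosen fitting window are invented for illustration and do not reproduce the measured spectra.

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R) from diffuse reflectance."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def direct_band_gap(wavelength_nm, reflectance, fit_window=(3.2, 3.4)):
    """Estimate E_bg (eV) from a Tauc-type plot of (F(R)*hv)^2 versus hv."""
    hv = 1240.0 / np.asarray(wavelength_nm)          # photon energy in eV
    tauc = (kubelka_munk(np.asarray(reflectance)) * hv) ** 2
    mask = (hv >= fit_window[0]) & (hv <= fit_window[1])
    slope, intercept = np.polyfit(hv[mask], tauc[mask], 1)
    return -intercept / slope                         # x-axis intercept

# Purely synthetic reflectance curve with an absorption edge, for illustration only.
wl = np.linspace(300, 450, 300)
hv = 1240.0 / wl
r = 0.05 + 0.9 / (1.0 + np.exp((hv - 3.2) / 0.05))
print(round(direct_band_gap(wl, r), 2))
```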
Photocatalytic Tests. According to the degradation pathway proposed in [49], the main byproducts formed by the ozonation of azo dye are organic acids, aldehydes, ketones, and carbon dioxide. Demirev and Nenov [50] suggested that the eventual degradation products of azo dye in the ozonation system would be acetic, formic, and oxalic acids. The reaction pathway for the visible light-driven photocatalytic degradation of Orange II dye in aqueous TiO2 suspensions is schematically shown in [51]. The kinetics of heterogeneous photocatalysis for the decomposition of model compounds such as the Orange II dye can be described by the Langmuir-Hinshelwood equation [52,53]: r = kK[OII]/(1 + K[OII]), where r is the rate of dye mineralization, k is the rate constant, t is the illumination time, K is the adsorption coefficient of the dye, and [OII] is the dye concentration. At very low concentration of the dye, within the validity of the Lambert-Beer law A = εlc [54], where A is the absorbance, c the dye concentration, l the length of the absorbing layer, and ε the molar absorption coefficient, it is possible to simplify (4) to the first-order kinetic equation r = k[OII], which after integration gives ln([OII]0/[OII]) = kt. The calculated degradation rate constants k (min−1) for a reaction following first-order kinetics of Orange II dye degradation at 365 nm (black light) and 400 nm (warm white light) are shown in Table 2, and the course of degradation is shown in Figure 6. Moderate Ge doping up to a concentration of 2.05 wt.% positively influences the azo dye degradation, but higher doping is not beneficial. The most active sample, TiGe15 (containing 2.05 wt.% Ge), is non-thermally treated and has a high contribution (74.1%) of very well-crystallized anatase nanocrystals with a mean coherence length of 14-15 nm in a mixture with an amorphous phase (25.9%). The Ge content has a positive effect on the porosity of the titania. Higher-level Ge doping reduces the photocatalytic activity, probably due to decreasing particle size and total pore volume. The lower photocatalytic activity of TiGe30, TiGe60, and TiGe90 can be attributed to the content of the amorphous germanium oxide phase and/or to the blue shift of the UV/VIS absorption edge (increase of E_bg). The contribution of the Ge-amorphous phase can be expected to have a significant influence, because the photocatalytic activity is lower compared to non-doped TiO2 [55]. For a comparison of Ge-doped titania with other materials, see Supplement Table S3. All samples as received from the wet synthesis, as well as those annealed at the three calcination temperatures, were subjected to the photocatalytic activity assessment. Thermal treatment affects the surface area, pore size distribution, and porosity of the samples, as mentioned above. Calcination at 200 and 400 °C does not significantly affect the photocatalytic activity under 365 nm irradiation, while the activity under >400 nm irradiation decreased in most cases. Interestingly, the specific surface area and porosity of the catalysts increased after the 200 and 400 °C calcination. Obviously (and somewhat surprisingly), these parameters are not the factors improving the titania photoactivity. Evidently, the quality of the catalyst surface is deteriorated for the photodegradation under >400 nm irradiation. Calcination at such low temperatures cannot cause recrystallization or particle growth, so we assume that dehydration/dehydroxylation of the titania surface could be responsible for that worsening of the surface quality. Indeed, TG/EGA analysis confirmed a loss of H2O below 150 °C from the catalysts. This explanation is in agreement with the finding that the photoactivity of titania obtained by a sol-gel route can be enhanced by an increase of the concentration of the surface hydroxyls [56]. Indeed, an oxygen overstoichiometry of titania specimens obtained by crystallization from aqueous solutions, which can only be rationalized by the presence of a pair of OH− groups instead of one O2−, has recently been confirmed by XPS analysis [57,58].
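To make the first-order evaluation described above concrete, the sketch below estimates the apparent rate constant k from absorbance readings at 480 nm taken during irradiation; the time points and absorbance values are invented for illustration and do not come from this study.

```python
import numpy as np

# Hypothetical absorbance of Orange II at 480 nm versus illumination time (min).
t = np.array([0, 15, 30, 60, 90, 120], dtype=float)
A = np.array([1.00, 0.82, 0.66, 0.44, 0.29, 0.20])

# Within the Lambert-Beer range, A is proportional to [OII], so
# ln(A0 / A) = k * t for first-order kinetics; fit k as the slope.
y = np.log(A[0] / A)
k, _ = np.polyfit(t, y, 1)
print(f"apparent rate constant k = {k:.4f} min^-1")
```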
Conclusion
Small and moderate Ge doping (0.84-2.65 wt.%) of titania positively affected the photocatalytic degradation of Orange II under both UV and Vis irradiation. The highest photocatalytic activity was achieved with the TiGe15 sample (2.05 wt.% Ge), which has a high contribution (74.1%) of very well-crystallized anatase nanocrystals and a BET surface area of 237.8 m^2 g^-1. The pore size distribution measurement showed that this total surface area can be attributed mainly to mesopores with a mean pore radius of 5 nm.

The effect of sample annealing at 200, 400, and 700 °C was evaluated. Annealing at 200 °C caused removal of surface-bound water and carbon dioxide, which resulted in a moderate increase of the surface area and the pore volume. However, the photoactivity of the samples slightly decreased, probably as a consequence of the surface dehydroxylation of titania. Thermal treatment at higher temperatures (400 and 700 °C) led to a linear decrease of the surface area and the pore volume and lowered the photoactivity. The best photocatalytic activity in the entire set of the thermally treated series was achieved with samples TiGe07/200, TiGe07/400, and TiGe07/700 (1.23 wt.% Ge).

(Truncated figure caption: high-temperature XRD patterns were recorded in 50 °C steps over the temperature range 25-550 °C and in 25 °C steps over the interval 550-1200 °C. Diffraction lines of anatase (ICDD PDF 21-1272), rutile (ICDD PDF 21-1276), and GeO (ICDD PDF 30-0590) were observed during the heating. No diffraction lines of other phases were observed at temperatures up to 850 °C (except for the Pt sample-holder diffraction lines), which means that Ge is either noncrystalline (amorphous) or fully incorporated into the anatase structure.)

Table 1: The EDX analysis, surface area, porosity, crystallite size, anatase and amorphous phase contents, and cell parameters a and c.

Table 2: Photocatalytic activity of the prepared doped titania samples.
2018-12-10T23:45:11.604Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "0a199128e2ff29f3694dc296346f0783a3c31f82", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jnm/2012/252894.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0a199128e2ff29f3694dc296346f0783a3c31f82", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
46930183
pes2o/s2orc
v3-fos-license
Deep Graphs We propose an algorithm for deep learning on networks and graphs. It relies on the notion that many graph algorithms, such as PageRank, Weisfeiler-Lehman, or Message Passing can be expressed as iterative vertex updates. Unlike previous methods which rely on the ingenuity of the designer, Deep Graphs are adaptive to the estimation problem. Training and deployment are both efficient, since the cost is $O(|E| + |V|)$, where $E$ and $V$ are the sets of edges and vertices respectively. In short, we learn the recurrent update functions rather than positing their specific functional form. This yields an algorithm that achieves excellent accuracy on both graph labeling and regression tasks. INTRODUCTION Tasks which involve graph structures are abundant in machine learning and data mining. As an example, social networks can be represented as graphs with users being vertices and friendship relationships being edges. In this case, we might want to classify users or make predictions about missing information from their profiles. Many such tasks involve learning a representation for the vertices and the edges of the corresponding graph structures. This often turns out to be difficult, requiring a lot of task-specific feature engineering. In this paper, we propose a generic algorithm for learning such representations jointly with the tasks. This algorithm requires no task-specific engineering and can be used for a wide range of problems. The key idea is to realize that many graph algorithms are defined in terms of vertex updates. That is, vertex features are updated iteratively, based on their own features and those of their neighbors. For instance, the PageRank algorithm [Page et al., 1998] updates vertex attributes based on the popularity of its parents. In fact, there even exist numerous graph processing frameworks based on the very idea that many graph algorithms are vertex centric [Low et al., 2010, Malewicz et al., 2010. Unlike prior work, we do not posit the form of the vertex update function but instead, we learn it from data. In particular, we define a recurrent neural network (RNN) over the graph structure where the features of each vertex (or edge) are computed as a function of the neighboring vertices' and edges' features. We call the resulting model a Deep Graph (DG). The vertex features can be used to perform multiple tasks simultaneously, e.g. in a multi-task learning setting, and the proposed algorithm is able to learn the functions that generate these features and the functions that perform these tasks, jointly. Furthermore, it is able to learn representations for graph vertices and edges that can, subsequently, be used by other algorithms to perform new tasks, such as in transfer learning. Finally, apart from the graph structure, this algorithm is also able to use attributes of the vertices and edges of the graph, rendering it even more powerful in cases where such attributes are available. Our work is arguably the first to take the notion of deep networks literally and to apply it to graphs and networks. To some extent our work can be interpreted as that of learning the general dynamics of brain "neurons" in the form of a mapping that takes the vertex state and the neighbors' states and transforms their combination into a new vertex state. By using sufficiently powerful latent state dynamics we are able to capture nontrivial dependencies and address the vanishing gradient problem that usually plagues recurrent networks. 
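To make the notion of iterative vertex updates concrete, consider the textbook PageRank iteration (discussed in the Background section below) written as a per-vertex update. The short Python sketch that follows is a generic illustration rather than code from this work; the damping factor, iteration count, and adjacency representation are choices made only for the example.

```python
from collections import defaultdict

def pagerank(edges, num_vertices, damping=0.85, num_iters=50):
    """Textbook PageRank written as repeated per-vertex updates.

    edges: iterable of (src, dst) pairs; vertices are 0..num_vertices-1.
    """
    out_degree = defaultdict(int)
    in_neighbors = defaultdict(list)
    for src, dst in edges:
        out_degree[src] += 1
        in_neighbors[dst].append(src)

    pi = [1.0 / num_vertices] * num_vertices
    for _ in range(num_iters):
        new_pi = []
        for i in range(num_vertices):
            # Each vertex recomputes its score from its in-neighbors' scores.
            incoming = sum(pi[j] / out_degree[j] for j in in_neighbors[i])
            new_pi.append((1.0 - damping) / num_vertices + damping * incoming)
        pi = new_pi
    return pi

# Example: a tiny 4-vertex graph.
scores = pagerank([(0, 1), (1, 2), (2, 0), (3, 0)], num_vertices=4)
print(scores)
```

The inner loop touches only a vertex and its in-neighbors, which is exactly the kind of local update that Deep Graphs replace with a learned function.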
We performed several experiments showing the usefulness and power of DGs. Initially we attempted to learn PageRank and HITS (an algorithm also used for ranking vertices based on their popularity [Kleinberg, 1999]) in order to confirm that our approach can learn simple graph features. Our results indicate that DGs can learn to compute PageRank and HITS scores by only applying a vertex update equation only 6 times, as opposed to hundreds or thousands of iterations that the original algorithms require. Furthermore, we also performed vertex classification ex- Figure 1: Vertex updates on a graph. Circles represent vertices, rectangles with colored circles inside them represent vectors, arrows represent edges, gray colored vectors correspond to the feature vectors that our method learns to compute as a function of the neighboring vertex feature vectors, and red and blue colored vectors correspond to vertex attributes that are either observed, or computed as a learned function of the corresponding feature vector. periments on 5 diverse data sets, and DGs were shown to significantly outperform the current state-of-the-art. BACKGROUND Notation. We denote by G = (V, E) a graph with vertices V and edges E, i.e. for some vertices i, j ∈ V we have (i, j) ∈ E. Moreover, when available, we denote by ψ(i) and ψ(i, j) attributes associated with vertices and edges, respectively. We allow for directed graphs where (i, j) ∈ E does not imply (j, i) ∈ E, and we also allow for cases where ψ(i, j) = ψ(j, i). Furthermore, we denote the set of incoming neighbors of vertex i by N in (i). That is, the set of vertices such that for each vertex j in that set, there exists an edge (j, i) ∈ E. We denote the set of outgoing neighbors of vertex i by N out (i). That is, the set of vertices such that for each vertex j in that set, there exists an edge (i, j) ∈ E. Also, we denote the set of all neighbors of vertex i by N (i) N in (i) ∪ N out (i). It is our goal to compute local vertex (and occasionally edge) features φ(i) and φ(i, j) based on the graph structure G, that can be used to estimate vertex and edge attributes ψ(i) and ψ(i, j). In the next few paragraphs we review some existing algorithms that do exactly that. Subsequently we show how our proposed approach can be viewed as a more powerful generalization of those algorithms that allows for more flexibility. PageRank. The most famous algorithm for attaching features to a directed graph is arguably the PageRank algorithm of Page et al. [1998]. It aims to furnish vertices with a score commensurate with their level of relevance. That is, the PageRank is high for important pages and low for less relevant ones. The key idea is that relevant pages relate to other relevant pages. Hence, a random surfer traversing the web would be more likely to visit relevant pages. A simple definition of the algorithm is to iterate the following updates for i ∈ {1, . . . , |V |} until convergence: Here π i is the PageRank score of vertex i, and λ ∈ [0, 1] is a damping factor. Note that in this case φ(i) = π i . We know that this iteration is contractive and thus it will converge quickly. To summarize, PageRank consists of repeatedly using vertex features φ(i) to recompute new vertex features based on a rather simple, yet ingenious iteration. HITS: The HITS algorithm of Kleinberg [1999] follows a very similar template. The key difference is that it computes two scores: authority and hub. Authority scores are computed using the hub scores of all incoming neighbors of vertex i. 
Likewise, hub scores are computed using the authority scores of all outgoing neighbors of vertex i. This amounts to the following two iterations: where π i is the hub score of vertex i and ρ i is its authority score. It is easy to see that this iteration diverges and in order to avoid that, we normalize all authority and hub scores by the sum of squares of all authority and hub scores respectively, after each iteration. Note that in this case φ(i) = [π i , ρ i ]. Thus, HITS also consists of repeatedly using vertex features φ(i) to recompute new vertex features based on a simple iteration. Weisfeiler-Lehman: Another algorithm commonly used on unattributed graphs is the famous algorithm of Weisfeiler and Lehman [1968] which can generate sufficient conditions for graph isomorphism tests. Unlike PageRank and HITS, it is a discrete mapping that generates vertex features which can be used to uniquely identify vertices for suitable graphs. Its motivation is that if such an identification can be found, graph isomorphism becomes trivial since we now only need to check whether the sets of vertex features are identical. For simplicity of exposition, we assume, without loss of generality, that there exist collisionfree hash functions 1 h : 2 V → N. That is, we ignore the case of h(i) = h(j) for i = j. Then, the algorithm consists of initializing φ(i) = 1, for all i ∈ V , and subsequently performing iterations of the following equation, for i ∈ {1, . . . , |V |}, until convergence: In other words, the algorithm computes a new vertex hash based on the current hash and the hash of all of its neighbors. The algorithm terminates when the number of unique vertex features no longer increases. Note that whenever all vertex features φ(i) are unique, this immediately allows us to solve the graph isomorphism problem, since the iteration does not explicitly exploit the vertex index i. Also note that (3) can be extended trivially to graphs with vertex and edge attributes. This is simply accomplished by initializing φ(i) = ψ(i) for all i ∈ V and setting 2 : In other words, we use both vertex and edge attributes in generating unique fingerprints (i.e., features) for the vertices. The Weisfeiler-Lehman iteration is of interest in the current context, because it can be used to generate useful features for graph vertices, in general. In their prizewinning paper, Shervashidze and Borgwardt [2010] use iterations of this algorithm to obtain a set of vertex features, ranging from the generic (at initialization time) to the highly specific (at convergence time). They allow one to compare vertices and perform estimation efficiently, since vertex hashes at iteration k will be identical, whenever the k-step neighborhood of two vertices is identical. Their algorithm is very fast and yields high quality estimates on unattributed graphs. However, it has resisted attempts (including ours) to generalize it meaningfully to attributed graphs and to situations where vertex neighborhoods might be locally similar rather than identical. Message Passing: Inference in graphical models relies on message passing [Koller and Friedman, 2009]. This is only exact when the messages are exchanged between maximal cliques in a junction tree, but it is nonetheless often used for approximate inference in general graphs. In these general cases, the algorithm is commonly referred to as loopy belief propagation. 
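The HITS and Weisfeiler-Lehman updates described above can be sketched in the same per-vertex style before the discussion continues. The snippet below is a generic illustration under the notation of this section, not the authors' code; in particular, the hash function is simply Python's built-in hash applied to a sorted tuple of neighbor labels, and the HITS scores are normalized by the L2 norm.

```python
def hits_step(hub, in_neighbors, out_neighbors):
    """One HITS iteration: authority scores from in-neighbors' hub scores,
    hub scores from out-neighbors' authority scores, then L2 normalization."""
    auth = {i: sum(hub[j] for j in in_neighbors[i]) for i in hub}
    new_hub = {i: sum(auth[j] for j in out_neighbors[i]) for i in hub}
    a_norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
    h_norm = sum(v * v for v in new_hub.values()) ** 0.5 or 1.0
    return ({i: v / h_norm for i, v in new_hub.items()},
            {i: v / a_norm for i, v in auth.items()})

def weisfeiler_lehman_step(phi, neighbors):
    """One Weisfeiler-Lehman refinement: re-hash every vertex label from its
    own label together with the sorted multiset of its neighbors' labels."""
    return {i: hash((phi[i], tuple(sorted(phi[j] for j in neighbors[i]))))
            for i in phi}
```

Iterating hits_step to approximate convergence, or iterating weisfeiler_lehman_step until the number of distinct labels stops growing, reproduces the behaviour described above.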
Like all of the algorithms mentioned in the previous paragraphs, message passing also consists of iteratively updating some features of each vertex, φ(i), by incorporating information coming from the neighbors of vertex i, commonly referred to as messages. The outgoing messages of a vertex are obtained by using the local clique potentials in combination with all of its incoming messages, with the exception of the one for which the outgoing message is to be computed. This algorithm can be used to provide features for vertices and there has already been some initial work in that direction (e.g., by Li et al. [2015]). (Footnote 2: For brevity, we have slightly abused the notation for edge directionality here, but the concept described should remain clear.)

All of the algorithms presented in this section can be viewed as a special case of the following iteration:

$$\phi(i) \leftarrow f\big(\phi(i), \{\phi(j) : j \in N(i)\}\big) \qquad (5)$$

for some function f. We use the vertex features of neighbors to compute new vertex features, based on a smartly chosen update function f. The locality of this update makes it highly attractive for distributed computation. For PageRank and HITS that functional form is explicitly provided by equations 1 and 2, respectively, and for the Weisfeiler-Lehman iteration we can simply replace f by h. Furthermore, we can see how message passing can also fit in this framework by noting that all outgoing messages for a vertex can be computed given all its incoming messages, and these messages could also be passed on appropriately by, for example, tagging each outgoing message with the destination vertex identifier.

PROPOSED APPROACH

We are now in a position to introduce the key contribution of this work: the Deep Graph (DG). It consists of the insight that, rather than specifying the update equation (shown in equation 5) manually, we are at liberty to learn it based on the tasks at hand. In other words, the update equation now becomes part of the learning system in its own right, rather than being used as a preprocessor based on heuristics and intuition. In its most general form, it looks as follows:

$$\phi(i) \leftarrow f\big(\phi(i), \psi(i), \{(\phi(j), \psi(i, j)) : j \in N(i)\}; \theta\big) \qquad (6)$$

That is, we use inherent vertex and edge attributes in the iteration. Moreover, the system is parametrized by θ in such a manner as to make the iterations learnable. Note that this approach could also handle learning features over edges in the graph by trivially extending this update equation. However, we are not going to cover such extensions in this work. Now that we have defined the form of the update equation, we need to specify a number of things:
• The family of tasks amenable to this iteration.
• The parametric form of the update equation.
• Efficient methods for learning the parameters θ.
Note that because of the recursive nature of equation 6, and given a parametric form for the update function f, the computation of the features for all graph vertices consists of an operation that resembles the forward pass of a recurrent neural network (RNN). Furthermore, as we will see in section 3.3, learning the parameters θ resembles the training phase of an RNN. Hence the name of our approach. An illustration of Deep Graphs is shown in figure 2. In what follows we omit the vertex and edge attributes ψ, without loss of generality, in order to simplify notation.

TASKS AMENABLE TO OUR ITERATION

Machine learning usually relies on a set of features for efficient estimation of certain scores. In particular, in the case of estimation on graphs, one strategy is to use vertex features φ(i) to compute scores g_i = g(φ(i), w).
This is what was used, for example, by Shervashidze and Borgwardt [2010] in the context of the Weisfeiler-Lehman algorithm. The authors compute fingerprints of increasing neighborhoods of the vertices i and then use these fingerprints to attach scores to them. For the sake of concreteness, let us consider the following problem on a graph: we denote by X {x 1 , . . . , x M } ⊆ V the training set of vertices for which some labels Y = {y 1 , . . . , y M } are available (these could be certain vertex attributes, for example). Our goal is to learn a function g that can predict those labels, given the vertex features that f produces and some parameters w. We thus want to find a function g such that the expected risk: (7) for some loss function l, is minimized. In other words, we want to minimize the loss l on vertices not occurring in the training set. Note that there exist quite a few generalizations of this -for example, to cases where we have different graphs available for training and for testing, respectively, and we simply want to find a good function g. Obviously R {V \X, g} is not directly available, but a training set is. Hence, we minimize the empirical estimateR {X, g}, often tempered by a regularizer Ω controlling the complexity of g. That is, we attempt to minimize: In the present context, the function g is provided and it is a function of the vertex features φ(v) parameterized by w. The latter denotes our design choices in obtaining a nonlinear function of the features. In the simplest case this amounts to g(v; w) = φ(v) w. More generally, g could be a deep neural network (DNN). In that case, we will typically want to add some degree of regularization in the form of Ω{g} (e.g., via dropout [Srivastava et al., 2014] or via weight decay [Tikhonov, 1943]). It is clear from the above that a parametric form needs to be provided for g and l. Without loss of generality we provide here some example such function forms, for two different kinds of frequently occurring tasks in machine learning. Regression. One common task is regression, where the output of g is a real number. That is the case, for example, if we wanted to learn PageRank or HITS (more on this in section 4). In this case g could be modeled a multi-layer perceptron (MLP) [Rumelhart et al., 1986] without any activation function in the last (i.e., output) layer 3 , whose input is the feature vector of the corresponding vertex and whose output is the quantity we are predicting (e.g., a scalar for PageRank, and a vector of size two for HITS). Furthermore, l can be defined as the squared L 2 norm between the correct output and the MLP output produced by g. Classification. Another frequently occurring task is vertex classification, where the output of g is a vector of probabilities for belonging to each of a set of several possible classes. In this case we could also use a MLP with a softmax activation function in the output layer. Furthermore, l in this case can be defined as the cross-entropy between the correct class probabilities and the MLP output probabilities produced by g. UPDATE EQUATION FORM Without loss of generality we are going to consider the case where there is no distinction within the set of incoming (or outgoing) edges. We thus need to define a function that is oblivious to order. One may show that when enforcing permutation invariance [Gretton et al., 2012], it is sufficient Step k-1 Step k Figure 3: Illustration of the "unrolling" of the DGs RNN over the graph structure. 
This procedure is used for computing the derivatives of the loss function with respect to the model parameters. The "unrolling" of the small part of our network presented in figure 2, is shown here. to only consider functions of sums of feature maps of the constituents. For conciseness we drop edge features in the following (the extension to them is trivial). In our case this means that, without loss of generality we can define f as: where: For instance, we could define f as follows: where σ is the sigmoid function and W , W in , W out , and b constitute the parameters θ of f that need to be learned. Even more generally, f could be defined as a MLP or even as a gated unit, such as a Long-Short Term Memory (LSTM) [Hochreiter and Schmidhuber, 1997] or a Gated Recurrent Unit (GRU) [Cho et al., 2014], in order to deal with the vanishing gradients problem of taking many steps on the graph [Pascanu et al., 2013]. PARAMETER LEARNING The parameters of our model consist of θ, the parameters of f , and w, the parameters of g. We can learn those parameters by minimizing the empirical estimate of the risk defined in equation 8, as was already mentioned in section 3.1. For that, we are going to need to compute the derivatives of the empirical risk estimate with respect to both w and θ. In the next sections we describe how those derivatives are defined and we then discuss about the optimization method that we use to actually minimize the empirical risk estimate of our model. Derivative with respect to w For the derivative with respect to w we simply have the following: where we simply applied the chain rule for differentiation. If g is an MLP as proposed in section 3.1, the term g(v;w) ∂w can be computed by backpropagating the empirical risk gradient [Rumelhart et al., 1986]. Derivative with respect to θ For the derivative with respect to θ, the derivation is not as simple due to the recursive nature of f . A common approach to deal with this problem is to use backpropagation through structure [Goller and Kchler, 1996] (based on backpropagation through time [Werbos, 1990]). We will approximate f by considering its application K times, instead of applying it until convergence, where the value K can be set based on the task at hand. That is, the algorithm is going to take a maximum number of K steps in the graph, when computing the features of a particular vertex. This is often referred to as "unrolling" an RNN and an example illustration for DGs is shown in figure 3. At the k th step, the feature vector of vertex i is defined as follows: Furthermore, the empirical risk is evaluated at the K th step only (i.e., after having applied our feature update equation K times and having taken K steps in the graph, for each vertex). Under this approximate setting, the gradient with respect to θ can be defined as follows: where for any k ∈ {1, . . . , K} and any v ∈ V , we know, due to the concept of total differentiation: and for l < k, we have that: These quantities can be computed efficiently by backpropagating the ∂R{X,g} ∂φ k (i) for all i ∈ V , as k goes from K to 1. It is easy to see that the local cost of computing the derivatives is O(N (v) + 1), hence one stage of the computation of all gradients in the graph is O(|E| + |V |). Thus, the overall complexity of computing the derivatives is O(K|V | + K|E|). Optimization Equipped with the definitions of the empirical risk estimate derivatives with respect to the model parameters, we can now describe our actual optimization approach. 
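Before the optimization discussion continues, the K-step forward pass described in this section can be sketched in a few lines. This is a minimal numpy illustration of the sigmoid vertex update described above (equation 11), unrolled K times over dense feature and adjacency matrices; it is not the authors' implementation, the parameter shapes and graph representation are assumptions, and the gradient computation (backpropagation through structure) and BFGS training are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deep_graph_forward(phi0, in_adj, out_adj, W, W_in, W_out, b, K):
    """Unrolled Deep Graph forward pass with the sigmoid vertex update.

    phi0:    (|V|, d) initial vertex features
    in_adj:  (|V|, |V|) 0/1 matrix, in_adj[i, j] = 1 if (j, i) is an edge
    out_adj: (|V|, |V|) 0/1 matrix, out_adj[i, j] = 1 if (i, j) is an edge
    W, W_in, W_out: (d, d) weight matrices, b: (d,) bias -- the theta of f
    """
    phi = phi0
    for _ in range(K):
        phi_in = in_adj @ phi        # sum of in-neighbors' features
        phi_out = out_adj @ phi      # sum of out-neighbors' features
        phi = sigmoid(phi @ W.T + phi_in @ W_in.T + phi_out @ W_out.T + b)
    return phi

# Tiny usage example with random parameters on a 3-vertex chain 0 -> 1 -> 2.
rng = np.random.default_rng(0)
d, V = 4, 3
out_adj = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
in_adj = out_adj.T
params = [rng.normal(size=(d, d)) for _ in range(3)] + [np.zeros(d)]
features = deep_graph_forward(np.ones((V, d)), in_adj, out_adj, *params, K=6)
print(features.shape)   # (3, 4)
```

In such a sketch the per-step cost is dominated by the two sparse (here dense, for brevity) matrix-vector products, which matches the O(|E| + |V|) cost per step stated earlier.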
An important aspect of the derivative definitions of the last two sections is that, for computing the derivative with respect to the feature vector of a single vertex, the algorithm will very likely have to visit most of the vertices in the graph. Whether or not that happens depends on K and on the connectedness of the graph. However, research has shown that for many kinds of graphs, the algorithm will very likely have to visit the whole graph, even for values of K as small as 3 [Backstrom et al., 2012, Ugander et al., 2011, Kang et al., 2011, Palmer et al., 2002]. This implies that stochastic optimization approaches that are common in the deep learning community, such as stochastic gradient descent (SGD) and AdaGrad [Duchi et al., 2011], will likely not be useful for our case. Recent work, such as that by Martens [2010], has shown that curvature information can be useful for dealing with non-convex functions such as the objective functions involved when working with RNNs. For that reason, we decided to use the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton optimization algorithm, combined with an interpolation-based line search algorithm that returns a step size which satisfies the strong Wolfe conditions. Furthermore, we set the initial step size for the line search at each iteration by assuming that the first-order change in the objective function at the current iterate will be the same as the one obtained in the previous iteration [Nocedal and Wright, 2006].

EXPERIMENTS

In order to demonstrate the effectiveness of our approach we performed two experiments: a vertex ranking experiment and a vertex classification experiment. Both experiments involve performing a task using only the graph structure as observed information. In the next paragraphs, we provide details on the two evaluations, the data sets we used, and the experimental setup.

Vertex Ranking Experiment. For this experiment, the goal was to make DGs learn to compute PageRank and HITS scores (as described in section 2). This is a regression problem, and as a result, we set the form of g to an MLP with a single hidden layer. Thus, g looks as follows:

$$g(v; w) = W_{out}\,\sigma\big(W_{hidden}\,\phi(v) + b_{hidden}\big) + b_{out},$$

where w ≜ {W_hidden, b_hidden, W_out, b_out}. Furthermore, we selected the number of hidden units to be twice the size of the feature vector. This is a design choice that did not seem to significantly affect the performance of our model. In order to obtain true labels for the vertices, we ran PageRank and HITS over all graphs, each for 1,000 iterations. For PageRank, we set the damping factor to be equal to 0.85, as is often done in practice. The loss function l in this case was defined as the squared error between the true and the predicted scores:

$$l\big(g(v; w), y_v\big) = \big\|g(v; w) - y_v\big\|_2^2.$$

For simplicity in evaluation, we used the following quantities as scores for the graph vertices: for PageRank, we simply used the PageRank score, and for HITS, we used the sum of the hub and the authority scores (note that they are both positive quantities). We evaluated the resulting scores in the following way: we sorted the graph vertices by decreasing true score, and we computed the mean absolute error (MAE) of the predictions over varying numbers of the highest-ranked vertices (reported as "Rank" in Tables 2 and 3).

Vertex Classification Experiment. For this experiment the goal was to make DGs learn to perform binary classification of vertices in a graph. We set the form of g as follows:

$$g(v; w) = \mathrm{softmax}\big(W_{out}\,\phi(v) + b_{out}\big),$$

where w ≜ {W_out, b_out}. For the data sets we used, ground-truth vertex labels were provided and were used to evaluate our approach. The loss function l in this case was defined as the cross-entropy loss function:

$$l(p, y_v) = -\sum_{c} \mathbb{1}[y_v = c]\,\log p_c,$$

where p ≜ g(v; w).
We decided to use this loss function as it has proven more effective than the mean squared error (MSE), in practice, for classification problems utilizing deep learning [Golik et al., 2013]. The metric we used is the area under the precision recall curve (AUC). DATA SETS We used the following data sets in our experiments, in order to be able to compare our results with those of Shervashidze [2012], which, to the extent of our knowledge, is the current state-of-the-art in vertex classification (some statistics about those data sets are shown in table 1): • BLOGOSPHERE: Vertices represent blogs and edges represent the incoming and outgoing links between these blogs around the time of the 2004 presidential election in the United States. The vertex labels correspond to whether a blog is liberal or conservative. The data set was downloaded from http://www-personal.umich. edu/˜mejn/netdata/. • DREAM CHALLENGE: This is a set of three data sets that come from the Dream 5 network inference challenge. Vertices in the graph correspond to genes, and edges correspond to interactions between genes. The vertex labels correspond to whether or not a gene is a transcription factor in the transcriptional regulatory network. We obtained these data sets from [Shervashidze, 2012] through personal communication. • WEBSPAM: This is the WebSpam UK 2007 data set, obtained from http://barcelona.research. yahoo.net/webspam/datasets/. Vertices in the graph correspond to hosts, and edges correspond to links between those hosts. The vertex labels correspond to whether a host is spam or not. Note that not all of the vertices are labeled in this graph and so, in our experiments, we only train and evaluate on subsets of the labeled vertices. However, the whole graph structure is still being used by our network. EXPERIMENTAL SETUP For our experiments, we performed 10-fold crossvalidation to select model parameters, such as the vertex feature vector size and the maximum number of steps K that our algorithm takes in the graph (as described in section 3.3.2). For all experiments, we used feature vectors sizes of {1, 5, 10} and K values of {1, 2, 6, 10}. We used the best performing parameter setting based on crossvalidation on 90% of the labeled data and then we evaluated our methods on the remaining 10%. The results of this evaluation are provided in the following section. We tried two options, for the features update equation: 1. SIGMOID: This the update equation form shown in equation 11. We denote this method by DG-S in the presentation of our results. 2. GATED RECURRENT UNIT (GRU): In order to avoid the well-known problem of vanishing gradients in RNNs [Pascanu et al., 2013], we also tried to use the gated recurrent unit (GRU) of [Cho et al., 2014] as our update function form. GRU was initially proposed as a recursion over time, but in our case, this becomes a recursion over structure. Furthermore, what was originally an extra input provided at each time point, now becomes the vector formed by stacking together φ in (i) and φ out (i). We denote this method by DG-G in the presentation of our results. We initialized our optimization problem as follows: • All bias vectors (i.e., vectors labeled with b and some subscript in earlier sections) were initialized to zero valued vectors of the appropriate dimensions. • Following the work of Jzefowicz et al. 
[2015], all weight matrices (i.e., matrices labeled with W and some subscript in earlier sections) were initialized to random samples from a uniform distribution in the range − 1 √ nrows , 1 √ nrows , where n rows is the number of rows of the corresponding weight matrix. For the BFGS optimization algorithm described in section 3.3.3 we used a maximum number of 1, 000 iterations and Table 2: The results for the PageRank experiment are shown in this table. "Rank" corresponds to the number of the highest ranked vertices in the graph that are considered in the evaluation in each experiment (as described in the beginning of section 4). The "Min" and "Max" rows correspond to the minimum and the maximum PageRank score of those vertices that are considered in the evaluation in each experiment. Our methods' results are highlighted in blue. The numbers correspond to the mean absolute error (MAE) of the predictions (a lower score is better). Rank" corresponds to the number of the highest ranked vertices in the graph that are considered in the evaluation in each experiment (as described in the beginning of section 4). The "Min" and "Max" rows correspond to the minimum and the maximum HITS score (i.e., the sum of the authority and the hub scores) of those vertices that are considered in the evaluation in each experiment. Our methods' results are highlighted in blue. The numbers correspond to the mean absolute error (MAE) of the predictions (a lower score is better). Data Data Set an objective function value change tolerance and gradient norm tolerance of 1e − 6, as convergence criteria. We are now in a position to discuss the results of our experiments. RESULTS Vertex Ranking Experiment. The results for the PageRank and the HITS experiments are shown in tables 2 and 3, respectively. It is clear from these results that DGs are able to learn both PageRank and HITS with good accuracy. What is most interesting about these results is that DGs are able to learn to compute the scores by only taking at most 10 steps in the graph. In fact, for most of our results, the algorithm takes just 6 steps and produces scores close to PageRank and HITS scores that were computed after applying the corresponding iterations 1, 000 times. Therefore, even though our approach requires a training phase, it can compute PageRank and HITS scores much more efficiently than the actual PageRank or HITS algorithms can. Furthermore, the results are encouraging because they also indicate that DGs are flexible enough to learn arbitrary vertex ranking functions. Moreover, they can combine attributes of vertices in learning these ranking functions, which could prove very useful in practice. Also, as expected, DG-G seems to almost always outperform DG-S. This is probably due to the fact that it is better at dealing with the vanishing gradients problem that afflicts DG-S. It would be interesting to apply our models to larger graphs and increase the value of K in our experiments, in order to confirm that this actually the case. We leave that for future work. We also trained DGs using one graph and apply them on another. We noticed that performance did not significantly differ when our algorithm, having been trained on any Dream challenge graph, was applied on any other of these graphs 4 . This is intuitive since the graphs corresponding to these data sets are very similar. DGs were also able to do well when using combinations of the Dream challenge data sets and the Blogosphere data set. 
However, they seemed to perform poorly when using combinations involving the WebSpam data set. The graph in that case is much larger and has significantly different characteristics than the other graphs that we considered, and thus, this result is not entirely unexpected. However, given the rest of our very encouraging results, we believe that we should be able to extend our models, using appropriate forms for functions f and g, to generalize well over different graphs. Vertex Classification Experiment. Our results for the vertex classification experiment are shown in table 4. DGs either outperform all of the competing methods by a significant margin, or match their performance when they perform near-perfectly. The most impressive result is for the WebSpam data set. The best competing method achieves an AUC of 0.67 and DGs are able to achieve an AUC of 0.98. That alone is a very impressive and encouraging result for DGs, because it indicates that they can learn to perform diverse kinds of tasks very well, without requiring any task-specific tuning. RESULTS SUMMARY Deep Graphs are able to outperform all competing methods for vertex classification tasks by a significant amount (they achieve an AUC of 0.98 when the best competing method achieves 0.67). Furthermore, Deep Graphs can learn to compute PageRank and HITS scores by only applying a vertex update equation 6 times, as opposed to hundreds or thousands of iterations that the original algorithms require. These are encouraging results that motivate future work for the Deep Graphs framework. CONCLUSION We have proposed Deep Graphs (DPs) -a generic framework for deep learning on networks and graphs. Many tasks in machine learning involve learning a representation for the vertices and edges of a graph structure. Deep Graphs can learn such representations, jointly with learning how to perform these tasks. It relies on the notion that many graph algorithms, such as PageRank, Weisfeiler-Lehman, or Message Passing can be expressed as iterative vertex updates. However, in contrast to all these methods, DGs are adaptive to the estimation problem and do not rely on the ingenuity of the designer. Furthermore, they are efficient with the cost of training and deployment being O(|E| + |V |), where E and V are the sets of edges and vertices, respectively. In particular, DGs consist of a recurrent neural network (RNN) defined over the graph structure, where the features of each vertex (or edge) are computed as a function of the neighboring vertices' and edges' features. These features can be used to perform multiple tasks simultaneously (i.e., in a multi-task learning setting) and DGs are able to learn the functions that generate these features and the functions that perform these tasks, jointly. Furthermore, learned features can later on be used to perform other tasks, constituting DGs useful in a transfer learning setting. We performed two types of experiments: one involving ranking vertices and one classifying vertices. DGs were able to learn how to compute PageRank and HITS scores with much fewer iterations than the corresponding algorithms actually require to converge 5 . Moreover, they were able to outperform the current state-of-the-art in our vertex classification experiments -sometimes by a very significant margin. These are encouraging results that motivate future work for the DGs framework. We are excited about numerous future directions for this work. Our first priority is to perform more extensive experiments with bigger data sets. 
Then, we wish to try and apply this work to knowledge-base graphs, such as that of Mitchell et al. [2015], and explore interesting applications of DGs in that direction.
2018-06-04T17:24:18.000Z
2018-06-04T00:00:00.000
{ "year": 2018, "sha1": "5242a9a4a92d819140c0bf52786eabfc24940476", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5242a9a4a92d819140c0bf52786eabfc24940476", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
270488173
pes2o/s2orc
v3-fos-license
Cervical cancer screening and vaccination acceptability and attitudes among Arab American women in Southeastern Michigan: a qualitative study Among Arab-American women in Michigan, rates of cervical cancer screening are lower than those in non-Hispanic White and Black women in the state. A deep understanding of the Arab community’s perspective on cervical cancer screening is needed to address the disparity in rates across populations in Michigan. Arab and Chaldean women across Michigan were invited to participate in Zoom-based focus groups to understand the attitudes, acceptability, and barriers of cervical cancer screening among this population. Four focus groups with a total of 19 women aged 30 to 61 were conducted. The focus groups were conducted in English, Arabic, or both languages. The guided discussion was focused on knowledge of cervical cancer and Human papillomavirus (HPV) and its transmission, attitudes towards HPV vaccination, and attitudes towards cervical cancer screening. HPV self-sampling as an alternative to traditional provider-based screening was specifically discussed as this has been proposed as a way to increase screening in hard-to-reach populations. The conversations revealed insights related to barriers at the individual and community levels for screening and vaccination, attitudes towards preventive health care including screening, a need for accessible women’s health literature, and health education. The women also discussed vaccine hesitancy related to HPV and COVID-19, suggesting a need for targeted community interventions. Study design The purpose of this study was to gather information on the attitudes and beliefs that Arab American women who reside in Michigan have about cervical cancer risk, screening, and vaccination against HPV.Any woman who identified as Arab, Arab American, and/or Chaldean aged between 30 and 65 (i.e., those eligible for cervical cancer screening via HPV testing 13 ) was invited to participate in the focus groups.Participants were recruited through women's only beauty salons in Dearborn, MI; via ACCESS social media channels and newsletters; and through community social media pages on Instagram.The goal was to recruit 3-6 women per focus group.The focus group sizes were kept small based on best practices for Zoom-held studies, due to the potentially sensitive nature of the topics, and the tight-knit nature of the community [14][15][16] . Four focus groups were conducted via Zoom 17 during the summer of 2021.One focus group was conducted solely in Arabic, two in English, and one in a combination of both languages.Women completed a brief anchoring survey with demographic questions and a self-assessment of HPV knowledge.The focus group guide provided an overall structure for the focus groups and covered beliefs about cervical cancer and HPV risk, attitudes towards traditional methods of cervical cancer screening (i.e., Pap smears), self-collection sampling methods for cervical cancer screening, screening uptake, attitudes towards HPV vaccination, and barriers that may exist for screening and vaccination.The Health Belief Model and Social Cognitive Theory were used in developing the focus group guide 18 . 
Data processing and analysis The number of women in each focus group was five, four, seven, and three.All participants identified as Arab, Arab American, and/or Chaldean.Focus group recordings were transcribed, translated if necessary, and uploaded to the qualitative software platform, Dedoose Version 9.0.17 19.Three members of the study team engaged in the coding and analysis of the focus groups (LK, HG, and LH).An inductive approach to coding was employed.Initially, one focus group was coded using a collaborative open coding process with descriptive codes 20 .A discussion was carried out to determine suitable codes and definitions, and the agreed-upon codes were then used to create a codebook.This focus group was then recoded, and the codebook was further revised after the emergence of new codes.The remaining three focus groups were then coded using this codebook.Code reports were prepared.Ultimately, the codebook included 38 primary codes with 12 sub-codes; a total of 1,216 codes were applied to 886 excerpts.Thematic content and ethnographic analysis techniques were applied to identify patterns and themes across several meetings 21 .Inter-coder reliability was assessed qualitatively throughout the process 22 . Results The four focus groups consisted of 19 women from across Michigan (see Table 1).The mean age of participants was 37.9, and 61.1% of the participants were married.The anchoring survey was completed in English by 68.4% of the participants, however, only two of the focus groups were conducted entirely in English.More than half Barriers Most of the participants were familiar with and had done a Pap test in the past.They felt that neglecting your health puts you at risk for cervical cancer.They shared their experiences with and feelings about having a Pap test.Overall, the women had negative associations with Pap tests, citing discomfort, embarrassment, and anxiety over waiting for results.Particularly, the embarrassment stems from the state of undress and position required for a Pap test, especially in front of a male physician.However, for women who were up to date on their screening, discomfort did not stand in the way of their scheduling screening test. Neglect: "I believe the people most at risk for this disease are the ones who do not maintain their health, the ones who do not do their yearly exams or failure to do diagnostic tests that have to be done yearly." (Group 3). Discomfort: "And you know, it just-, also just the-, I feel like I can feel it.The scraping feeling so I have to prepare myself mentally when I do go to a Pap Smear." (Group 1). Embarrassment: "Okay, so I feel like I have to prepare myself mentally for it, because, you know [laughs], I'm just opening up my legs for a doctor." (Group 1).Additionally, three major themes emerged from the analysis: (1) lack of health knowledge and health literacy; (2) ambivalent attitudes towards vaccination; and (3) areas of potential intervention for screening and vaccination. 
Lack of health knowledge and health literacy In addition to trying to understand what the participants' understanding of cervical cancer and its risk factors were, they were also asked about cervical cancer screening practices, both theirs and within the community.The focus group moderators shared with the participants the low cervical cancer screening rates in the Arab American community in Michigan 9,10 , and none of the participants were particularly surprised.The participants identified this lack of awareness and a lack of women's health literacy as a major barrier to screening uptake amongst the women in their communities.Several participants expressed a belief that this is more prominent among immigrants, who may have not had access to Pap tests or a formalized health education: www.nature.com/scientificreports/Immigration and health literacy: "For me, I am from Iraq, we do not-we do not have an awareness of these diseases.We do not have awareness about girls and women's health.By the time these diseases are diagnosed, it is so late.When I came to the US that's when I heard about this and started getting tested, I was afraid." (Group 3). "There are some women who if you tell them they have to do these tests, they say well back in my country I never did those tests and there was nothing wrong with me, so why do it now or tell anyone to do it now?"(Group 3). Lack of health literacy: "There isn't anything about women's health and the health needs of women and their children, there just isn't an awareness that these things are important for women, maintaining health is important to let women live their lives, which could be taking care of her husband and raising her kids.But there is just no awareness, there needs to be more." (Group 3)."There just isn't awareness, medical centers, awareness programs, doctors, or someone to tell us about our girls' health, our boys health.Especially our girls.If we do not know, how will we be able to teach them?" (Group 3). "They don't understand -they don't know the reality of these diseases.They do not see a need to go and get tested." (Group 4) Ambivalent attitudes toward vaccination Part of the discussion revolved around vaccination against HPV-participants were asked if they had heard of the HPV vaccine, and if they had children, whether their children were vaccinated.If the participants did not have children, they were asked if they would vaccinate their children.The responses were mixed.While some participants were very vocally supportive of the vaccine for all their children, others did not know that they could vaccinate their sons, and still others were not even aware of the vaccine.Some participants noted that there exist pockets of vaccine hesitancy amongst the Arab American community in Michigan.This anti-vaccine sentiment was shared by some participants in the focus groups.Many participants spoke of vaccination, broadly, and discussed the HPV vaccine in relation to the COVID-19 vaccine.They also discussed vaccines that are regularly administered to children and those immigrating to the United States.Some participants also mentioned initial concerns about the vaccine due to the stigma around sexual behavior, which they said is also reflected in community attitudes. In favor of vaccination: "When the doctor told me [about the vaccine], I said of course.Anything, you know, we want to protect our kids from everything." 
(Group 4)."I did not realize it was meant for boys until he explained, and if anything, that is going to cause-you know-some kind of protection.I was for it, so I gave it to him." (Group 2). "I wasn't aware-the doctor called me and said she had a shot that she had to take.So I asked them, what shot is this?They told me, and I said ok, no questions asked as they say.This is protection for my daughter." (Group 4). Unaware of vaccination: "To be honest, I have no idea what vaccinations my sons have gotten.My son is 16, he got some vaccines then.They get all the vaccines, but I will be asking the doctor, my children's doctor, if they've taken the [HPV] vaccine and if they need to take it.If she says yes, then of course, they will get the vaccine.Thank you so much for bringing this to my awareness.Because we did not know." (Group 3). "I never heard of a vaccine-HPV vaccine.Even the doctor never mentioned it."(Group1) Vaccine hesitant: "I see it like cases where someone take it and then like there is a big reaction to it.And that is why I did not [ vaccinate my children]. " (Group 2). "If I had a daughter, I don't think I'm gonna push for any vaccination until they make their own choices.So, like my son, I don't-, I'm not gonna vaccinate him, but if he decided later on, he wanted to take it, it's up to him.Again, I-, as the last question I mentioned, I'm not really a fan of vaccinations so that's why I'm not gonna push my son for any vaccinations." (Group 1). Stigma related to sexual behavior: "Me personally, I was against my daughter getting it just because I was not educated enough on it and I did feel like-, well she is not going to be sexually active at a young age nor should she have to take it.But her pediatrician-you know-had a very thorough discussion with me.And then I ended up having her do it." (Group 2). Areas of potential intervention for screening and vaccination The participants, regardless of their stance on vaccination, were all eager to discuss ways to improve the health of their communities.HPV self-collection was discussed as a strategy to improve screening uptake.The participants also spoke to the importance of improving health literacy and increasing awareness.They also discussed the need to combat fatalism. Self-collection Participants were shown pictures of several HPV self-collection devices 23 and asked to comment on whether they would consider using them in lieu of a Pap test.Attitudes towards self-collection were mixed.Some women were enthusiastic, citing a lack of embarrassment and the flexibility to do the test on their own time. Lack of embarrassment: "It seems really nice -using it at home and then popping it in the mail. No embarrassment, without cost. It seems comfortable, you know what I mean? I'm for the device. " (Group 4). "Revealing myself that way to the doctor is embarrassing for me. That's it. That's why I would use the device. " (Group 3) Convenience: "I possibly might, it is more convenient for me because I am at home versus having to go to the doctor. And if-, and if I am concerned about something, I can always talk to my doctor"(Group 2). Concerns about properly using device: "Personally, I would rather just go to the doctor, because its better.It's more safe that way.I just don't think that I can do it properly at home.At the doctor's office, she knows everything, so you know." (Group 3). 
Vol:.( 1234567890 Participants were also concerned about the potential costs and time associated with being responsible for the self-collection, especially if the kit had to be purchased at a pharmacy or picked up at a clinic. "But if you put it on the shelf, some women may say ok I'll do it next week, or oh I don't have enough money right now, and they'll put it off. So honestly it goes back to it: give your bread to the baker. The doctor will just follow this [up] more. " (Group 4). "I mean it defeats all purpose, why you are going to the clinic to begin with. You rather have the doctor do it and do it correctly. " (Group 2). "But like having to go and buy it, sometimes it is kind of you know, it will like reduce that chances that people would take it." (Group 2) Increasing awareness As previously noted, the participants cited a lack of awareness as a barrier to screening uptake.Participants stressed the need for tailored programming, especially for immigrant women.Several women mentioned the fact that they did not receive an adequate education about their health and well-being.The participants also noted the importance of discussing health issues with their children and modeling healthy behaviors from all members of the family. Need for awareness: "Women need awareness, education." (Group 4)."There is a lack of awareness in our Arab community, awareness is very weak.Although I say, thank God, we have progressed and have some light awareness but it's not very deep, there isn't anything about women's health and the health needs of women and their children.There just isn't an awareness that these things are important for women, maintaining health is important to let women live their lives, which could be taking care of her husband and raising her kids.But there is just no awareness, there needs to be more." (Group 4). Fatalism Among participants, there was a consensus that fatalism impacts the way their peers consider their health.The focus group participants posited that fatalism and a laissez-faire attitude toward health is an issue that prevents their peers from seeking care. Fatalism: "You can have a woman who is educated and aware, and she can meet with them and teach them about the disease and tell them they have to go get the Pap smears, and then a woman goes to her husband or her father and says, I need to have this done.They will reply saying "leave it to God." (Group 3). Laissez-faire attitude: "Because they do not think it is going to happen to them, you know.I have my good friend: her mom was just diagnosed with breast cancer stage one.They have a history of cancer in their family and my friend who is in her thirties refuses to check.I told her, I am like, "You need to go check and make sure every year, especially because it runs in your family, and like every generation has had it." So, she refuses to because she is like, "That is not going to happen to me.I am healthy." So, I think that-, well I put it out there what my thoughts are.But that is my biggest concern with cancer in the community, just not thinking it is going to happen to them.And think most people think like that. 
Discussion There is limited understanding of why screening rates for cervical cancer in the Arab American community are low.To address this gap in the literature, we carried out a series of focus groups with Arab American women residing in Michigan about their screening behaviors as well as the screening behaviors within their communities.Our study found that low cervical cancer screening rates in this population of Arab American women was due to multiple reasons: negative associations with screening, including discomfort and embarrassment; anxiety and difficulty making time to schedule appointments (termed by participants as neglect and fatalism); and a lack of cancer prevention awareness amongst the participants and community.Broadly, the women expressed mixed feelings about HPV self-collection such that it is not clear this approach, at least alone, will be successful in improving screening rates. Our results help to flesh out cervical cancer screening views as there are sparse data on attitudes towards cervical cancer prevention amongst women from the Middle East and North Africa 11,[24][25][26][27][28][29][30] .Two previous studies, one in Dearborn, Michigan, and one in New York City, focused on the attitudes of women who identify as Arab, Arab American, or Chaldean 25,31 .Similar to our findings, the KinKeeper study carried out in Dearborn, Michigan among women aged between 21 and 70 11,[28][29][30] identified lack of a doctor recommendation and competing interests as reasons for low screening uptake.Our participants indicated health literacy and awareness as barriers to cervical cancer screening on a community level which was not observed in the KinKeeper study among their Arab American participants.Further, unlike the KinKeeper study, fatalism was a common theme in our study and the study conducted in New York City among Arab American women. Relatedly, three studies on cervical cancer screening have been carried out among Muslim women.The studies conducted in San Francisco and Chicago 24,27 included a limited number of Arab and Arab Americans whereas the study in Pennsylvania exclusively explored the attitudes of Arab Muslims 26 .Similar to our findings, the San Francisco-based study which was carried out among those aged 18 and 25 identified family pressures and health care costs and access 24 .The study carried out in Pennsylvania also identified healthcare costs and access as barriers to screening uptake 26 .Conversely, the Chicago-based study carried out amongst Muslim Americans aged between 18 and 65 found that viewing health problems as a punishment from God was associated with lower uptake 27 , a theme that did not emerge in our study.Fatalism was not associated with cancer screening uptake in the Chicago study.The differences across studies could be due to the difference in populations: both our study and the New York study 25 exclusively recruited Arab Americans, regardless of religion, whereas in Chicago, the participants were Muslims, regardless of ethnicity 27 . 
Beyond barriers to screening, our study evaluated the acceptability of various HPV self-screening methods as an alternative to Pap tests, which has not yet been done in this population. The acceptability of alternative screening methods was mixed and would likely need to be scaffolded by health education efforts by trusted community organizations. Our study also evaluated attitudes towards HPV vaccination. Additionally, our study sheds light on a growing trend in the Arab American population: sentiments of vaccine hesitancy [32][33][34]. Vaccine hesitancy has strongly emerged as a topic warranting deep understanding in recent years, especially with the rise of the COVID-19 pandemic. This rising attitude has been documented amongst migrant groups globally and in the United States [35][36][37][38][39][40][41]. Because cervical cancer is a preventable disease through a combination of screening, treatment of precancerous lesions, and vaccination, efforts to understand vaccine hesitancy in Arab American communities are crucial. However, in the United States there is hesitancy overall towards the HPV vaccine 42, as well as documented hesitation amongst immigrant groups [43][44][45][46]. To our knowledge, there has been no study evaluating vaccine hesitancy amongst Arab Americans, except for hesitancy to COVID-19 vaccination 33,34. Further investigation is warranted to understand this phenomenon in the Arab American population.

Our study has several notable strengths. First, to the best of our knowledge, this is the first qualitative study to explore cervical cancer screening and HPV vaccination amongst women who identify as Arab, Arab American, and/or Chaldean. Previous studies focused on Muslim women 24,27 or Arab Muslim women 26, which fail to account for the diversity of the Arab American population. Further, extrapolating results from Muslim women to describe reasons why Arab American women have poor screening is problematic, as the majority of Muslim women in the United States are not Arab 47.

While Michigan is an ideal place to study Arab American health due to the densely concentrated population in Southeastern Michigan 48, Arab Americans live all over the United States and its territories 49; as such, there may be significant differences between Arab Americans residing in California or New York as opposed to Michigan. For example, a general health-focused study carried out in New York amongst Arab American immigrants found that a lack of culturally competent care and concerns about discrimination and potential deportation were the biggest barriers to seeking cervical cancer screening 25. However, these were not concerns shared by participants in our focus groups. This indicates that there may be geographically patterned reasons why Arab American women have poor uptake of cervical cancer screening.

The results of our study provide several exciting future directions for research. The attitudes towards vaccination, particularly vaccine hesitancy, warrant further exploration, especially as vaccination becomes a contentious subject in the United States. Though mixed, the overall positive attitudes towards HPV self-collection as a form of cervical cancer screening are also encouraging and indicate potential for piloting a self-collection method among Arab, Arab American, and/or Chaldean women to determine if self-collection could improve screening rates.
Lastly, our findings indicate a strong need for a focus on health awareness and health literacy amongst this population. Within the focus groups, various women expressed how they themselves tell their friends and family about the importance of cervical cancer screening and health maintenance. However, there was a consistent discussion about the need for a concerted effort to educate women of all ages within the Arab American community. Our findings also indicate that awareness campaigns and improvements in health literacy need to occur at the community level. Community-based interventions to improve health outcomes have been practiced with success amongst various minority populations. For example, a cluster randomized trial found that non-Hispanic Black men with uncontrolled hypertension had a reduction in blood pressure when their barbers engaged in health promotion and encouraged their clients to participate in pharmacist-led interventions 50. Additionally, a randomized trial found that community healthcare worker-facilitated HPV self-sampling led to increased cervical cancer screening amongst Haitian-American women 51. Community workers, organizations, and centers, such as houses of worship and ethnic affinity centers, are uniquely positioned to engage in health awareness campaigns and interventions.

Conclusions
Women who identify as Arab, Arab American, or Chaldean in Michigan identified many reasons why there is poor uptake of cervical cancer screening in their communities. The primary barriers they identified were a lack of health knowledge and health literacy. Through the focus groups, vaccine hesitancy also emerged as a potential barrier to cervical cancer prevention within this community.

Table 1. General characteristics of participants (N* = 19). *Complete data available for 17 participants; partially complete data available for 1 participant; and missing data for 1 participant.

Table 2. Topics and themes discussed and explored.
2024-06-15T06:16:07.865Z
2024-06-13T00:00:00.000
{ "year": 2024, "sha1": "a5dfe06affba046c56c8297dd2ae1d3a30b59bc9", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "bbb1b92d2221009e7c168e57a692ffc592aa8cb7", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
30797068
pes2o/s2orc
v3-fos-license
Evaluation of Rapid Prenatal Human Immunodeficiency Virus Testing in Rural Cameroon ABSTRACT Pregnant women (n = 859) in rural Cameroonian prenatal clinics were screened by two rapid human immunodeficiency virus (HIV) antibody tests (rapid tests [RT]) (Determine and Hema-Strip) using either whole blood or plasma. One additional RT (Capillus, HIV-CHEK, or Sero-Card) was used to resolve discordant results. RT results were compared with HIV-1 enzyme immunoassay (EIA) and Western blot (WB) results of matched dried blood spots (DBS) to assess the accuracy of HIV RTs. DBS EIA/WB identified 83 HIV antibody-reactive, 763 HIV antibody-nonreactive, and 13 indeterminate specimens. RT results were evaluated in serial (two consecutive tests) or parallel (two simultaneous tests) testing algorithms. A serial algorithm using Determine and Hema-Strip yielded sensitivity and specificity results of 97.6% and 99.7%, respectively, whereas a parallel RT algorithm using Determine plus a second RT produced a sensitivity and specificity of 100% and 99.7%, respectively. HIV RTs provide excellent alternatives for identifying HIV infection, and their field performance could be monitored using DBS testing strategies. Rapid testing (RT) methods for detecting the presence of human immunodeficiency virus (HIV) antibodies in serum or plasma were developed in the late 1980s, and improved RT assays are continually being introduced. Throughout the last decade, these assays have demonstrated excellent sensitivities and specificities, but many tests still require a laboratory to process the specimens and to maintain the integrity of the test components (5,10,19,20,26). The World Health Organization (WHO) has recommended the use of rapid HIV tests for blood safety, for surveillance, and for patient diagnosis in prenatal clinics or voluntary counseling and testing (VCT) centers (1,3). The benefits of such testing include the ability to inform patients of their HIV serostatus at the time of testing (11,25) and to promptly identify HIV-infected expectant mothers so that therapy for the prevention of mother-to-child transmission (PMCT) of HIV can be provided in a timely manner (16). Ideally, these tests would be used in serial or parallel testing strategies that would allow for screening and confirmation of initially reactive specimens in a single clinic visit (1). New HIV RT that can use whole-blood specimens have been available for several years; however, little data exist about their performance in rural, resource-poor settings. To address this question, an effective method for assuring the quality of wholeblood RT results is needed. Whole blood dried on filter paper (dried blood spots [DBS]) has been applied in large-scale HIV surveillance studies for well over a decade (13). DBS can easily be collected at the same time as the whole-blood specimens for the rapid HIV assays and could be used to validate the on-site RT results. DBS are well suited for this process, since they are easy to collect, to store, and to ship to larger laboratories for supplemental testing. Several HIV antibody detection enzyme immunoassays (EIA) are approved by the United States Food and Drug Administration (FDA) for use with DBS, and other internationally available assays could be adapted for DBS testing. DBS collection and storage has been well standardized (2), and quality control and quality assurance programs have been in use for over 15 years with effective protocols readily available (14). 
Our study evaluated the performance of several HIV RT algorithms in rural, prenatal clinics in Northwest Province, Cameroon, and compared the RT results to those determined by EIA and Western blotting (WB) from matched DBS specimens. MATERIALS AND METHODS Patient counseling and specimen collection. After approval by the local Institutional Review Board, the study was conducted in Cameroon at prenatal clinics in Mbingo and Banso Baptist Hospitals and in Belo Baptist Integrated Health Center from February 2000 to November 2000. The patients received voluntary HIV counseling and testing at no charge as part of standard prenatal care. The counselor met privately with each patient, obtained a sexual history, discussed the risks and benefits of all antenatal tests, and obtained informed consent. One of the benefits of testing was the provision of nevirapine preventive treatment at no charge to HIV-infected women and to their newborns. Only 5% of the patients refused HIV testing. The laboratory collected 2 ml of whole blood by venipuncture for the prenatal testing, which included HIV, if the patient consented. The blood was dispensed into test tubes containing EDTA and gently mixed to prevent coagulation. Five spots of whole blood (100 l/spot) were pipetted onto blood collection cards (Schleicher and Schuell grade 903; Keene, NH) from the venipuncture collections. The spots were air dried for 3 to 4 h, put into moisture-resistant bags (Bitran series; Fisher Scientific, Atlanta, GA) with desiccant (Multisorb Technologies, Buffalo, NY), and were stored frozen at Ϫ20°C prior to shipment to the Centers for Disease Control and Prevention (CDC), Atlanta, Ga., for testing. The remaining whole blood was centrifuged for 5 min at 1,000 rpm, and the plasma was removed and stored at 4°C. Whole blood was used for the Hema-Strip test; plasma was used for all additional RTs. Trained laboratory staff performed the testing at all sites, and prenatal clinic nurses or counselors provided postcounseling for all of the laboratory results on the same day. Rapid tests. RTs in the study were Determine HIV-1/2 (DT) (Abbott Laboratories, Tokyo, Japan), Hema-Strip (HS) (ChemBio, Medford, NY), Capillus HIV-1/HIV-2 (CP) (Trinity Biotech, Galway, Ireland), Sero-Card (SC) (Trinity Biotech), and HIV-CHEK (HC) (Johnson and Johnson, New Brunswick, NJ). DT and HS are lateral flow assays; CP is an agglutination assay; and SC and HC are flowthrough, immunodot assays (7). The assays were selected either for their ability to use whole-blood specimens, for their simplicity, or for their capacity to be used as supplied without additional equipment or reagents. The HS tests were performed according to the manufacturer's protocols using whole-blood specimens and were scored accordingly. The remaining tests were performed according to the kit inserts using the plasma aliquots to complete the HIV screen and to resolve discordant results. Prospective evaluation of rapid testing algorithms. Women (n ϭ 859) attending the three prenatal clinics were tested by two RTs in parallel using wholeblood or plasma specimens. Discordant results were resolved using a third RT. The choice of the third test was dependent on the tests available in the clinics at the time of testing. The data were analyzed as if parallel and serial testing algorithms had been used. Parallel RT algorithms compared the results of two different rapid tests. 
Concordant reactive and nonreactive specimens were considered definitive, while discordant results were resolved with a third test. The concordant results of two of the three assays were taken as the correct result. The serial testing algorithm considered initially nonreactive specimens as true HIV antibody-negative specimens. Reactive samples were evaluated with a second test and, if reactive, were considered as HIV antibody positive. Discordant specimens were resolved using the third RT results. HIV-1 serologic testing. The diagnostic assays used to test the DBS, plasma, or whole-blood specimens employ different testing formats and different viral antigens and were used in different combinations to screen and then to confirm initial reactive test results (Table 1). HIV-1 reference testing was performed at the CDC, using DBS protocols approved by the U.S. Food and Drug Administration for the Genetic Systems HIV-1 rLAV (rLAV) kit (Bio-Rad Laboratories, Hercules, CA). Initially reactive specimens were retested by the same assay in duplicate, and samples that were reactive in at least two of the three tests were confirmed by Western blot (WB). WB testing was done using a miniaturized WB method previously described for DBS specimens (12) or by the Novapath HIV-1 Western blot (Bio-Rad Laboratories) as follows. Specimens were prepared by eluting a 6-mm punch of the DBS with 200 l of 0.15 M phosphate-buffered saline plus 0.05% Tween, pH 7.4 (PBST) (Sigma Chemical Co, St. Louis, MO) for 2 h with shaking, or overnight at 4°C. One-hundred microliters of eluted DBS was added to 900 l of the specimen diluent in the Bio-Rad WB kit, and the procedure was performed as described in the kit insert. The method was validated and quality controlled using strongly reactive, weakly reactive, and nonreactive DBS controls provided by the National Center for Environmental Health of CDC (15). The HIV-1 EIA/WB testings of DBS were used as the referent results and were performed without knowledge of the RT results. HIV-1/HIV-2 serologic testing. All DBS were tested for HIV-1/HIV-2 antibodies using a modified procedure for the Uniform II plus O assay (Organon Teknika, Boxtel, The Netherlands). A 6-mm punch from the DBS was eluted in 200 l of PBST overnight. After mixing, 75 l of the eluate was mixed with 75 l of the specimen diluent in the appropriate well, and the remainder of the assay was performed according to the manufacturer's instructions. Specimens reactive in the Uniform II assay were further tested by Select HIV-1/HIV-2 assay (Biochem Immunosystemes, Quebec, Canada) to identify the presence of HIV-1 or HIV-2 antibodies. Fifty microliters of DBS eluate was mixed with 50 l of Select dilution buffer, and the assay was performed according to protocol. The discrimination of HIV-1 from HIV-2 specimens is based on relative signal to cutoff (S/CO) ratios which was calculated according to procedures described in the product insert. Reactive specimens for HIV-1 and HIV-2 were further tested by the HIV-1 WB as previously described or by HIV-2 WB (Genelabs Diagnostics, Singapore) as follows. One-hundred microliters of eluted DBS was added to 900 l of the specimen diluent, and the procedure was performed as described in the kit insert. After EIA/WB testing, the specimens were classified as follows: 83 (10.1%) HIV-1 antibody reactive; 763 (88.4%) nonreactive for HIV-1 antibody; and 13 (1.5%) rLAV reactive, WB indeterminate (see Fig. 3A). 
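The serial and parallel decision rules described above reduce to a few lines of logic. The sketch below is a minimal illustration, not the study's own software; the function names and the "R"/"N" result encoding are ours, and discordant pairs are resolved with a third rapid test exactly as described in the algorithms above.

```python
# Sketch of the serial and parallel rapid-test (RT) algorithms described above.
# Results are strings: "R" (reactive) or "N" (nonreactive). Names are illustrative.

def serial_algorithm(test1, test2, test3=None):
    """Serial: a nonreactive screen ends testing; a reactive screen is retested."""
    if test1 == "N":
        return "negative"            # initial nonreactive taken as true negative
    if test2 == "R":
        return "positive"            # two consecutive reactive results
    # discordant (test1 reactive, test2 nonreactive): resolve with a third RT
    return "positive" if test3 == "R" else "negative"

def parallel_algorithm(test1, test2, test3=None):
    """Parallel: two simultaneous RTs; a third RT resolves discordant pairs."""
    if test1 == test2:
        return "positive" if test1 == "R" else "negative"
    # agreement of two of the three assays decides discordant specimens
    votes = [test1, test2, test3].count("R")
    return "positive" if votes >= 2 else "negative"

# Example: first RT reactive, second nonreactive, third nonreactive
print(serial_algorithm("R", "N", "N"))    # -> negative
print(parallel_algorithm("R", "N", "N"))  # -> negative
```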
The indeterminate specimens were excluded from further analysis, since the true status of the sample could not be determined. RESULTS Patient population. The patient population (n ϭ 859) consisted of women of childbearing age attending the three prenatal clinics. The age of the participants ranged from 15 to 50 (mean, 26.6 years; median, 24.2 years). Twenty-two occupations were reported, with farmer (42%) and housewife (30%) being the most frequent. Eighty-two percent of the women were married, and 46 percent reported only one sexual partner over the previous 3 years. Two thirds of the women reported completing 7 years or less of formal schooling, and 7% indicated some postsecondary education. HIV rapid testing performance. The study was done to determine the effectiveness of RTs to identify HIV-infected individuals by comparing the RT results with those determined from the matched DBS. However, in field applications many factors can impact the testing process, and, in this case, the selected RTs were not available at all sites throughout the course of the study; thus, the number of tests performed with each assay varied ( Table 1). The effectiveness of the RTs was evaluated using serial or parallel testing algorithms. Sensitivity and specificity of each algorithm were calculated based on the total number of RT and matched, definitive DBS results available from the three collection sites (n ϭ 846). Serial rapid testing algorithm. Field specimens tested by RT were evaluated in a serial testing algorithm (Fig. 1), and the data were compared to the results of the DBS reference testing. DT detected an initial 94 HIV antibody-reactive specimens and 752 antibody-negative reactions (750 true negatives, 2 false negatives). Secondary testing of the initial reactive samples was performed using HS (n ϭ 73) or HC (n ϭ 21). HS found 63 HIV antibody-reactive samples (61 true positives, 2 false positives). Ten specimens had discordant results between DT and HS, and these were retested by either CP or SC. One specimen was reactive by each tertiary test, but both of these were nonreactive by the reference methods (2 false positives). Of the 21 specimens tested by HC, 19 HIV antibody-positive specimens were identified (19 true positives). Two specimens were HIV antibody negative by HC. When these were tested by SC, one HIV antibody-positive and one antibody-negative specimen were identified, and these results were concordant with the DBS results. The sensitivity and specificity of the serial RT algorithm using DT as the screening assay were 97.6% and 99.7%, respectively. Parallel testing algorithm. Results from an additional 29 women were excluded from this analysis because only DT results were available. Of the 817 specimens tested by at least two RTs, 82 were reactive by DT (Test 1) and by either HS or HC (Test 2); 718 were concordantly seronegative by two assays (Fig. 2). Retesting the discordant specimens with Test 3 (CP, 12 specimens, and SC, 5 specimens) resulted in 2 true-positive, 2 false-positive, and 13 true-negative results. In total, the parallel algorithm classified 86 specimens as HIV antibody reactive and 731 as HIV antibody negative. Eighty-four of the 86 rapid test-reactive specimens were confirmed as seropositve by DBS EIA/WB, and all of the 731 seronegative specimens were nonreactive by DBS serology (sensitivity, 100%; specificity, 99.7%). 
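The sensitivity and specificity quoted above follow directly from the confusion counts against the DBS EIA/WB reference. As a short check, the sketch below uses the parallel-algorithm counts reported in this paragraph (84 true positives, 2 false positives, 731 true negatives, 0 false negatives).

```python
# Recomputing the parallel-algorithm performance from the counts reported above.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fp, tn, fn = 84, 2, 731, 0   # parallel RT algorithm vs. DBS EIA/WB reference

print(f"sensitivity = {sensitivity(tp, fn):.1%}")   # 100.0%
print(f"specificity = {specificity(tn, fp):.1%}")   # ~99.7%
```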
Attempts were made to contact the two women with false-positive RT results, but they could not be located in these remote settings, which had no phones or street addresses.

EIA detection of HIV-1/HIV-2 antibodies from DBS. Since rLAV had such a high initial reactive rate and detects antibodies only to HIV-1, this assay may not be the most appropriate for screening African specimens or for quality assurance purposes. The DBS (n = 858; 1 specimen was exhausted) were also screened with the HIV Uniform II plus O (UNF) assay, which identified 138 specimens as initially reactive, including all but one of the HIV antibody-positive specimens identified by rLAV/WB (Fig. 3B). The discordant specimen was slightly below the cutoff of the UNF assay and was weakly reactive by rLAV and by WB (only gp160, p55, and p24 bands were observed). All of the UNF initially reactive specimens were further tested using the Select HIV-1/2 (SEL) assay. Results of the HIV-1 SEL confirmed 82 HIV-1 antibody-positive specimens, and all but one of the specimens that were rLAV reactive but WB indeterminate (Fig. 3A) were nonreactive when tested by the UNF/SEL testing algorithm for antibodies to HIV-1/HIV-2 (Fig. 3B).

Reagent cost of testing algorithms. The reagent cost of testing 846 specimens by the serial testing algorithm was $1,481 ($1.75 per specimen) and was less than half the cost of the parallel algorithm: $2,894 for the 817 specimens ($3.54 per specimen). These costs reflect all testing done within each algorithm, including the resolution of discordant specimens. Both RT algorithms are significantly less expensive than the $7.76 per specimen cost of the EIA/WB for the DBS specimens. The use of two EIAs (UNF and SEL) for the DBS testing reduced the number of WB tests needed to resolve all specimens to 19 (6 HIV-1, 13 HIV-2) and had an average cost per specimen of $2.34.

DISCUSSION
The World Health Organization (WHO) has encouraged the use of RT in areas where EIA testing is not feasible and where the rapid return of test results can significantly improve counseling and treatment options, such as voluntary counseling and testing (VCT) centers and prenatal clinics (4). In addition to lower costs, a major clinical advantage of RTs is the ability to use them in serial or parallel testing algorithms to screen patient specimens and to retest initial reactive results in a single clinic visit (21). Evaluations of combinations of HIV RTs have yielded 100% sensitivity and specificity; however, these studies have used serum or plasma as the specimen, and most were performed in laboratory settings (6,8,22,23). Our field study in rural Cameroon used whole-blood and plasma specimens from prenatal patients for rapid HIV screening, so that same-day posttest counseling could be done and nevirapine therapy administered to infected women and their infants to prevent mother-to-child HIV transmission. Since this study was conducted, this program has expanded to over 100 facilities in 6 of the 10 provinces of Cameroon (24). The use of HIV RTs eliminates the indeterminate status that results from the EIA/WB testing algorithm. All of the indeterminate specimens except one were negative by at least two of the RTs and by the UNF and SEL EIAs. Twelve indeterminate specimens had characteristic banding patterns of one to three core antigen-related bands, which appear to be nonspecific. In population-based studies, 10 to 15% of the EIA initial reactive specimens may show one or more gag-related bands on WB. However, most of these specimens do not represent developing HIV infections (9,17).
The one specimen that was reactive by the RT and had an indeterminate status could have been a recent infection, since WB results displayed some glycoprotein and polymerase antibody reactivity. Thus, the RT results of these 13 WB-indeterminate specimens that were reported to the patient were consistent with the results of the dual EIA testing strategy for the DBS (except for the one RT-positive specimen) and were probably correct. Efforts were made to determine the outcome of the 13 women with indeterminate HIV WB results, but no additional information was available at this time; thus, these data were excluded from the analyses. The choice of serial versus parallel rapid testing algorithms relates to cost factors as well as performance. The study by Koblavi-Deme et al. demonstrated that the added cost of the parallel algorithm is about 2.5 times as much as the serial algorithm in reagents alone and was not warranted, since their sensitivity and specificity were not improved (18). A cost analysis of the tests used in this study yielded similar results. In our study, however, two additional HIV antibody-positive specimens would have been identified by the parallel algorithm versus the serial algorithm, albeit with significant increases in reagent costs. The decision on the choice of different RT HIV testing strategies will depend on the performance characteristics of the testing algorithms in a given country or region, the availability of RTs, the purpose of the testing, and the program budgets. For the PMCT program and other VCT in Cameroon, clients with a single negative RT result in the serial RT algorithm are advised about the possible time delay between infection and HIV seropositivity, and they are directed to seek a repeat test in 3 to 6 months. However, blood donations are currently screened using a parallel RT algorithm, which minimizes the risk of HIV infection through contaminated blood products. Effective methods for providing quality assurance for RT results have not been developed. If DBS specimens were collected at the same time that the RTs were performed, the DBS could be tested later to determine the accuracy of RT results. Tests approved for use with DBS in the United States are not generally available in foreign markets and do not test for antibodies to HIV-2. The third-generation EIAs for the detection of antibodies to HIV now available internationally significantly improved sensitivity by increasing the volume of sample added to the assay. Such volumes are unattainable when eluting DBS, since the amount of serum in a 6-mm punch is approximately 5 l. However, we did not observe major differences in the ability to detect HIV antibody-positive specimens in our study using the Uniform II plus O assay (only one weakly reactive specimen was not detected). In fact, the signal-to-cutoff ratios (S/CO) were superior to those determined by the rLAV test due to the lower cutoff associated with the UNF assay. Similar observations were noted for the Select HIV-1/2 assay on the limited number of specimens that required additional testing. The specificities of the UNF and SEL tests were actually superior to that observed with the FDA-licensed rLAV kit, which had a substantial number of repeatedly reactive specimens that were slightly above the cutoff value and had to be resolved by WB. Absorption of moisture by the DBS or exposure to heat are also known to affect DBS quality (15). 
The initial reactive rate of the DBS specimens tested in this study decreased as personnel became more familiar with DBS collection and storage requirements. With additional modifications, the two EIAs could possibly improve specificity, while yet maintaining the same level of sensitivity. The use of the UNF followed by the SEL test in our study could serve as a model of a dual EIA method for confirming RT results using DBS. Optimal conditions for using existing third-generation EIAs with DBS specimens must be determined and their sensitivity and specificity verified, particularly with recent seroconversion specimens. We are currently evaluating additional EIAs for use with DBS using an elution buffer that would be compatible with the different EIAs and would make a dual EIA testing algorithm for DBS specimens more feasible. In summary, we have shown the effectiveness of serial and parallel HIV RT algorithms for identifying HIV-infected women in rural clinics in order to institute therapy to reduce perinatal HIV transmission. This program has been greatly expanded in Cameroon, and the efficacy of the intervention strategy is reported elsewhere (24). Additionally, the utility of DBS for confirming HIV RT results and for quality assurance purposes has been demonstrated. Adoption of RT and DBS testing strategies should be considered when expanding HIV testing programs to provide HIV-1 perinatal intervention therapies and to extend seroprevalence studies into global populations.
2018-04-03T01:46:04.745Z
2005-07-01T00:00:00.000
{ "year": 2005, "sha1": "31584cb0646074086285f6db2a513e23b703c2dd", "oa_license": null, "oa_url": "https://cvi.asm.org/content/12/7/855.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "150b7484ca6443d27ba5a4e4e43cd15ae6899f5e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244257811
pes2o/s2orc
v3-fos-license
NOx emission modeling at cement plants with co-processing alternative fuels using ANN
The use of wastes as alternative fuel (AF) at cement plants for clinkerization processes has increased in recent years as a form of sustainable waste management. Such co-processing of AFs at cement plants causes some changes in the composition of the plant emissions, depending on waste type, kiln thermal power, thermal substitution rate, etc. Emissions of nitrogen oxides (NOx) are among the major environmental concerns in these plants. The paper includes a modeling study of NOx emissions at a cement plant during the co-processing of AFs. Due to the nonlinear characteristic of the relationship between operational parameters and NOx emissions, the artificial neural network (ANN) approach was applied and studied. The study showed that NOx emissions can be predicted satisfactorily by using ANN at cement plants. Therefore, the model proposed may be used by cement plant operators to estimate their emission levels before starting the use of a new fuel source.

Introduction
Several studies on NOx emissions from cement kilns have been conducted. While using alternative fuel, gas flow and kiln stoichiometry have been studied, and it has been observed that alternative fuel changes kiln conditions under different scenarios [19]. In order to reduce the NOx emissions generated in the calciner, a laboratory-scale fluidized bed was built, trials were made, and the efficiency of CaO, CO, and CO2 concentrations in reduction was examined [20]. The effects of rotary kiln gas composition on NOx and on ammonia leakage in SNCR systems were investigated by rotary kiln simulation [21]. Considering the operational conditions of cement plants and only conventional fuel consumption, emission estimates were made with the help of the ASPEN Plus program and compared with the standards [16]. An artificial neural network model has also been developed for ammonia emission in cement mixtures made with fly ash from thermal power plants that reduce NOx by SNCR and SCR methods [22]. The studies that have been conducted on NOx reduction do not involve the use of alternative fuels but focus on the efficiency of reduction systems. Since predicting at what levels the NOx emissions will occur before the waste is fed may increase alternative fuel usage rates in cement plants, allow operation that preserves the kiln conditions, and eliminate the need for an additional NOx reduction measure, it is necessary to study cement plant emissions with a prediction model.

The aim of this study is to estimate the NOx emissions by using artificial neural network modeling while the thermal substitution rate of alternative fuel changes. With the model established with an artificial neural network, the aim is to reach the highest level of waste usage and determine the waste mixture in the right proportions without exceeding emission limit values. In this way, depending on the type and amount of waste to be used, it will be possible to preset the operating conditions of the kiln, so production can be made without sudden changes in operating conditions and without damaging the product quality. Knowing the emissions caused by waste mixtures and thermal substitution rates will facilitate kiln operation and prevent sudden emission releases.
The estimation of the NOx emissions generated under different kiln conditions, at different mixing ratios, and when different wastes are fed will facilitate the selection and supply of waste, will allow the kiln conditions to be kept constant with respect to the emissions that occur, will eliminate the need for an additional reduction system, and thus will provide environmental and economic benefits. Since no studies were found that estimate emissions associated with waste use in the rotary kilns of cement plants, a study was conducted on the estimation of NOx emissions during waste usage to address this gap in the literature. In this way, the use of alternative fuels, towards which the world is moving rapidly, is expected to enable the reduction of fossil-based fuel use by determining the effects on plant NOx emissions and to show the effects on air pollution through its possible contribution to the reduction of emissions.

Cement is a hydraulic binder that can harden in water and air and then has a certain strength and volume, and its production takes place by firing and grinding natural raw materials such as clay and limestone at high temperatures. Other important inputs of production besides raw materials are electricity, conventional fuels such as coal, petroleum coke, fuel oil, and natural gas, and secondary fuels grouped as end-of-life tires, fuel derived from waste, waste oils, and waste sludge. Technical specifications of the cement plant were given in Table 1.

To use any type of waste as an alternative fuel in the cement industry, it is necessary to know the composition of the fuel. Generally, the priorities in fuel selection are cost and easy accessibility. In addition, physical properties such as calorific value, ash content, moisture, toxicity (organic content, heavy metals), volatile content, density, size, and homogeneity are also important parameters. To use liquid, solid, coarse, or powdered wastes in the kiln, there should be flexible feeding systems. Waste can be fed from the kiln inlet, main burner, or pre-calciner.

The study was carried out in an integrated cement plant with a preheater dry system kiln. The plant is a typical cement plant including a crusher, a raw material homogenization unit, a raw meal mill where the raw materials are ground and prepared to be fed into the kiln, a rotary kiln where clinkerization (the combustion process) takes place and emissions are released, a waste storage and feeding system, cement mills, and a packaging unit. Emissions in the study were received from the continuous emission measurement system (CEMs) connected to the main stack of the rotary kiln. The rotary kiln in this study is 64 m long and 4.6 m in diameter. The cement plant monitors the dust and gas emissions released into the atmosphere from the main stack, and calibration of the measurement device (Quality Assurance Level 2, "QAL2") was carried out by an accredited measurement laboratory. The device is allowed to be calibrated only by an accredited laboratory that is assigned to calibration.

This study was carried out by feeding different wastes to a preheater rotary kiln through the main burner and/or kiln inlet and monitoring the plant under real conditions for four months to create a NOx emission estimation model using an artificial neural network.
No waste feeding, single feeding, and/or mixed feeding of waste, the feeding amounts, and the mixing ratios were determined by the plant according to the wastes that were available and could be supplied. The kiln conditions and the emissions that were released into the atmosphere during the feeding of the wastes were monitored in real time. While the thermal substitution rate of alternative fuel varied over a wide range from 0% to 39%, the monitored 120-day NOx emission data were received directly from the continuous measurement device on the main stack and monitored without interfering with the actual conditions of the kiln. The transmission, recording, and evaluation of NOx data received from the CEMs to the electronic data evaluation system were based on 10-second average values. The 10-second readings were recorded as minute, half-hour, and daily averages.

ANN Application Procedure and Modelling
The aim is to predict the NOx values obtained from the continuous emission measurement system. During the study, the number of days when alternative fuel was not used was only 10 out of 120 d. Alternative fuels were fed into the kiln through the main burner and/or kiln inlet. The alternative fuels were UT, TDF, MIW, and FBW; UT and TDF were mainly used at the kiln as AF. The network was designed with an input layer consisting of 10 neurons, a hidden layer, and an output layer predicting the NOx emission. A total of 120 data points obtained during the study were used for ANN modeling for training, validating, and testing the network. All data used in the modeling are given in Table S1 in the Supplementary Material. Statistical values of the parameters used in modeling are given in Table 2. Of the 120 data points, 84 randomly selected points were used for training and 18 for validation. The remaining 18 data points, which were never introduced to the network, were used for testing the network. The input and output data used should be given to the network in a certain format.

NOx Emissions with Co-processing AF
The NOx emissions used in this study were received from the continuous emission monitoring device as daily averages. AF consumption data during co-processing were obtained from the plant daily report as the total amount used in a day. All data were obtained while the plant was running; no laboratory data were used in this study. The real condition of the kiln during co-processing was observed, and all evaluation was made according to these real values. Changes in kiln NOx emission values were observed while the kiln ran with no AF feeding, single-type AF feeding, and AF mixture feeding at different substitution rates of AF consumption. The results showed that kiln behavior changes with the type and thermal substitution rate of AF. Also, it was determined that a higher substitution rate of AF did not provide a higher reduction in NOx emission. The investigated data show that, to decrease NOx emission at a higher rate, the AF type and substitution rate are important and have to be set at their optimum. During the 120 days, the kiln was run without AF on only 10 days, and the NOx emission average of these days was calculated as 626.9 mg/Nm3. The average values and reduction rates of the AF mixtures fed were calculated as 437.3 mg/Nm3, 30%; 314.47 mg/Nm3, 50%; 441.5 mg/Nm3, 30%; and 559.2 mg/Nm3, 10%, respectively. Fig. 1(a) and (b) show NOx emission with single-type AF co-processing. It can be seen from these graphs that it is not possible to say that a higher AF rate provides a higher reduction in NOx emission.
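Before turning to the mixture results, a minimal sketch of the kind of ANN regression setup described in this section is given below, using scikit-learn's MLPRegressor. The feature names, the hidden-layer size, and the synthetic data are placeholders (the paper's exact inputs and architecture are documented in Table S1 and Table 2); only the 84/18/18 training/validation/test partition mirrors the description above.

```python
# Illustrative sketch of an ANN regression model for daily NOx emissions.
# Feature values, target values, and the hidden-layer size are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((120, 10))          # 120 daily records, 10 operational inputs
y = 300 + 400 * rng.random(120)    # daily mean NOx, mg/Nm3 (synthetic stand-in)

# 84 / 18 / 18 split (training / validation / test), as described above
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=84, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=18, random_state=1)

# Scale inputs and target to a bounded range before feeding the network
x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
X_train_s = x_scaler.fit_transform(X_train)
y_train_s = y_scaler.fit_transform(y_train.reshape(-1, 1)).ravel()

model = MLPRegressor(hidden_layer_sizes=(12,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=1)
model.fit(X_train_s, y_train_s)

# Predict daily NOx for the held-out test days and map back to mg/Nm3
pred_test = y_scaler.inverse_transform(
    model.predict(x_scaler.transform(X_test)).reshape(-1, 1)).ravel()
print(pred_test[:5])
```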
Fig. 2(a) and (b) show NOx emission with mixtures of AF co-processing. It can be seen from these graphs that a higher reduction in NOx was reached by consuming the UT+TDF+MIW mixture. The graphs show that at a 12-14% thermal substitution rate of UT feeding, a 38.7% reduction in NOx emissions was reached. When UT+TDF+FBW was fed into the kiln, NOx emissions decreased at a higher rate than with the use of UT+TDF alone. All NOx emissions during co-processing were below the NOx level that occurred at the kiln without co-processing. The performance values obtained from the analysis of the NOx model are shown in Fig. 3. The graphic in Fig. 6 was prepared in order to determine the differences between the experimental and predicted values. Here, N represents the number of data points, P the estimated data value, E the experimental data value, and Ē the average of the experimental data. Statistical parameters obtained from the models are given in Table S2.
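The statistical parameters built from N, P, E, and Ē as described above can be computed along the following lines. The choice of RMSE and R² below is illustrative, since the exact formulas are defined by the paper's own equations, and the sample values are synthetic.

```python
# Illustrative computation of common model-performance statistics from
# experimental (E) and predicted (P) values; treat the metric choice as an example.
import numpy as np

E = np.array([626.9, 559.2, 441.5, 437.3, 314.5])   # experimental NOx, mg/Nm3 (synthetic)
P = np.array([610.0, 570.1, 450.2, 430.8, 330.0])   # model predictions (synthetic)

n = len(E)
e_bar = E.mean()                                    # average of the experimental data
rmse = np.sqrt(np.mean((P - E) ** 2))
r2 = 1.0 - np.sum((E - P) ** 2) / np.sum((E - e_bar) ** 2)

print(f"N = {n}, RMSE = {rmse:.1f} mg/Nm3, R^2 = {r2:.3f}")
```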
2021-10-18T17:54:48.578Z
2021-09-28T00:00:00.000
{ "year": 2021, "sha1": "e6f5871a933751733809de998fd7b4b3069bdac6", "oa_license": "CCBYNC", "oa_url": "http://www.eeer.org/upload/eer-2021-277.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "4025118e7d84497332620eef32e6f6ffcc0772dc", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
53086367
pes2o/s2orc
v3-fos-license
Hyper-Process Model: A Zero-Shot Learning algorithm for Regression Problems based on Shape Analysis Zero-shot learning (ZSL) can be defined by correctly solving a task where no training data is available, based on previous acquired knowledge from different, but related tasks. So far, this area has mostly drawn the attention from computer vision community where a new unseen image needs to be correctly classified, assuming the target class was not used in the training procedure. Apart from image classification, only a couple of generic methods were proposed that are applicable to both classification and regression. These learn the relation among model coefficients so new ones can be predicted according to provided conditions. So far, up to our knowledge, no methods exist that are applicable only to regression, and take advantage from such setting. Therefore, the present work proposes a novel algorithm for regression problems that uses data drawn from trained models, instead of model coefficients. In this case, a shape analyses on the data is performed to create a statistical shape model and generate new shapes to train new models. The proposed algorithm is tested in a theoretical setting using the beta distribution where main problem to solve is to estimate a function that predicts curves, based on already learned different, but related ones. Introduction The interpolation among different tasks and extrapolation of knowledge to new unseen tasks is a great challenge in the machine learning community and has been thoroughly explored in the past few decades. More specifically, one of the main areas that had brought significant advances in machine learning, such as the zero-shot learning (ZSL), is the transfer learning area. Contrary to the traditional machine learning setting, the main purpose of transfer learning is to reuse past experience and knowledge to solve the current problem. Normally, machine learning algorithms focus on isolated tasks that cannot be inherently used for other tasks. From all the training data associated with a specific task, the goal of transfer learning is to assist on the learning task for a future problem of interest. Therefore, this kind of domain proposes to solve the problem of transfer knowledge from different, but similar, tasks (Pan and Yang, 2010). Techniques that enable knowledge transfer represent progress towards making machine learning as efficient as human learning (Torrey and Shavlik, 2009). The ZSL topic is framed into the transductive transfer learning setting, where it is considered unsupervised transductive transfer learning. It can be considered as unsupervised because no information (both input and output feature space) is used from the target task for learning, whereas most transductive solutions apply domain adaptation techniques between input feature spaces from both source and target tasks in order to learn a common feature space. In the context of transfer learning, source task is a problem already learned or solved and target task is the future problem to solve. This way, ZSL does not assume any input or output data from the target tasks, making the problem much harder to solve. However, some information should be provided in order to perform knowledge transfer from existing tasks (source tasks) to a new task (target task). This information is normally called side-information and is characterized by meta-information about the tasks themselves. It is based in this side-information that the relation among source and target tasks is learned. 
In sum, the requirements for ZSL are 1) already learned source tasks, e.g. as estimated functions, and 2) task descriptions from both source and target tasks. Hence, ZSL addresses a very specific problem from transfer learning that is very challenging to solve. In the present work we propose a novel algorithm called hyper-process model (HPM) that differs from existing ZSL solutions by presenting a specific implementation for a regression setting. On one hand, most of the existing works in literature only address classification problems, like image or haptic data classification. On the other hand, there are a couple of works that generalize these algorithms to regression problems, but never propose a specific implementation that can truly leverage the properties of regression. This way, HPM performs a shape analysis to data and tries to correlate it with the task descriptions. From this perspective it would be possible to know what are the shape variations that most relate to certain task properties. The intuition behind using a shape analysis is that data itself have interesting properties to leverage that can be used to better understand task relations comparing with general frameworks. Additionally, these general ZSL frameworks make use of model coefficients to learn relations among source tasks, which is dependent on the method used for training that needs to be the same for all source tasks. By analyzing data drawn from trained models instead, this dependency is removed. In our perspective, this is key to improve the performance of ZSL problems for regression. Up to our knowledge, this is something that only has been explored in 2D and 3D computer vision settings, concretely in statistical shape models (SSM), where modes of deformation are learned to understand particular changes in shapes and ultimately allow to generate new shapes based on these modes of deformation. No algorithm in ZSL makes use of such shape analysis either for classification nor regression problems. The major contributions of the present work lie, first, in presenting a clear definition of a regression problem to ZSL. So far, only concrete definitions are available for classification problems and we truly believe that this regression definition to ZSL can help other researchers to easily frame their problems into this topic, and therefore discover already proposed works to assist solving their challenges. Secondly, we present an algorithm that is specific for regression problems in ZSL achieving better results than the ones proposed in general frameworks. Third, we explore the suitability of ZSL for regression in particular applications areas, such as chemistry and cyber security. This work is organized in 4 more sections. Section 2 presents an extended related work and clear roadmap for ZSL area, where the most interesting works are detailed and discussed. Moreover, in Section 3 the proposed HPM algorithm is detailed along with all the methods used to build up the algorithm. In Section 4 a theoretical scenario is explored where the HPM is directly compared with an existing approach from general frameworks. Finally, Section 5 presents an extensive discussion about the benefits of HPM over existing general frameworks, along with other application areas where ZSL can be successfully applied. Zero-Shot Learning One of the most intriguing and fascinating capabilities of humans is to generalize upon multiple and diverse tasks. 
When only presented with few examples, humans can quickly learn particular features of a certain object or task, distinguishing it from different classes of objects. The human capability to generalize allows to extrapolate and infer which kind of physical object it might be from previously seen examples of different object classes. As presented by Biederman (1987), humans have the capability to identify and distinguish about 30,000 objects, and for each of these objects there was no need for showing a million images of the same object in order to recognize and discriminate it from other objects, as it is often required in deep learning approaches such as in convolutional neural networks (CNNs). In fact, a great majority of humans would become confused if such an amount of images of the same object was shown to them. Instead, based on a small amount of images, or even from an object description, humans can generalize by extracting certain features of an object and form high level representations. By relating all the information learned and internal object representations, it is easier to learn from small amount of pictures. This capability to extract particular features and properties of an object and then generalize to other unseen classes of objects is one of the greatest challenges in artificial intelligence nowadays. A definition for this particular type of problem was first presented by Larochelle et al. (2008) where it first called zero-data learning and defines it as follows: "Zero-data learning corresponds to learning a problem for which no training data are available for some classes or tasks and only descriptions of these classes / tasks are given.". Humans can imagine and mentally visualize certain objects when reading a book or an article, or just by thinking about certain past stories. Based on 1) a description of the object and 2) prior knowledge about the world, humans can materialize such imagined objects by drawing, sculpturing or even 3D modeling and recognize these if seen somewhere else. This is the main idea to explore in zero-data learning, that was afterwards named as zero-shot learning. If this description about an object is available, based on all the learning throughout lifetime humans can match their own mental visualization of an object with the physical one, and determine if these are the same or somehow similar in certain features. Normally, in such situations, intuition plays a significant role by matching an already learned object, problem or pattern, and immediately recognizing it without great effort. Such concepts are the ones that ZSL is based on to build a set of algorithms and strategies for machine learning. The main motivation behind ZSL is that, as depicted and explored by Larochelle et al. (2008), the number of tasks is far too large and data for each task is far too little. We have already seen some great advances in artificial intelligence where systems reach superhuman capabilities in very specific tasks. Despite all these great achievements, these are not even close to the generalization of human capability and knowledge transfer from a set of tasks to new unseen ones. ZSL can be one of the tools to achieve such generalization capability. Related Work As already discussed, one of the first works related with the ZSL area is presented by Larochelle et al. (2008) where the authors first make a definition of zero-data learning in order to distinguish their work from others and address specific issues that were not addressed until that time. 
In their work, they present two different approaches to the problem: 1) input space view and 2) model space view. The first approach uses a concatenation of the input x and the task / class description d(z) for a given task z, and by using a supervised learning algorithm train a model f * (.) to predict y z t . Hence, for a new class z * and input x * , one could predict the output by using f * ([x * , d(z * )]). The second approach is more model-driven, and is defined by f z (x) = g d(z) (x). By defining a joint distribution p(x, d(z)) one can then set g d(z) (x) = p(x|d(z)) and learn a probabilistic model that estimates the input x belonging to class d(z). However, a different way to achieve model space view is also presented. This uses the model parameters θ to train a model that maps class descriptions into model parameters. Assuming a family of functions h θ (x) parametrized by θ, if one defines a function q(d(z)) where the output is the same as the parameter space of θ, then the output for a particular x with class description of d(z) is h q(d(z)) (x). With this, the model space view is obtained by f z (x) = h q(d(z)) (x). For testing, 3 different datasets were used: 1) Character recognition in license plates, considering characters from 0 to 9 and A to Z and 3 others accentuated characters, summing up a total of 40 classes and 200 samples per class; 2) Handwritten alphanumeric character recognition with characters from 0 to 9 and A to Z with 39 examples per class; 3) Molecular compound dataset provided by a pharmaceutical company where the main idea is to develop a system that could identify if a molecular compound x t is active y z t = 1 in the presence of a biological agent z for each of the 7 provided agents. The authors have used multiple machine learning techniques to model each of the problems, from support vector machines (SVM) to artificial neural networks (ANN). The results for the character recognition show that the classification error tends to decrease as the number of classes increases, as expected, where the SVM with Gaussian kernel provides almost perfect discrimination of unseen characters. NNet-0-1 yields also good results being the second best technique. As for the handwritten characters all the models have nearly the same behavior by decreasing the error with an increasing number of classes. In both (1) and (2) datasets, there's no clear difference between input and model space view, apart from the SVM rbf that produced near perfect classifications. As for the molecular compound dataset (3), model space view performed better than input space view, performing better than random ranking. Other interesting work that paved the way towards a more formal definition and theory of zero-shot learning is the work of Palatucci et al. (2009). In their work a two-stage approach is presented where the same concept as task / class description is used as before but now called semantic feature space. For their approach, the first stage is related with mapping brain images X d of dimension d into a semantic feature space F p of dimension p, defining the following function S : X d → F p . Then, the second stage is to map this semantic feature space F p into the desired class label Y where another function is defined L : F p → Y . Hence, the main idea is to train a classifier H that can map the input X d into the correct class label Y using the two presented functions, where H = L(S(.)) called semantic output code (SOC) classifier. 
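The two-stage construction H = L(S(.)) can be made concrete with a small sketch. Below, a ridge regression stands in for the mapping S into the semantic feature space and a 1-nearest-neighbour rule stands in for L; the class names, semantic vectors, and data are synthetic placeholders rather than the setup used in the cited work.

```python
# Two-stage SOC-style classifier: S maps x into the semantic feature space, and
# L is a 1-nearest-neighbour rule over class semantic vectors, including classes
# never seen when S was trained. Synthetic, illustrative data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
x_dim, sem_dim = 20, 5

class_names = ["bear", "dog", "cat", "horse"]          # "cat", "horse" are zero-shot classes
semantics = {c: rng.random(sem_dim) for c in class_names}
W = rng.normal(size=(sem_dim, x_dim))                  # fixed map from semantics to inputs

def make_inputs(c, n=40):
    # synthetic inputs loosely determined by the class semantic vector
    return semantics[c] @ W + 0.1 * rng.normal(size=(n, x_dim))

X_train = np.vstack([make_inputs("bear"), make_inputs("dog")])
F_train = np.vstack([np.tile(semantics["bear"], (40, 1)),
                     np.tile(semantics["dog"], (40, 1))])

S = Ridge(alpha=1.0).fit(X_train, F_train)             # stage 1: inputs -> semantic space

def L(f):                                              # stage 2: 1-NN over all class embeddings
    return min(class_names, key=lambda c: np.linalg.norm(f - semantics[c]))

x_query = make_inputs("cat", n=1)                      # input from a class unseen by S
print(L(S.predict(x_query)[0]))                        # nearest class embedding to S(x_query)
```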
The main reason to separate the learning into two stages and avoid training directly a function is that one wants to predict class labels that are not present in the training phase. Therefore, the goal is to train S with a set of inputs that map into certain class labels, and train L with a larger spectrum of class labels. The dataset used contains neural activity observed in 9 different human participants while watching 5 specific words from 12 different categories, summing up a total of 60 words. Two different knowledge bases were created for the semantic feature space for all 60 words, being one based on corpus5000 and the other on human218. As for the first stage, the authors used multiple output linear regression to learn S and 1-nearest neighbor classifier to learn L. As for the experiments, S was only trained with 58 brain images, where a leavetwo-out-cross-validation was performed, and L with all 60 image classes. This resulted in 3,540 comparisons and the approach achieved a performance of 80.9% for the human218 and 69.7% for the corpus5000. Moreover, the inputs for two classes left out of S were used representing a bear and a dog, and the classifier clearly distinguished between the two in almost all the 10 semantic questions selected for discrimination. Finally, the authors expanded the knowledge base used to train L using mri60 (with 60 nouns) and noun940 (940 nouns), and tried to predict the correct word for the held-out input and semantic feature using again both human218 and corpus5000. For this experiment, the authors calculated the median and mean rank accuracy, where for noun940 the median rank accuracy is above 90% and mean about 80% for human218 and for corpus5000 the median is around 79% and mean of 70%. As for the mri60, the median rank accuracy was around 88% and mean 79% for human218 and similar median and mean were obtained for corpus5000. One of the key differences between the presented work and the one presented by Larochelle et al. (2008) is that only the input of a brain image is required in order to classify the corresponding word for SOC, contrary to the need for both input and task description, as seen in the expression used by the authors h q(d(z)) (x). This is one of the greatest advantages of using a two stage approach where classes can be learned in the latent space even when there's no input available for all the classes. However, one of the disadvantages is the training of two different functions where the performance of the function in the first stage greatly influences the performance of the whole classifier, even if the performance of the function in the second stage is good. This error accumulation of from the first to the second stage can invalidate the whole SOC approach for different application scenarios. A similar two-stage approach called cross-model transfer (CMT) was proposed by Socher et al. (2013) where the main idea is to train a model that is able to map image features into a word vector space, and then have a second model that is trained to classify these word vectors into the correct label classes. Again, it is assumed that more classes are present in the second stage rather than in the first. In the first stage, based on the work of Coates and Ng (2011), the authors have extracted a set of unsupervised image features from raw image pixels in order to map these into a semantic space (word vector). Hence, for the semantic space, the authors have used an unsupervised model from Huang et al. 
(2012), which produces a 50-dimensional word vector space. In the second stage, the authors first assess whether the presented image comes from a seen or an unseen class, so that labels can then be chosen based on likelihood. The main motivation for such an approach comes from the analysis of the semantic features, where images from unseen classes are close to related images from seen classes, but not as close as images from the same seen class. One of the main goals of this approach is not only to develop a solution that yields good results for unseen classes, but also to perform well on images that belong to already learned classes. For that, two novelty detection strategies based on outlier detection were applied. The authors tested the approach on two different datasets, namely CIFAR-10 and CIFAR-100. This is one of the earliest ZSL works and most recent methods do not use these datasets, so the reported results are less pertinent in this context than the technique itself. Another work related to this two-stage approach was first introduced by Lampert et al. (2009) and later extended by Lampert et al. (2014), where two different techniques were presented: 1) direct attribute prediction (DAP); and 2) indirect attribute prediction (IAP). The authors propose a probabilistic model to predict the class labels of images whose classes were not seen / used in the training process. Hence, the training classes $Y = \{y_1, \dots, y_K\}$ are disjoint from the test classes $Z = \{z_1, \dots, z_L\}$. For the DAP technique, a probabilistic model estimates the probability of binary-valued attributes given a certain image, so that unseen images at test time can also be projected into this attribute space. Hence, the authors modeled $p(a|x) = \prod_{m=1}^{M} p(a_m|x)$, where $a_m$ is an attribute and $x$ the corresponding image. Moreover, a probabilistic model was also defined to estimate the probability that a certain attribute set belongs to a specific unseen class, $p(z|a) = \frac{p(z)}{p(a^z)}\,[[a = a^z]]$, where $a^z$ is the attribute signature of the unseen class $z$ and $[[\cdot]]$ is Iverson's bracket notation (Knuth, 1992), with $[[P]] = 1$ if condition $P$ is true and 0 otherwise. Combining both stages, the final probabilistic model is expressed as
$$p(z|x) = \sum_{a} p(z|a)\,p(a|x) = \frac{p(z)}{p(a^z)} \prod_{m=1}^{M} p(a^z_m|x).$$
The predictions from image to unseen class were then made using maximum a posteriori (MAP) estimation. As for the IAP technique, instead of a two-stage approach, an additional stage was used. First, a mapping between image and training classes is performed, as in a regular multiclass classifier, estimating $p(y_k|x)$ for each training class $y_k$. Then, a mapping between training classes and attributes is made, $p(a_m|y) = [[a_m = a^y_m]]$, resulting in a model that maps images into attributes as $p(a_m|x) = \sum_{k=1}^{K} p(a_m|y_k)\,p(y_k|x)$. To test these techniques in a ZSL setting, the authors created the now well-known Animals with Attributes (AwA) dataset, composed of over 30,000 images, 50 animal classes and 85 semantic attributes. Additionally, the authors also tested the techniques on existing datasets such as the aPascal/aYahoo and SUN Attributes datasets. As for the experimental setup, on the AwA dataset 40 classes were used for training and the remaining 10 in the test phase (5-fold cross-validation).
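As an illustration of the DAP decision rule described above, the sketch below trains one probabilistic attribute classifier per attribute and scores unseen classes by their attribute signatures under a MAP rule; the synthetic data, attribute signatures and uniform class priors are assumptions made for this example, not the setup used by Lampert et al.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of the DAP decision rule, assuming synthetic features
# instead of the image descriptors used on AwA.
rng = np.random.default_rng(1)
M = 5                                              # number of attributes
X_train = rng.normal(size=(200, 20))
A_train = rng.integers(0, 2, size=(200, M))        # attribute labels of seen-class images

# Stage 1: one probabilistic classifier per attribute, p(a_m = 1 | x).
attr_clfs = [LogisticRegression(max_iter=1000).fit(X_train, A_train[:, m]) for m in range(M)]

# Stage 2: binary attribute signatures a^z of the unseen classes (hypothetical values).
unseen_signatures = {"zebra": np.array([1, 1, 0, 1, 0]),
                     "whale": np.array([0, 0, 1, 0, 1])}

def dap_predict(x):
    """MAP over unseen classes, assuming uniform priors:
    p(z|x) is proportional to prod_m p(a_m = a^z_m | x)."""
    p_attr = np.array([clf.predict_proba(x.reshape(1, -1))[0, 1] for clf in attr_clfs])
    scores = {z: np.prod(np.where(sig == 1, p_attr, 1.0 - p_attr))
              for z, sig in unseen_signatures.items()}
    return max(scores, key=scores.get)

print(dap_predict(rng.normal(size=20)))
```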
For the SUN Attributes dataset a 10-fold cross validation approach was used, meaning that approximately 637 classes were used for training and 70 classes for the test phase. The authors compare both DAP and IAP with two other methods from ZSL. Both have the same principle of training a classifier for the training set, and then estimate the most similar test class by using the trained classifier for the test images. This way, a test image is classified into a train class, and then the most similar test class is chosen using to different similarity criteria: 1) Hamming distance (CT-H) and 2) cross correlation (CT-cc). In the overall cases, the DAP and IAP are better than CT-cc and CT-H, where there is not much difference between both DAP and IAP. The work of Qiao et al. (2016) presents an algorithm that was greatly inspired by the DAP algorithm, where the authors explored the relations and dependence between the attributes to increase the performance of the system. For this purpose, the authors consider a chain of dependent attributes where the joint probability of each attribute for a specific class is calculated, contrary to DAP which calculates the marginal probability. However, due to high amount of attributes it is difficult to calculate these joint probabilities, so first a clustering algorithm is applied to organize attributes into sets. Only after this process these probabilities are calculated for each of the sets. Finally, the classes are predicted using MAP estimation, as used in DAP. As for the datasets used, the authors have tested and compared their approach with the AwA and aPascal-aYahoo. Despite being an interesting work where different properties of attributes were explored, the results did not significantly improve compared with DAP. The best accuracy achieved in the AwA (aPascal-aYahoo) dataset is 44.14% (24.4%) while for DAP the accuracy was 42.5% (22.6%). First introduced in Akata et al. (2013) and then extended and generalized by the same group , the attribute label embedding (ALE) is presented as an alternative that outperforms some of the DAP method limitations (Lampert et al., 2014). The authors state that ALE overcomes the limitations of being 1) a two-stage learning approach for ZSL problem, by 2) assuming the attributes on AwA are independent among themselves and 3) is not extendable to other sources of side information. Regarding 1) the problem is associated with not assuring that both attribute (first stage) and class prediction (second stage) are optimal because the learning process is not performed jointly, but separately. Hence, perhaps the prediction of attributes might by optimal, but not for class prediction. On 2) by assuming that attributes are independent, like "has stripes" and "has paws" for the AwA dataset, no additional information can be leveraged to increase the performance of the system, and they explore a hierarchical method to address such an issue. Finally, limitation 3) is related with only using the attributes available on AwA and no other complementary information such as textual descriptions that can be automatically processed. This last aspect is particularly interesting when little training data is available and other sources might increase the performance of the system. Apart from the proposed ALE algorithm, the main contribution from the authors is a framework for learning label embedding with attributes in a ZSL problem. Hence, first a label embedding should be defined as a set of attribute vectors that correspond to a class label. 
Just as images can have attributes that describe, e.g., stripes and color, the classes themselves can also be described by such attributes. Complementarily to the previously described approaches, this means that both images and class labels have a latent representation of their own, instead of only the input image. We refer to the image latent representation as the image embedding and to the class latent representation as the label embedding. Hence, assuming that the image embedding is defined by $\theta : X \rightarrow \tilde{X}$ and the label embedding by $\varphi : Y \rightarrow \tilde{Y}$, the prediction function can be defined as
$$f(x; W) = \arg\max_{y \in Y} F(x, y; W), \qquad F(x, y; W) = \theta(x)^\top W \varphi(y),$$
where $W$ are the parameters to be learned so that the correct class $y$ is predicted for the input $x$. Based on this, the authors define an optimization problem that minimizes the empirical risk and learns the model parameters so as to maximize the compatibility between image and label embeddings. The label embedding $\varphi(y)$ for each class can also be learned from the data, in the same way as $W$. The label embeddings for all classes are stacked into a matrix $\Phi$, where each row corresponds to one class. The only requirement is that the dimension of the embedding must be chosen, for which a strategy such as cross-validation can be used. Another option is to define the label embedding a priori as side information, as normally happens in the previous algorithms for the image embedding. The authors define the ALE algorithm on top of the presented framework. For that, the existing web-scale annotation by image embedding (WSABIE) algorithm proposed by Weston et al. (2010) was used as a baseline to formulate ALE. For the optimization process, the authors use stochastic gradient descent (SGD), since the objective is not guaranteed to be convex. One of the main contributions of this work is that the ALE algorithm is compared against several other algorithms besides the already mentioned DAP. If one is capable of defining a label embedding, the ALE algorithm can be readily used, since these attributes are treated as side information. Therefore, the authors explore other kinds of embeddings such as the hierarchical label embedding (HLE) first proposed by Tsochantaridis et al. (2005) and the word2vec label embedding (WLE) proposed by Frome et al. (2013). For the ZSL problem, these four algorithms were considered: DAP, ALE, HLE and WLE. Finally, the authors test the algorithms on the AwA dataset and on CUB-200-2011 (CUB) with three different types of label embeddings: 1) attributes that describe each class; 2) a hierarchical structure that represents each class; 3) Word2Vec trained on English-language Wikipedia. For the ZSL experiment on AwA, a 5-fold CV approach was used with 40 classes for training and 10 for testing; for the CUB dataset a 4-fold CV was used with 150 classes for training and 50 for testing. The authors also test three different attribute encodings: 1) continuous values between 0 and 1; 2) binary values of 0 or 1; 3) binary values of -1 or +1. A first assessment using only the ALE algorithm indicates a significant difference between continuous and binary encodings, favoring the continuous encoding. A regularized version of the objective was also used, where it seems that only the $\ell_2$-normalization benefits the performance and not the mean-centering $\mu$ parameter. In a direct comparison with DAP, ALE achieves a better performance, with 48.5% classification accuracy against 40.5% for DAP.
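To illustrate the bilinear compatibility at the core of ALE, the following sketch scores image-label embedding pairs with $\theta(x)^\top W \varphi(y)$ and updates $W$ with a simplified multiclass hinge rather than WSABIE's ranking weights; all embeddings and data are synthetic placeholders introduced for this example.

```python
import numpy as np

# Minimal sketch of the bilinear compatibility used by ALE, with a
# simplified hinge update; embeddings and labels are synthetic placeholders.
rng = np.random.default_rng(2)
d_img, d_lab, n_classes = 64, 85, 10

theta = rng.normal(size=(500, d_img))          # image embeddings theta(x)
y_idx = rng.integers(0, n_classes, size=500)   # ground-truth class indices
Phi = rng.normal(size=(n_classes, d_lab))      # label embeddings phi(y), e.g. attribute vectors

W = np.zeros((d_img, d_lab))
lr = 0.01
for epoch in range(5):
    for x, y in zip(theta, y_idx):
        scores = x @ W @ Phi.T                                  # F(x, y') for all classes
        wrong = max((c for c in range(n_classes) if c != y), key=lambda c: scores[c])
        # Hinge on the margin between the correct class and the best wrong class.
        if scores[wrong] + 1.0 > scores[y]:
            W += lr * np.outer(x, Phi[y] - Phi[wrong])

def predict(x, candidate_embeddings):
    """argmax_y theta(x)^T W phi(y); candidates may be embeddings of unseen classes."""
    return int(np.argmax(x @ W @ candidate_embeddings.T))

print(predict(theta[0], Phi))
```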
As for the comparison between the three proposed embeddings, ALE, HLE and WLE were compared on both datasets. In this experiment, ALE (AwA: 48.5%; CUB: 26.9%) performs better than HLE (AwA: 40.4%; CUB: 18.5%) and WLE (AwA: 32.5%; CUB: 16.8%) on both datasets. Additionally, the authors tested the concatenation of the ALE and HLE embeddings, with which the best results were obtained: 49.4% for AwA and 27.3% for CUB. The work presented by Akata et al. (2015) proposes a new approach called structured joint embedding (SJE). The difference between SJE and the previously presented ALE algorithm lies mainly in the objective function, where the authors prefer the unregularized structured SVM risk
$$\frac{1}{N}\sum_{n=1}^{N} \max_{y \in Y}\left\{0,\; \Delta(y_n, y) + \theta(x_n)^\top W \varphi(y) - \theta(x_n)^\top W \varphi(y_n)\right\},$$
where the loss function $\Delta$ is the same as presented in ALE. For the optimization, SGD is again used, and regularization is performed by early stopping on the validation set of the cross-validation. The authors also present an additional approach based on multiple output embeddings. The algorithm learns the best transformation $W_k$ for each output embedding, and the best class is selected based on a confidence weight assigned to each embedding given the input embedding. Since a certain output embedding can benefit some classes more than others, this approach uses multiple output embeddings and learns the best combination for the provided input embedding. In this case, instead of using equation 2, the authors update the compatibility function to
$$F(x, y) = \sum_{k=1}^{K} \alpha_k\, \theta(x)^\top W_k \varphi_k(y),$$
for $K$ output embeddings, where $\sum_{k=1}^{K} \alpha_k = 1$. As input embeddings, the authors used the CNN features presented in DeViSE (Frome et al., 2013), the Fisher vectors (FV) used in ALE (Akata et al., 2013) and features extracted from GoogLeNet (GOOG) (Szegedy et al., 2015). As output embeddings, human-engineered attributes, hierarchical features and text-corpora embeddings, also used in ALE, were employed. The datasets used were AwA, CUB and Stanford Dogs (Dogs), where Dogs has no human-annotated attributes. One of the main conclusions is that human-engineered attributes significantly increase the performance of the system, and for all datasets the combination of unsupervised and supervised embeddings performed better than either alone. This means that including unsupervised features can greatly benefit the ZSL setting. The approach presented by Xian et al. (2016) is called latent embeddings (LatEm) and is a direct extension of SJE in which a nonlinear, piece-wise compatibility function is explored, as opposed to the linear one used in SJE. This nonlinear compatibility is obtained by learning a collection of linear models, where each linear model maximizes the compatibility among a subset of image-class embedding pairs. For the optimization routine, SGD is used as in SJE. The authors present two different approaches to choose the number of linear models $K$ and select the best model: 1) finding the best $K$ with a cross-validation strategy by trying out 2, 4, 6, 8 and 10 linear models; and 2) a novel pruning-based strategy. While the first approach is relatively straightforward, the intuition behind the pruning approach is that linear models that do not frequently maximize the compatibility between input and output embeddings are of little importance and do not increase the performance, while increasing the complexity.
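A short sketch of the piece-wise linear compatibility that LatEm optimizes may help here: each latent matrix $W_k$ is one linear model and the max over $k$ yields the nonlinearity. The matrices and embeddings below are random placeholders, and the SGD update is only indicated in a comment; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the LatEm piece-wise linear compatibility
# F(x, y) = max_k theta(x)^T W_k phi(y), with synthetic embeddings.
rng = np.random.default_rng(3)
d_img, d_lab, K = 64, 85, 4

Ws = [rng.normal(scale=0.01, size=(d_img, d_lab)) for _ in range(K)]

def compatibility(x, phi_y):
    """Each latent matrix W_k captures one 'visual style'; the max selects the best one."""
    return max(float(x @ W @ phi_y) for W in Ws)

def predict(x, label_embeddings):
    return int(np.argmax([compatibility(x, phi) for phi in label_embeddings]))

# During training, only the W_k attaining the max for a given pair would be
# updated by SGD, which is also what the pruning strategy monitors: matrices
# that rarely attain the max can be dropped.
x = rng.normal(size=d_img)
Phi = rng.normal(size=(10, d_lab))
print(predict(x, Phi))
```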
The greatest benefit of the pruning approach when compared with cross validation is that only one model needs to be trained due to its adaptation during time to choose the best K value. The datasets used are the same as the ones used in SJE (AwA, CUB and Dogs) with both supervised and unsupervised embeddings so a direct comparison between the two approaches was made. By only using one type of embedding at a time and not making any sort of combination, the LatEm approach surpasses the SJE in all embeddings for all datasets, but for CUB with human annotated attributes. However, the best results are reported when the authors combine all the unsupervised embeddings with the supervised ones. Some other interesting works were also proposed for the image classification problem in ZSL and worth mentioning, such as the deep visual-semantic embedding model (see Frome et al., 2013), the joint latent similarity embedding (JLSE) (see Zhang and Saligrama, 2016), the convex combination of semantic embeddings (CONSE) (see Norouzi et al., 2014), the semantic similarity embedding (SSE) (see Zhang and Saligrama, 2015), the embarrassingly simple approach to zero-shot learning (ESZSL) (see Romera-Paredes and Torr, 2015), the synthesized classifiers (SYNC) (see Changpinyo et al., 2016), the semantic autoencoder for zero-shot learning (SAE) (see Kodirov et al., 2017), the simple exponential family framework (GFZSL) (see Verma and Rai, 2017), the zero-shot classification with discriminative semantic representation learning (DSRL) (see Ye and Guo, 2017), the feature generating networks (FGN) (Xian et al., 2018b) and the gaze embeddings (GE) for zero-shot image ilassification (Karessli et al., 2017). For a comprehensive survey of ZSL methods for image classification please refer to the work presented by Xian et al. (2018a). One of the most interesting applications of ZSL outside image classification domain is related with object identification using haptic devices presented by Abderrahmane et al. (2018). In their work, the authors use the DAP algorithm to recognize a set of objects by grasping those with a robotic hand with tactile fingertips. The main idea behind the ZSL setting is to be able to correctly recognize an object that the system was not trained for. Hence, from cutaneous and kinesthetic information of the robotic hand, the system should correctly say that the object it is holding is, e.g. a plastic bottle, lamp or cup of tea, without any prior information about this specific object. For that, the authors used the attribute-based approach also presented in DAP, where, in this case, classes have associated a set of attributes that describe the object. In order to test the proposed approach, the authors use a PHAC-2 dataset containing information about 60 different objects Y , with 24 annotated attributes A describing those objects and both haptic X b1 and kinesthetic X b2 information. The results show a classification accuracy 39% over a 5 random splits with 50 objects in the train set and 10 on the test set. Complementary to these results, the authors have set up a experiment with a real robotic hand with the same kind of sensing devices described before, where 20 objects were used together with 11 attributes that describe each object class. 
The tests were performed using three different methods: 1) local DAP (LDAP) where only one grasp was made; 2) data-fusion multi-grasp DAP (DF-MDAP) where multiple grasps were performed and "super-grasp" was calculated based on the mean values from all grasps; 3) similar classification for multi-grasp DAP (SC-MDAP) performs multiple grasps, and for each one gets a classification using LDAP until an object is classified with the same label k times. The results show that SC-MDAP is the best approach for this setting, followed by SC-DAP and then LDAP, leading to almost 100% accuracy in the test set with only 4 to 5 grasps. In line with the previous work in the sense that task descriptors are used is the one presented by Isele et al. (2016) where the model parameters of a policy based approach in a reinforcement learning (RL) setting is predicted based on a set of defined task descriptors. This work makes use of the same principle as Larochelle et al. (2008) and Pollak and Link (2016), but the methods used to achieve it are different. The main goal of the present work is to jointly learn a sparse encoding of both model parameters θ (t) from a policy π θ and task descriptors φ(m (t) ) in an latent representation, where m (t) is the task description for task t. Hence, in order to learn this sparse encoding the authors defined policy parameters as θ (t) = Ls (t) and the encoding of task descriptions as φ(m (t) ) = Ds (t) . Both L and D should be learned in order to reconstruct back the θ and φ(m (t) ), where s ∈ S should be a shared coefficient. To this joint learning the author call coupled dictionary learning and to the whole algorithm task descriptors for lifelong learning (TaDeLL). The rational behind such algorithm is that similar task descriptions have similar policies, so information can be learned from these two different spaces. Therefore, the authors perform an adaptation to the policy gradient (PG), first introduced by Sutton et al. (2000), where both L and D parameters are optimized. The authors perform a set of tests in three different simulated environments: 1) spring mass damper (SM); cart pole (CP); and 3) bicycle (BK). For each of the domain 40 different tasks were tested, where another 20 tasks were used to tune the regularizers values. The TaDeLL algorithm was compared with other algorithms, such as PG-ELLA, GO-MTL, single-task learning using PG. Additionally, the authors also tested a different version of TaDeLL called TaDeMTL where the learning is performed in a offline multi-task learning fashion. From the tests performed, TaDeLL performed the best in all the testing scenarios, representing a promising approach for the RL field, and more specifically for lifelong learning. Ultimately, the paper presents a general framework that is extensible to classification and regression problems by using their learning algorithm. As previously described, this kind of approach already explored first by Larochelle et al. (2008) and around the same time by Pollak and Link (2016), where a model of models was built making use of the model parameters applied to industrial scenarios. Hyper-Process Modeling So far, we have seen ZSL approaches that try to solve the problem of classifying new instances from classes that were not used in the training process. This means that a trained algorithm tries to correctly label a new instance from a class without being trained to do so. 
One of the key aspects for achieving good performance is to have an additional feature space (often called the latent space) that describes each task, normally a meta-description of each task, apart from the input and output feature spaces. This is one of the properties that inspired the development of the proposed approach. Despite the good results achieved by the works presented in the previous section, none of them is designed for, or applied to, regression problems. Regression maps inputs into a set of continuous output variables, while in classification the output is a discrete label, or a value between 0 and 1 in probabilistic models. Nevertheless, regression is used in these works as a tool, e.g. to map the inputs into a continuous latent space such as the coefficients of a linear classifier, as presented in Larochelle et al. (2008). However, the target application is never regression itself. Most of the works deal with classification problems ranging from image classification to molecular compound matching or object classification from haptic data. Here, we present an approach called the hyper-process model (HPM) that addresses the ZSL problem for regression. It should be stated, though, that some of the works presented in Section 2 are general enough to be used in regression applications, but they were not designed to take advantage of its inherent properties, such as continuous output variables. For the remainder of this section, we first define the ZSL problem in a regression setting, then describe the methods used to build the HPM, and finally present and describe the proposed algorithm.

Problem Definition

As a first step, we would like to define the ZSL problem for regression. To the best of our knowledge, this is the first work that makes such a definition for regression. In image classification, most ZSL techniques take advantage of the differences between input images, which is natural when two different objects are displayed. Denoting the inputs of a certain class / task $i$ as $X_i \in X$, we can say that these techniques assume $P(X_i) \neq P(X_j)$, i.e. the marginal distributions of the inputs are not the same among classes. This means that the differences between images can be learned to separate instances of different classes. Contrary to this, for ZSL in regression problems the inputs of different tasks could be the same while the responses differ according to the specific task. For example, the amount of traffic in two parts of a city can be the same, $P(X_i) = P(X_j)$, where $i$ and $j$ represent different parts of the city, but the air quality might be different because of different amounts of vegetation: if one part of the city has more vegetation, the air quality is higher, and vice versa. Most of the works first try to map the input into a latent space, normally a task descriptor, which can be generally expressed as $G : X^n \rightarrow F^k$, where $X \in \mathbb{R}^n$ are the inputs and $F \in \mathbb{R}^k$ are the task descriptors. In order to successfully learn the differences between tasks or classes, there should exist some difference between the task inputs, like cubes and spheres, or cats and houses. Therefore, the assumption $P(X_i) \neq P(X_j)$ is implicit in the context of image classification, but might not hold true for regression.
Hence, this draws the first difference between ZSL works for classification and regression, where it is not assumed that the marginal distribution of inputs from different tasks is different, and hence the proposed technique is applicable for problems where P (X i ) = P (X j ). Additionally, another key difference between ZSL for classification and regression is that multiple image classes are learned at the same time as a multi-task learning fashion. For the particular case of ZSL in classification, we have already seen from Section 2 that this learning normally occurs in two different steps: 1) Learning a mapping between inputs and task description, and 2) Task description into class labels. This means that only one classifier should learn the differences between images and correctly predict the corresponding task description, and also a classifier that handles the predicted task description and correctly classifies it into the desired class labels. Ultimately, the final goal of ZSL in classification is to provide a new unseen image and correctly predict the label from a class not used in the learning process. Opposite to this idea, for the regression setting, the main idea is to build a whole new predictive function suitable for the new unseen task, where multiple inputs can be fed as a regular regressor. Therefore, for each source task, a regressor needs to be previously learned and together with the task description, a new function should be derived for a target task. The only work that uses the same approach is the already depicted technique called model space view presented by Larochelle et al. (2008). Additionally, the same principle was applied to solve a concrete problem in the area of manufacturing systems named hypermodel (HM) (Pollak and Link, 2016). Despite these techniques being in fact applicable for regression, the proposed approach overtakes some limitations of such techniques. These will be presented later in this section, and will be highlighted and explained with a theoretical example. In sum, we can define the ZSL for regression problem in the context of this work as the generation of a predictor that can be used in a new, unseen task, based on 1) task descriptions for both source and target tasks and 2) a set of predictors, one for each source task. Hence, we should define a task description as c i ∈ C for task i, where C is defined as all the source task descriptions; and the predictors as f i ∈ F , where F is defined as a set of functions. For the latter, we should define a function as f i : X → Y , where X and Y are the input and output feature spaces, correspondingly. Additionally to all of this, we should also define a function that maps the task descriptions into a latent space L : C → Z p , where Z is the latent space in a p-dimensional space. If each of the predictors of the source tasks has a set of trained parameters θ and all the predictors have the same number of parameters, this approach would be identical to the one presented by Larochelle et al. (2008) where Z p represents the same as θ, so the parameters of the new function θ would be predicted by L providing the target task description c t , being t the target task. However, the key difference between the proposed approach and the one presented by Larochelle et al. (2008) is that the feature space Z p is not the a set of function coefficients. 
In the proposed approach the feature space is independent of the function coefficients and, in fact, it is not assumed that the number of coefficients is the same for all the functions used to learn the source tasks. For example, in order to learn a predictor that maps task descriptions into function coefficients, one has to choose a single type of machine learning technique, such as a degree-2 polynomial, to train all the source tasks. In Larochelle et al. (2008) and Pollak and Link (2016), this implies that all source tasks are trained using the same technique, which excludes the possibility of using the best machine learning technique for each source task. We interpret this as a limitation, since different tasks might have different complexities, and therefore certain types of functions might be more suitable for some tasks and not for others. In the proposed approach we make use of a widely known technique from the computer vision area to address this limitation and create a common feature space across different machine learning techniques. For a more complete explanation, Figure 1 makes a visual comparison between the two approaches as a way to clearly distinguish ZSL for regression from ZSL for classification, in particular image classification.

Figure 1: Comparison between ZSL for image classification and regression. a) Case where a latent representation for both images and classes is used, and a compatibility between these is learned (Akata et al., 2015). b) Case where multiple models are used to learn a hyper-model that maps model coefficients θ into task descriptions C. Upon new task descriptions c′, new model coefficients θ′ can be estimated and a new model is created (Pollak and Link, 2016). This representation is also applicable to the model space view approach from Larochelle et al. (2008).

On the left-hand side of Figure 1 is a representation of the SJE approach (Akata et al., 2015), which makes use of two latent spaces, namely image embeddings and class embeddings, as presented in Section 2.1. In this setting, the main idea is to present an unseen image from a class unseen during training and correctly estimate its label. Contrary to this, the goal of ZSL for regression is to estimate a new model by making use of an unseen task description and previous knowledge about already existing models. Particularly for the hyper-model approach, this learning is simply the mapping between the coefficients and task descriptions of the source models used to estimate the target model. On the right-hand side, two stages can be clearly seen: one is related to training the source models to derive the best model coefficients θ for each task, and the other is to train the hyper-model using those coefficients and the existing task descriptions. Once a new task description is available, the model coefficients θ′ can be estimated and a new function F(x_l, y_l, θ′) can be used, where x_l are new input values used to predict y_l (the orange boxes at the bottom represent the newly generated function). Hence, this visual separation allows us to clearly draw the main differences between the classification and regression settings for ZSL, where one tries to label unseen instances of a class not used during training, and the other tries to estimate a whole new function based on previously acquired knowledge of existing functions and task descriptions. In the next two subsections, we present the two methods used to build the HPM.
First we introduce the hyper-model concept proposed by Pollak and Link (2016) for process models in manufacturing applications, and then present the statistical shape model (SSM) first proposed by Cootes et al. (1995) for image segmentation. Ultimately, the HPM can be viewed as an extension of the hyper-model itself, hence its name.

Hyper-Model

The hyper-model concept was introduced by Pollak and Link (2016), where a model of models is built and applied to industrial scenarios. Complementarily, the authors introduce the notion of conditions, which are fixed quantities that govern a certain industrial process, such as the sheet-metal thickness in welding or deep-drawing processes. This concept of condition is what defines each task in the context of ZSL, where different conditions mean different tasks. Assuming that a model is a set of base functions that transforms a certain input into an output, a model always has an associated condition that quantitatively describes the task to learn. Based on this, the main idea is to build a hyper-model that can generate models for a whole continuum of conditions, by mapping the model coefficients of the base functions to the set of conditions. This way, by providing a set of new conditions, it is possible to derive a new set of coefficients and build a new model for prediction in the context of those conditions only. As one might have realized by now, this approach is agnostic to whether the problem is regression or classification: as long as the coefficients of the base functions and the conditions are available, the hyper-model can be applied to both settings. Defining the conditions as $\varsigma_n \in \mathbb{R}^c$ for the $n$-th model, with $c$ condition features, and a model as $z_n = f_{\lambda_n}(x)$ that maps an input $x$ into an output $z_n$, where $\lambda_n$ are the process-model coefficients that build $f_{\lambda_n}$, it is possible to create a hyper-model. Thus, each process model can be represented as a linear combination of some base functions $\varphi$ weighted by the process-model coefficients,
$$z_n = f_{\lambda_n}(x) = \sum_i \lambda_{n,i}\, \varphi_i(x).$$
Based on this, the hyper-model relates the model parameters to the conditions through a mapping
$$\varsigma_n = g_\beta(\lambda_n),$$
where $\beta$ are the hyper-model coefficients. Ultimately, the hyper-model can likewise be expressed as a linear combination of some base functions $\Psi$ weighted by the hyper-model coefficients,
$$\varsigma_n = g_\beta(\lambda_n) = \sum_j \beta_j\, \Psi_j(\lambda_n).$$
This way, we formulate the problem as finding the hyper-model coefficients that define a transformation function mapping the model coefficients $\lambda$ to the conditions $\varsigma$. However, as the authors state, a necessary condition for this formulation of a hyper-model is a homogeneous representation of all involved task functions, meaning that the base functions used for the models need to be the same, such as using a degree-2 polynomial for all models. This means that if we want to use different base functions for different tasks, we need to find a way to bring the model coefficients into a common representation. The method proposed to tackle this limitation lies in the idea of not using the coefficients $\lambda$ of the models explicitly but, instead, taking a step back and using the data directly. As the most suitable machine learning technique depends strongly on the data, if two datasets have different properties, different techniques might be applied, such as support vector regression on one dataset and linear regression on another.
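Before addressing that limitation, the sketch below illustrates the hyper-model idea with a shared polynomial basis across all source tasks; for simplicity it maps conditions directly to coefficients (the special case also used later in the beta-distribution experiment, which avoids the inversion step), and all tasks, conditions and noise levels are synthetic assumptions made for this example.

```python
import numpy as np

# Minimal sketch of the hyper-model (HM) idea with a shared basis: every source
# task is fitted with the same degree-2 polynomial, and a second regression maps
# conditions to those coefficients. All data are synthetic placeholders.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 20)
conditions = np.array([[1.0], [2.0], [3.0], [4.0]])          # one condition per source task

# Source tasks: y = c * x^2 + noise, so the true coefficients depend on the condition.
lambdas = []
for (c,) in conditions:
    y = c * x**2 + rng.normal(scale=0.01, size=x.size)
    lambdas.append(np.polyfit(x, y, deg=2))                   # shared base functions
lambdas = np.array(lambdas)                                   # (n_tasks, 3)

# Hyper-model: linear map from conditions to model coefficients (least squares).
A = np.hstack([conditions, np.ones((len(conditions), 1))])    # add bias term
beta, *_ = np.linalg.lstsq(A, lambdas, rcond=None)

# Generate a model for an unseen condition and use it as a regular predictor.
target_condition = np.array([2.5, 1.0])                       # condition 2.5 plus bias
new_coeffs = target_condition @ beta
print(np.polyval(new_coeffs, 0.5))                            # prediction for x = 0.5
```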
As stated by Wolpert and Macready (1997) from the no free lunch theorem, "if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems". Hence, if different datasets have different types of complexities and properties, the same technique will perform well in some of these datasets, and worse in others. In this context, the optimal solution for such a problem is to use the best algorithm possible for each dataset and takes advantage on this to build the hyper-model. This means that we should use a technique that makes different techniques comparable among themselves and bring them to the same level of abstraction. To that intent, our proposal is to use directly the data instead of the model parameters. In the next subsection we will detail the SSM approach that is suitable to explore the properties of data and therefore, a suitable candidate, but not the only one, to achieve this common representation to train the hyper-model. Statistical Shape Modeling The statistical shape model (SSM) (Cootes et al., 1995) is a widely used technique for image segmentation that analyzes the geometrical properties of a set of given shapes or objects by creating deformable models using statistical information. As a mathematical transform to be applied to these set of shapes, the most common techniques are principal component analysis, approximated principal geodesic analysis, hierarchical regional PCA (Mesejo et al., 2016) and singular value decomposition, where non-affine modes of deformation are calculated. The problem definition for this area of research is framed as a maximization problem to overlap a deformable model in the object / region of interest. The overall idea behind this method is to obtain the optimal affine (rotation, scale and position) and non-affine transformation parameters where a deformable model is built and matches a segment of an image. The same way this method assumes that there exist specific shape variations and these can be quantified forming a deformable model, is the same way that we assumed that these variations also exist in different tasks, and a deformable model for a set of tasks can be derived. There exist multiple recent examples that use such an approach, most commonly in medical imaging, like Shakeri et al. (2016) that proposes a novel approach of groupwise shape analysis that is able to analyze two study groups (healthy and pathological). The main idea is perform a morphological study to predict neurodevelopmental and neurodegenerative diseases, such as Alzheimer's, by quantifying sub-cortical shape variations by using statistical shape analysis. Another example is presented by , where the authors used the SPHARM-PDM (SPherical HARMonic) framework, introduced by Styner et al. (2006), to analyze 20 patients undergoing bilateral sagittal split osteotomy. In a more formal way, a shape can be defined as "all the geometrical information that remains when location, scale and rotational effects are filtered out from an object" (Dryden and Mardia, 1998). Therefore, the information about a shape can be represented in different formats such as landmarks, parametric description and deformation-based (Zhang and Golland, 2016). On one hand, landmarks are a set of points used to describe the shape reliably, where methods can be used to calculate these automatically or be manually annotated, e.g. in a CT scan. 
On the other hand, parametric descriptors are functional approximations of the shape, thereby reducing shapes to a set of coefficients. Finally, the deformation representation is based on shape matching between images and a template, considering smoothness constraints in the deformation field. Defined as a point of correspondence on each object that matches between and within populations (Dryden and Mardia, 1998), a landmark is a point on a shape that has a direct correspondence in all the other shapes used to build a deformable model. It is this correspondence between points that allows the statistical processing to calculate the deformation of each shape relative to the mean. Normally, the landmarks are described as a $kn$-element vector $x$, e.g. $x = [x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_n]^T$; in this case the dimensionality of the landmark representation space is $k = 2$, and $n$ is the number of landmarks of a given shape. Hence, to create a deformable model we need multiple shapes, with $N$ being the number of shapes available for analysis. Here we consider a shape as belonging to a specific model or task, and the landmarks of each shape are the instances of a dataset, where both input and output features can be included. Therefore, each shape is the dataset used to build a model. However, since we cannot ensure beforehand that all the collected datasets have the same size for each task, we also cannot assume that the datasets used to build the models are suitable for creating the deformable model (remember that all the shapes need to have the same number of landmarks and these should match each other). Instead, we can take advantage of the generalization capability of the models and sample a dataset per model, which will be considered a shape. This guarantees that all the shapes have the same number of landmarks. For this, we need to ensure that the inputs provided to all the models are the same, to guarantee the consistency of all shapes. Again, remember the assumption made in the present work, $P(X_i) = P(X_j)$, that the distribution of the input feature space is the same for all models, and hence it is valid to use the same input values to draw a new dataset from each model. Consequently, to build the deformable model only the output data should be used, since no information is gained from using the input data. Nevertheless, the proposed HPM algorithm can be generalized to regression problems with $P(X_i) \neq P(X_j)$, where the input data can be included as long as the landmarks match each other. In order to build the deformable model, a mathematical transformation needs to be learned. One of the common techniques used to learn this transformation is principal component analysis (PCA), introduced by Flury (1988), which assumes a multivariate Gaussian distribution. We describe the decomposition process using PCA, which comprises the following steps:

1. Calculate the mean shape: $\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i$, where $N$ is the number of shapes;

2. Calculate the covariance matrix: $C = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})^T$;

3. Calculate the eigenvalues and eigenvectors of $C$: $C \varphi_k = \lambda_k \varphi_k$, where $\varphi_k$ are the eigenvectors and $\lambda_k$ the eigenvalues.

When there are fewer instances in the dataset than dimensions, the eigenvectors and eigenvalues can be calculated more efficiently. Writing $C = \frac{1}{N} D D^T$, where $D$ is the matrix whose columns are the mean-centered shapes, one first solves the smaller problem $\frac{1}{N} D^T D\, q_k = \mu_k q_k$; multiplying both sides by $D$ we obtain $\frac{1}{N} D D^T (D q_k) = \mu_k (D q_k)$, from which we infer that $\varphi_k = D q_k$ and $\lambda_k = \mu_k$.
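A minimal NumPy sketch of these decomposition steps, using a handful of synthetic shapes, may make the procedure easier to reproduce; the shape data and dimensions are illustrative assumptions, and the cumulative-variance cut-off is only one possible way of choosing the number of components.

```python
import numpy as np

# Minimal sketch of the shape decomposition steps above, using a few synthetic
# shapes (each shape is a flattened vector of landmark values).
rng = np.random.default_rng(5)
N, kn = 6, 40                                    # number of shapes, landmarks * features
shapes = rng.normal(size=(N, kn))

mean_shape = shapes.mean(axis=0)                 # step 1: mean shape
D = (shapes - mean_shape).T                      # kn x N matrix of centered shapes
C_small = (D.T @ D) / N                          # step 2 via the small N x N problem
mu, q = np.linalg.eigh(C_small)                  # step 3: eigen-decomposition

# Recover the eigenvectors of the full covariance: phi_k = D q_k, lambda_k = mu_k.
order = np.argsort(mu)[::-1]
lam = mu[order]

# Keep enough components to explain roughly 95% of the variance.
keep = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95)) + 1
Phi = D @ q[:, order[:keep]]
Phi /= np.linalg.norm(Phi, axis=0)               # normalize the retained modes of variation

# Deformable parameters of one shape and its reconstruction from the model.
b = Phi.T @ (shapes[0] - mean_shape)
reconstruction = mean_shape + Phi @ b
print(keep, np.linalg.norm(reconstruction - shapes[0]))
```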
The result of PCA is a set of non-negative eigenvalues, sorted in decreasing order, that represent the significance of each eigenvector (principal component). These eigenvectors are the modes of variation that allow the model to be deformed. With the most significant eigenvectors calculated, we can approximate any training shape $x$ using
$$x \approx \bar{x} + \Phi b,$$
where $\Phi$ is the matrix of retained eigenvectors and $b$ are the parameters of the deformable model, given by
$$b = \Phi^T (x - \bar{x}).$$
Varying $b$ changes the shape of the deformable model, and the parameters are normally constrained to $\pm 3\sqrt{\lambda_i}$ so that the generated shapes remain similar to those present in the original dataset. Based on this, the common representation used in the hyper-model are the $b$ values that allow the original shape to be reconstructed, instead of the model coefficients themselves. We consider the use of the statistical shape model concept the key to exploiting the particular properties and complexity of each dataset. The approach taken to learn the deformable parameters for each shape can be seen as an unsupervised way to create a common space in which multiple tasks have the same representation. If one uses different techniques to model the datasets as a way to increase generalization, the most natural next step is to find a way to translate the different representations of model coefficients into the same common space. Although different approaches exist to create a common feature space from different models in an unsupervised way, generating data from the trained models to form shapes, building a deformable model and ultimately using the parameters of this deformable model seemed the most effective and, above all, flexible way of creating this common representation. In the following subsection the proposed algorithm is detailed, gluing together the presented hyper-model and statistical shape model methods.

Proposed Approach

The main intention of the present section is, first, to clearly present the full hyper-process model (HPM) algorithm, from the use of models trained with different techniques to the final estimation of the new model to be used as a predictor for a new task. Secondly, it is intended to be reproducible by other researchers, so a step-by-step description of the algorithm is presented and explained. For that, most of the equations, notations and notions presented earlier are used, the algorithm description being just an organized way to present the approach. Hence, Algorithm 1 presents all the steps required to implement the solution in different application contexts. The first thing to notice is that the algorithm itself is divided into two parts, as in the previous two subsections. This is intended so that readers can easily relate to what was explained before and quickly find the content associated with each technique. The algorithm starts by introducing all the parameters necessary for its execution. As described, all the trained models are required, along with the corresponding conditions (which are the task descriptions in ZSL terms). Moreover, the target condition is required in order to generate the new model. Additionally, one should also specify the number of landmarks to use for each shape, together with two more vectors that define the minimum and maximum values of the input feature space. These minimum and maximum vectors are required so that the input values used to sample from the trained models can be generated.
Since we are assuming $P(X_s) = P(X_t)$, only one such vector of each is required, and it is used across all source tasks to generate the shapes. Finally, we assume there are $m$ trained models to deal with. In this algorithm, the SSM comes first because the hyper-model depends on the common representation of the models in order to be trained. Hence, the first step (line 3) is to generate the input values $X$ according to the minimum and maximum values and the intended number of landmarks per shape. Since we assume that no information can be drawn from the differences between the inputs of the various models (as stated by $P(X_i) = P(X_j)$), the same input values are used for all models. Therefore, the shapes $S_i$ are built considering only the values of the output feature space, as presented in line 5, where $i$ denotes a specific model.

Algorithm 1 Hyper-Process Modeling
1: procedure HPM(F, ς, ς′, n, min, max) (F is a set of source models; ς is the set of conditions associated with each source model; ς′ is the target condition to be used for model generation; n is the number of data points per shape; min and max are vectors of size r (assuming X_i ∈ R^r) with the minimum and maximum values for the input features, correspondingly. Finally, m is the number of source models.)
2:   Statistical Shape Model:
3:   Define the input to sample from existing models: X ← GenerateInput(min, max, n)
4:   for i = 1 to m do
5:     Get shape from the sampled outputs of each source model: S_i ← f_i(X)
6:   Calculate the mean shape S̄
7:   Apply PCA to the stacked shapes S to obtain the eigenvectors φ
8:   Get the deformable parameters for each shape: b_i ← φ^T(S_i − S̄)
9:   Hyper-Model:
10:  Train the hyper-model mapping deformable parameters into conditions: h : b → ς
11:  Get the deformable parameters for the new shape: b′ = h⁻¹(ς′)
12:  Get the new shape: S′ = S̄ + φb′
13:  Train a model for the new task: f′ : X → S′
14:  return f′

The next step is to calculate the mean shape from all the generated shapes (line 6), using equation 10. In order to obtain the eigenvectors that build the deformable model, a decomposition is performed on all the generated shapes by applying PCA (line 7). One should emphasize again that each shape is a vector of $kn$ elements, where $k$ is the number of features and $n$ the number of landmarks. Therefore, PCA is performed on an $m \times kn$ matrix $S$ composed of all the shapes from the source models, stacked as rows. Finally, the last step of the SSM part is to derive the deformable parameters of all the models (line 8), using equation 17. These are the parameters required to regenerate the initial shape based solely on the deformable model. In order to obtain a good shape reconstruction, the number of components chosen when performing PCA is critical, being a trade-off between reconstruction quality and complexity. On the one hand, if few components are chosen, the reconstruction error will be larger, but fewer dimensions are required and the problem is less complex. On the other hand, if all components are chosen, the reconstruction error is minimal, but the complexity of the problem may become too great to deal with. In these situations, a good rule of thumb is to use the number of components (ordered by decreasing explained variance) that accounts for a cumulative variance of at least 95%. After building the deformable model, together with all the deformable parameters, the hyper-model is ready to be trained. For this, as presented by Pollak and Link (2016), one should train a hyper-model using any machine learning technique that seems suitable for the problem, mapping deformable parameters into conditions. One might think at this stage that it would be more suitable to map conditions into deformable parameters instead, because the trained model could then directly predict the parameters for new conditions. However, in most cases the dimension of the deformable parameters is greater than that of the conditions, so the modeling needs to be done according to line 10.
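To tie the pieces together, the following is a minimal end-to-end sketch of Algorithm 1 on synthetic tasks; for simplicity the hyper-model maps conditions directly to deformable parameters (the special case discussed next), so no inversion step is needed, and the source models, conditions and the interpolation-based final predictor are illustrative assumptions rather than the original implementation.

```python
import numpy as np

# Minimal end-to-end sketch of Algorithm 1 on synthetic tasks. For simplicity the
# hyper-model maps conditions directly to deformable parameters, avoiding inversion.
rng = np.random.default_rng(6)

conditions = np.array([0.5, 1.0, 2.0, 3.0])               # one condition per source task
X = np.linspace(0.01, 0.99, 50)                           # line 3: common inputs (n landmarks)

# Source models: any technique per task; here each is a fitted polynomial whose
# degree differs between tasks, standing in for heterogeneous techniques.
source_models = []
for i, c in enumerate(conditions):
    y = np.sin(c * np.pi * X) + rng.normal(scale=0.02, size=X.size)
    source_models.append(np.poly1d(np.polyfit(X, y, deg=3 + i % 2)))

S = np.array([f(X) for f in source_models])                # line 5: shapes = sampled outputs
S_mean = S.mean(axis=0)                                    # line 6: mean shape
U, s, Vt = np.linalg.svd(S - S_mean, full_matrices=False)  # line 7: PCA via SVD
Phi = Vt[:2].T                                             # retain 2 modes of variation
B = (S - S_mean) @ Phi                                     # line 8: deformable parameters

# Line 10 (simplified): per-component linear regression from conditions to b.
A = np.vstack([conditions, np.ones_like(conditions)]).T
coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)

target_condition = 1.5
b_new = np.array([target_condition, 1.0]) @ coeffs         # line 11 (direct mapping)
S_new = S_mean + Phi @ b_new                               # line 12: new shape

# Line 13: define the new predictor for the target task from the generated shape.
def f_new(x_query):
    return np.interp(x_query, X, S_new)

print(f_new(0.5))
```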
Only in the cases where 1) the dimension of the parameters is equal to or lower than that of the conditions, or 2) multiple models are trained as the hyper-model, each one predicting a single output variable, can the model be trained directly as $h : \varsigma \rightarrow b$. The implication of building a hyper-model that maps deformable parameters into conditions is visible in line 11, where the technique used needs to be invertible in order to obtain the new deformable parameters for the specified new conditions. As an alternative, the level set where the model surface intersects the hyper-plane of the intended target condition can be calculated, as performed in the work of Pollak et al. (2011), or a minimization problem can be formulated in which the distance between the predicted and target conditions is minimized. Once the deformable parameters are obtained from the hyper-model according to the target conditions, the next step is to generate a new shape based on equation 16, as presented in line 12. The last step is to train a model that maps the initially generated input values into the generated shape, which corresponds to the output values for that specific condition. Although it is out of the scope of the present work, we would like to introduce a new version of the HPM algorithm for the case where $P(X_i) \neq P(X_j)$, detailed in Algorithm 2. Under this assumption, we could learn new information not only about the various output feature spaces of the different tasks, but also about the input feature spaces. The only restriction of this approach is that the input feature space must be the same across tasks, $X_i = X_j$, even though different distributions may be assumed. We consider this algorithm an expansion of the previous one towards a more general and broad application. Hence, we call this algorithm HPM2, not only because it is the second version of the algorithm but also because it contemplates both the input and output feature spaces in the context of ZSL. Starting from the algorithm's arguments, the first difference concerns min and max, which in HPM2 are matrices of size $m \times r$, where $m$ is the number of source models and $r$ is the number of input features. These two matrices hold the minimum and maximum values of each input per source model, so that all the shapes can be generated according to their boundaries. As already explained, the main purpose of the algorithm is to include both input and output information in the ZSL problem. Therefore, a shape is now composed of both feature spaces (line 5). Furthermore, the algorithm remains the same until line 13, where the generated shape is split back into inputs and outputs in order to train the new model in line 14.

The Beta Distribution Scenario

The main goal of this section is to present a theoretical example of the use of the HPM algorithm. For that, it is ideal to have a set of simple functions with different properties, with only one input and one output feature so that the algorithm can be easily followed, and with different observed output values for the same input value. On the one hand, simple functions are suitable for this case because visual feedback can easily be depicted; on the other hand, with different types of functions the purpose of the HPM can be better grasped.
Algorithm 2 Hyper-Process Modeling – Extension with input feature space
1: procedure HPM2(F, ς, ς′, n, min, max) (F is a set of source models; ς is the set of conditions associated with each source model; ς′ is the target condition to be used for model generation; n is the number of data points per shape; min and max are m-by-r matrices (assuming X_i ∈ R^r and m source models) with all the minimum and maximum values, correspondingly.)
2:   Statistical Shape Model:
3:   Define the inputs to sample from existing models: X_i ← GenerateInput(min_i, max_i, n)
4:   for i = 1 to m do
5:     Get shape by merging input and output vectors: S_i ← [X_i, f_i(X_i)]
6:   Calculate the mean shape S̄
7:   Apply PCA to the stacked shapes S to obtain the eigenvectors φ
8:   Get the deformable parameters for each shape: b_i ← φ^T(S_i − S̄)
9:   Hyper-Model:
10:  Train the hyper-model mapping deformable parameters into conditions: h : b → ς
11:  Get the deformable parameters for the new shape: b′ = h⁻¹(ς′)
12:  Get the new shape: S′ = S̄ + φb′
13:  Get input and output vectors from the generated shape: X′, Y′ ← getInputOutput(S′)
14:  Train a model for the new task: f′ : X′ → Y′
15:  return f′

As the name of the section indicates, for this scenario we use a set of functions produced by the beta distribution. This distribution has two parameters and, for a given input $x \in \mathbb{R}$, a response $y \in \mathbb{R}$ is observed; by varying these parameters, the shape of the functions also varies. The expression for the probability density function (PDF) is
$$f(x; \alpha, \beta) = \frac{x^{\alpha - 1}(1 - x)^{\beta - 1}}{B(\alpha, \beta)},$$
where $x$ is the 1-dimensional input, $\alpha$ and $\beta$ are the parameters of the distribution and $B(\alpha, \beta)$ is defined as
$$B(\alpha, \beta) = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha + \beta)}.$$
By using the beta distribution PDF it is possible to obtain a set of heterogeneous functions suitable for demonstrating a ZSL problem and applying the HPM to address it. In this case, 25 different functions were generated from all combinations of $\alpha$ and $\beta$ values of 0.5, 1, 5, 10 and 15. This set of values was chosen because it generates 4 different types of functions. These are depicted in Figure 2, where 6 different graphs are shown, grouped into hyperbolic and linear functions (top row), exponential functions (middle row) and Gaussian functions (bottom row). The goal of this example is to predict curves outside the 25 functions generated by the presented $\alpha$ and $\beta$ values. Therefore, in the training phase only the curves generated by the 25 sets of beta distribution parameters are used, while in the test phase 16 different combinations of $\alpha$ and $\beta$ parameters are used in the HPM algorithm to generate the corresponding curves. Here, the conditions used to train the hyper-model are the $\alpha$ and $\beta$ parameters. The set of values for the $\alpha$ and $\beta$ parameters in the test phase is 4, 6, 8 and 12. In order to generate the curves, 20 input values between 0.01 and 0.99 were used to ensure a fair representation of each curve. As for the machine learning techniques, polynomial regression with multiple degrees was chosen for training both the source models and the hyper-model in the HPM algorithm. In this case, cross-validation was not an option due to the limited amount of data, specifically 25 curves for training and 16 for testing. Additionally, as one can see, the chosen test curves from the beta distribution are not biased towards a better performance of the HPM, since all the test $\alpha$ and $\beta$ parameters lie between the parameters of the training curves. Hence, we consider this experiment valid and fair. In order to clearly highlight the advantages of the HPM in a simple way, this approach is compared with the one presented by Larochelle et al. (2008) and Pollak and Link (2016), where the coefficients of the base functions of the source models are used to train the model of models (hyper-model).
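The data-generation step of this scenario can be reproduced with a few lines, assuming SciPy is available; the sketch below only builds the 25 training and 16 test curves and leaves model fitting to the HPM and HM procedures already described.

```python
import numpy as np
from itertools import product
from scipy.stats import beta

# Sketch of the data generation for the beta-distribution scenario: 25 training
# curves and 16 test curves, each sampled at 20 input values in (0, 1).
train_params = [0.5, 1, 5, 10, 15]
test_params = [4, 6, 8, 12]
x = np.linspace(0.01, 0.99, 20)

train_curves = {(a, b): beta.pdf(x, a, b) for a, b in product(train_params, repeat=2)}
test_curves = {(a, b): beta.pdf(x, a, b) for a, b in product(test_params, repeat=2)}

# The (alpha, beta) pairs play the role of conditions / task descriptions; the
# sampled curves are the datasets used to train the source models.
print(len(train_curves), len(test_curves))     # 25 and 16 combinations
```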
As described earlier, we have stated the limitations of this approach saying that the modeling technique needs to be the same for all the source tasks, where different complexities in the datasets might exist. This means that certain techniques might be better than others for specific datasets. Hence, we should call the approach from Pollak and Link (2016) simply as hyper-model (HM). To highlight the differences between these approaches, Figure 3 shows the fit of two curve types from beta distribution. The dashed blue lines are the training curves and the orange lines are the corresponding fits. On the left side are the fits from a polynomial degree 5 and on the right side the exponential and Gaussian curve fitting results. The main idea is to clearly see that a technique that tries to fit all curves have a lower performance than the most suitable curve fitting techniques for each task. As can be seen, on the left are the curves that the HM will use, and on the right side the shapes that the HPM will use to build the hyper-model. At this point, in order to better differentiate between HM and HPM, we will briefly define its differences. For the HM, the models of models is trained by mapping model coefficients into task description / conditions. Therefore, a function h : λ → ς should be trained, where λ are the model coefficients from a single technique for all the models and ς the task description. Contrary to this, the HPM approach uses deformable parameters from a deformable model proposed by the SSM concept based on a set of shapes, where a shape corresponds to each model. Hence, a function h : b → ς should be trained, where b are the deformable parameters and ς the task description. As stated before, the training phase for the hyper-model is composed by 25 parameters and task descriptions from the source tasks where machine learning techniques can be used to train models that fit these curves and are good generalizations. As in the HM approach, the same technique needs to be used for all tasks, so polynomial regression with varying degrees was chosen due to its flexibility to fit a large spectrum of curves. As for the HPM approach, exponential, Gaussian and polynomial degree 7 were used to model these curves. By analyzing the shape of the curves, it is possible to choose the best technique to fit the data, representing a clear advantage for HPM. For both approaches, different types of hyper-parameters were tested. Regarding HM, different degrees for the polynomial regression were chosen to model the source tasks, while for HPM the best models were trained according to data shape and different number of components were tested when performing PCA. In both cases, the degree and number of components tested were 3, 4, 5 and 6. As for training the hyper-model in HPM and HM, polynomial regression was again used with degrees 3, 4, 5 and 6. For evaluation metrics, the mean of mean squared error (MSE) for each of the test cases was calculated, together with the standard deviation of the MSE. This represents a total of 32 tests performed, 16 for each approach. To test the generalization capabilities of each approach, the generated curves from each approach were tested against the ground truth from the beta distribution with each curve containing 100 datapoints. As previously explained, for the training process only 20 datapoints per curve were used to train the source models. 
Evaluating against this denser ground truth allowed us to assess the performance of the proposed approach in a broader, more robust and more generic scenario, beyond the 20 points used when generating a new curve. For the HPM, 100 landmarks per shape were used, taking advantage of the generalization capabilities of using the most suitable technique for each dataset according to its properties. As presented for HPM and HPM2, the definition of the hyper-model states that it should be trained by mapping model coefficients into conditions / task descriptions. This was first formulated as such because normally the number of coefficients is greater than the number of conditions. However, as already discussed, it is a non-trivial problem to find an inverse for some of the techniques used in machine learning. Another way to handle such a problem is to use a search algorithm and find the optimal or near-optimal parameters that minimize the distance between predicted and target conditions. For this example, we would like to avoid both problems and directly map conditions into model coefficients. Hence, multiple models were trained, where each one maps all conditions into only one model coefficient. This implies that the number of models trained is the same as the dimension of the output feature space, which in this case is the number of source-model coefficients or deformable parameters. This way, the hyper-model is a composition of multiple models, where each source-model coefficient is predicted independently from the remaining ones. This should not be mistaken for ensemble regression, where multiple weak learners are trained with the same input and output features, and the outputs of all learners are then combined to produce a final prediction. As for the results of both approaches, Tables 1 and 2 present all the 16 tests performed per approach, where the best result is shown in bold. In this case, the best result should be considered as the minimum value of the mean MSE over all 16 tests performed on the test set. In addition to the evaluation metrics, the coefficient of determination R² for the hyper-models in HPM and HM is presented, so the reader can have a clear idea of their performance depending on the polynomial regression degree. Before commenting on the achieved results, one should highlight that for HM the number of model coefficients used to train the hyper-model is the Model Degree value in Table 1 plus 1, because of the bias term used in training. On the contrary, the Number of Components in Table 2 is exactly the number of parameters used to train the hyper-model. The first thing to note is that the best set of parameters for HPM performs better than the one for HM, with a mean MSE of 0.32 for 4 components and hyper-model polynomial degree 4, and 0.48 for model polynomial degree 3 and hyper-model polynomial degree 3 (which is very similar to the mean MSE for model degree 6 and hyper-model degree 3), respectively. This supports our hypothesis that, by using the SSM concept, which can take advantage of different techniques by dealing with shapes instead of model coefficients, better performance can be achieved compared with HM. The second aspect to notice is that, when making a direct comparison between the tests from both approaches in which the model degree or number of components and the hyper-model degree are the same (we might call this the same setting), the HPM performs better in all tests. This means that the HPM is more effective than HM for the same problem.
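A minimal illustration of the per-coefficient composition described above, written with scikit-learn as an assumed tool and with our own names: one regressor is trained per output coefficient, each mapping the full condition vector (α, β) to a single coefficient, so the hyper-model is a collection of independent single-output models rather than an ensemble.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def fit_per_coefficient_hyper_model(conds, params, hyper_degree=4):
    # conds: (m, 2) task conditions; params: (m, k) deformable parameters (HPM)
    # or source-model coefficients (HM). Returns one fitted model per column.
    feats = PolynomialFeatures(hyper_degree)
    Z = feats.fit_transform(conds)
    models = [LinearRegression().fit(Z, params[:, j]) for j in range(params.shape[1])]
    return feats, models

def predict_params(feats, models, target_cond):
    # Predict each coefficient independently for an unseen condition (alpha, beta).
    z = feats.transform([target_cond])
    return np.array([m.predict(z)[0] for m in models])
```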
Additionally, as previously explained, the number of model coefficients of the polynomial used in HM to train the source models corresponds to the model degree plus 1, so for the same setting the HPM is also more efficient, because it uses a lower dimension for the problem. For a better interpretation of the above results, Figure 4 presents two test scenarios for the beta distribution, with α = 4 and β = 6 on the left and α = 12 and β = 4 on the right, where the dashed blue line is the ground truth from the beta distribution and the orange line is the result of the applied ZSL approach. In each row, different settings were used: on the top, HM with model degree 3 and hyper-model degree 3; in the middle row, again the HM approach but with model degree 6 and hyper-model degree 3; in the bottom row, the HPM approach with 4 components and hyper-model degree 4. Additionally, below each image the MSE is given to assess the distance between the predicted curve and the ground truth. As can be seen, the best results are yielded by the HPM approach and, for the HM, the MSE increases as the model degree increases from 3 to 6. The effect of increasing the polynomial degree used to fit the source models is clearly visible: in the middle row the predicted curves are more complex and irregular than those in the top row. Despite being more complex, they do not produce better results and are not a better fit for the presented ground truth. Hence, for the beta distribution scenario we have shown that HPM is, first, more effective than HM, due to better performance in all the corresponding settings / tests performed, and, second, more efficient, because it addresses the same problem with less complexity than the HM approach. This is what we referred to previously: by taking advantage of the regression setting, the performance of ZSL techniques can be improved.

Discussion and Main Conclusions
As the main motivation to create new techniques for the machine learning community, specifically for the ZSL area, and consequently to build technologies that allow, e.g., assisting the integration of new product parts or new machines in an industrial scenario, the HPM algorithm is proposed. From the review of current state-of-the-art techniques to its assessment in test scenarios, the main goal of developing new technology is always to assist humans in performing a more effective and efficient job, or even to replace them in high-risk situations, avoiding catastrophic repercussions. As already described in Section 3 when presenting the HPM algorithm, one of its advantages is the shape analysis it performs on the data itself using a deformable model. By analyzing how the data vary from model to model, an unsupervised feature space that represents the main modes of deformation is defined and used to generate new shapes based on new process conditions. Compared with the hyper-model (HM) approach, using the data directly allows for a more detailed analysis of the system dynamics. Although the model coefficients are also related to the data shape, in certain situations the HM approach can overfit or underfit the data. Since the main restriction of HM is the assumption that one model fits all datasets, the same technique will perform differently depending on the dataset at hand, and can ultimately overfit the data.
However, for fairness of comparison, we should highlight that the effect of overfitting can be managed by the use of regularizers, such as LASSO or ridge regression. Since regularizers shrink the coefficient values to decrease model variance, the penalization parameters can be optimized with the well-known cross-validation strategy in order to find the best model coefficients. Underfitting can also be avoided by not using models that are too simple, e.g. by increasing the polynomial degree. Since regularizers can be used to tackle model complexity, a fair model can be trained that does not suffer too much from either overfitting or underfitting.

Figure 4: Graph plots of two test curves, where the dashed blue line is the ground truth and the orange line is the result of the applied ZSL approach. The left column shows the results for the beta distribution with α = 4 and β = 6, and the right column the results for α = 12 and β = 4. The first row corresponds to the HM approach with model degree 3 and hyper-model degree 3, the middle row to the HM approach with model degree 6 and the same hyper-model degree, and the bottom row to the HPM with 4 components and hyper-model degree 4.

Although this seems like a rule of thumb that can be applied to the HM, one should be aware that a compromise between underfitting and complexity has to be made. Let us imagine that, in an ideal situation, a relatively high polynomial degree is used to avoid underfitting, and regularizers are used to address the overfitting effect. In this case, since the polynomial degree is high, so is the number of parameters that the hyper-model in HM has to handle, and the more complex the problem becomes. Here, domain drift can occur, where the trained source models are near optimal but the hyper-model might not be, due to the high number of coefficients coming from the source models. Hence, the solution to this problem is to lower the number of parameters in the polynomial regression when training the source models. Now the complexity that the hyper-model in the HM approach has to handle is more acceptable, and we can state that an optimal hyper-model can be trained. However, since the polynomial degree is decreased when training the source models, in certain situations the models can be too simple to capture the system details, and generality is lost. This is the case where domain drift occurs again, but now with the source models being sub-optimal and the hyper-model optimal. Hence, a good trade-off between source-model and hyper-model complexity has to be achieved. In the case of HPM, since the shape analysis is made using a decomposition method, the number of eigenvectors that map a set of deformable parameters back to the original feature space of the shape also has to be chosen. Hence, this trade-off exists for HPM as well. At this point, both approaches are side by side. The main difference is that the learning process of HPM is performed jointly over all available source tasks, while for the HM each set of coefficients is learned for each task separately. Therefore, a holistic perspective is taken for HPM and a more local one for the HM. On top of that, one should not forget that in the HPM the most suitable technique for each dataset can be chosen. This is an advantage because, if the model is a good generalization of the data, the predictions made by the model can be considered reliable enough to be used for the shape analysis of HPM.
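The following is a small sketch, under our own assumptions, of the regularized source-model fit discussed above: a relatively high-degree polynomial whose coefficients are shrunk by ridge regression, with the penalty chosen by cross-validation. With only 20 points per curve, the number of folds is kept small.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import RidgeCV

def fit_regularized_source_model(x, y, degree=7):
    # High degree to avoid underfitting; ridge shrinkage to keep the variance in check.
    model = make_pipeline(
        PolynomialFeatures(degree),
        RidgeCV(alphas=np.logspace(-4, 2, 20), cv=5),  # penalty picked by cross-validation
    )
    model.fit(x.reshape(-1, 1), y)
    return model  # model.predict(x_new.reshape(-1, 1)) gives the fitted curve
```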
If these models are not good generalizations, which might occur in HM, valuable information is not captured and the hyper-model will be trained with a limited view of the system dynamics. In the presented scenario using the beta distribution, the results show the advantage of using a shape-based technique rather than using source-model coefficients directly to extrapolate task relations in a regression setting. As previously discussed, the presented scenario depicts the benefits of HPM, but we believe that a more complex scenario would make the advantage of HPM, and in particular of the shape analysis performed on the data, even clearer. Such a more complex scenario can easily be imagined where the source tasks represent non-linear systems and ANNs are used to train those tasks. Normally, an ANN for a regression setting has multiple hidden layers and neurons per hidden layer, and is composed of tens, hundreds or even thousands of weights to optimize. In that sense, we consider the use of ANNs to model non-linear systems a more likely scenario in real-world machine learning applications. In such a setting, the use of HM will be very limited, because a hyper-model would have to be trained using all the weights from multiple source models. The resulting dimension is too high to deal with, and it is impractical to train a hyper-model with such a high-dimensional input feature space. Additionally, since the hyper-model maps model coefficients into conditions, if new conditions are given to generate a new model, the problem of finding the most suitable model coefficients would be too complex. Furthermore, if one considers the matching between the weights of all source-model ANNs, the HM might again not be the most suitable technique. As thoroughly explored in the literature, two slightly different ANNs, particularly multi-layer perceptrons (MLPs), might provide very similar results with very different weight values, due to the stochastic learning routines that can be used. Although the same does not happen with polynomials, where similar models normally have close coefficients, polynomials might not be capable of modeling some non-linear systems. In contrast, the HPM does not suffer from such issues. Even if the source models are all ANNs with thousands of weights, since in HPM the shape analysis is made on the data itself, the dimensionality is greatly reduced and a hyper-model can be trained. Moreover, since HPM does not assume any matching of model coefficients between different source models, it has no problem dealing with machine learning techniques with a high number of coefficients to optimize, as long as generality is maximized. This way, merely by hypothesizing about a more complex scenario where ANNs are used, one can easily understand the inherent benefits of HPM. Finally, we would like to bring into the discussion an additional potential problem that HPM can address. This kind of problem is not related to how to better learn a new problem of interest, as the definition of ZSL states, but to which problem of interest is worth pursuing from a large spectrum of possibilities. The intuition behind such an exploratory approach using ZSL for regression is based on the possibility of generating new models for a continuum of conditions. Based on this, a vast number of models can be efficiently generated from a pool of source models and, if correctly assessed, the ones not worth investigating can be excluded and the most promising ones selected.
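A compact sketch (ours, not the original implementation) of the shape analysis that keeps HPM independent of the source-model type: every source model, whether a polynomial or an ANN, is only sampled at a common set of landmark inputs, and PCA on the resulting shape matrix yields the mean shape, the eigenvectors and a low-dimensional vector of deformable parameters per task.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_shape_space(source_models, x_landmarks, n_components=4):
    # source_models: list of callables f(x) -> y, one per task, of any technique.
    shapes = np.array([f(x_landmarks) for f in source_models])   # (m, n_landmarks)
    pca = PCA(n_components=n_components).fit(shapes)
    b = pca.transform(shapes)            # deformable parameters, one row per source task
    return pca, b

def shape_from_parameters(pca, b_new):
    # S' = mean shape + eigenvectors * b', i.e. the PCA inverse transform.
    return pca.inverse_transform(np.atleast_2d(b_new))[0]
```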
For this exploratory use of ZSL, let us consider a new scenario. Imagine that a pharmaceutical company wants to explore new processes, like the combination of different chemical agents that have different interactions among them, to build new drugs. In a pool of chemical agents, the number of possible combinations is very high and it is not feasible to try them all. Trying all combinations is similar to "exploration" in search algorithms that need to search the whole solution space to find the optimal solution. Rather, a more exploitation-based approach can be taken. In this case, if information is available about the chemical agents (e.g. some chemical properties as task descriptions), the end result of their reaction and the parameters used for their combination (e.g. properties of the medium used for the reaction), models can be trained to predict the outcome of a reaction based on the experimental parameters, for a specific combination of agents. Hence, by applying the HPM algorithm, models can be generated by providing the new target combination of agents. From this point, a pool of new models can be generated using HPM and ranked according to the drug specifications. Then, the best combinations of chemical agents used to generate the models can be tested in the lab to provide a more directed search for the final drug result. Ultimately, if new models are trained based on these new lab experiments, the pool of source models used by HPM can be enlarged. Hence, the process can be repeated until the optimal, or near-optimal, chemical agent combination is found. After experimentation, if the drug requirements are not yet met, a new iteration can be made, where the HPM is retrained and generates a new set of candidate predictive models. This iterative approach can be performed as many times as required and, in principle, it should be more efficient than experimenting with all possible combinations of chemical agents. Even if the experimental results are very different from the models generated using HPM in the first iteration, since the HPM is updated in every iteration with new models, it becomes better and better at generating models and will eventually start converging. The intuition behind this process is basically to create a gradient among models so that trying all the agent combinations in the lab can be avoided. Of course, humans do not perform these kinds of experiments blindly or choose the next chemical agents to try in the lab randomly, but rather base the choice on knowledge about the field and previously performed experiments. This last approach is adopted by humans in almost every task that has a clear goal to fulfill. By measuring how far the result is from the intended goal, fewer changes need to be tried and better results can be achieved. This applies to our example as well, where humans can collect knowledge about which chemical agents to experiment with based on a set of previous experiments. However, this gets more challenging when the problem at hand is far too complex for humans to understand the gradient of progress towards the final goal, and the selection of new experiments is then often based on intuition. When there is a high number of variables describing the experiments and their end results, machine learning algorithms can support a more informed approach to such complex problems, avoiding reliance on intuition that is hard to justify and from which no knowledge can be exploited.
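The loop below is a purely hypothetical rendering of this iterative procedure; the callables for model generation, ranking, lab experimentation and the acceptance test are placeholders for domain-specific steps that the text only describes in words.

```python
def explore_with_hpm(hpm_generate, run_experiment, score, meets_spec,
                     source_models, candidate_conditions, n_lab_tests):
    """Generic exploration loop: all domain-specific steps are injected as callables."""
    tested = []
    while candidate_conditions:
        # 1. Generate one candidate model per untried condition from the current pool.
        candidates = {c: hpm_generate(source_models, c) for c in candidate_conditions}
        # 2. Rank the candidates against the drug specification and keep the best few.
        ranked = sorted(candidates, key=lambda c: score(candidates[c]))
        to_test = ranked[:n_lab_tests]
        # 3. Run the selected lab experiments and train new source models from them.
        new_models = [run_experiment(c) for c in to_test]
        source_models = source_models + new_models
        tested.extend(to_test)
        # 4. Stop once a tested combination meets the specification; otherwise iterate
        #    with the enlarged pool, which improves the next round of HPM generation.
        if any(meets_spec(m) for m in new_models):
            break
        candidate_conditions = [c for c in candidate_conditions if c not in to_test]
    return source_models, tested
```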
Hence, the main takeaway is neither to rely blindly on machine learning algorithms, mainly because it is not possible to quantify and translate all human knowledge into an algorithm, nor to discard them completely. Instead, these ZSL algorithms can be seen as complementary tools that humans can take advantage of to ease the burden of the hard and heavy work that sometimes needs to be performed.
Are These Truly Rheumatoid Arthritis or Antisynthetase Syndrome Cases? In the study, 228 patients classified as rheumatoid arthritis (RA) according to the 2010 American College of Rheumatology (ACR)/European League Against Rheumatism (EULAR) classification criteria were included. The aim of the 2010 criteria is to provide early recognition of RA cases and initiation of treatment before structural damage. For this reason, the specificity of the criteria is criticized and the possibility of other rheumatologic pathologies in the early period being mistakenly considered as RA is emphasized. 2 As a matter of fact, in a study by Brittsemmer et al. 3 of 455 early arthritis patients, although 51% of patients were classified as RA with the current criteria, they were defined as non-RA according to expert opinion. For this reason, we believe that the anti-aminoacyl-transfer ribonucleic acid synthetase (ARS) positive patients described by the authors in the study should be examined as to whether or not they really had RA. Dear Editor, We read with interest the study of Matsushida and colleagues entitled "The Association of Anti-Aminoacyl-Transfer Ribonucleic Acid Synthetase Antibodies in Patients With Rheumatoid Arthritis and Interstitial Lung Disease" 1 published in your journal. We would like to inform you of some of the issues that drew our attention in the related work. In the study, 228 patients classified as rheumatoid arthritis (RA) according to the 2010 American College of Rheumatology (ACR)/European League Against Rheumatism (EULAR) classification criteria were included. The aim of the 2010 criteria is to provide early recognition of RA cases and initiation of treatment before structural damage. For this reason, the specificity of the criteria is criticized and the possibility of other rheumatologic pathologies in the early period being mistakenly considered as RA is emphasized. 2 As a matter of fact, in a study by Brittsemmer et al. 3 of 455 early arthritis patients, although 51% of patients were classified as RA with the current criteria, they were defined as non-RA according to expert opinion. For this reason, we believe that the anti-aminoacyl-transfer ribonucleic acid synthetase (ARS) positive patients described by the authors in the study should be examined as to whether or not they really had RA. Antisynthetase syndrome (AS) is an important clinical condition in the differential diagnosis of early RA. It has been reported that the development of myositis can take years after the onset of arthritis and interstitial lung disease (ILD), which are also not rarely seen before myositis. 4,5 In 10% of cases, the disease presents with other findings and myositis never occurs. 6 The expected pattern of arthritis in AS is a symmetrical polyarthritis involving small joints such as the wrist, metacarpophalangeal and proximal interphalangeal joints, which is quite similar to RA joint involvement. 7 Anti-cyclic citrullinated peptide (CCP) and rheumatoid factor (RF) positivity can be observed in AS and lead to false RA diagnoses. 8 On the other hand, we do not have established knowledge that ARS is seen in cases with classical RA. In the study of Matsushida et al., 1 non-specific interstitial pneumonitis (NSIP) was reported as the ILD type in all of the ARS-positive patients classified as RA. NSIP is the most common pattern of pulmonary involvement in patients with AS. 9
However, the most common pattern of pulmonary involvement expected in RA cases is usual interstitial pneumonia, and it was not found to be the most common form of pulmonary involvement in the relevant article of Matsushida et al. 1 The authors noted that biological agents used in the treatment of myositis exacerbated the disease and that there was no increase in disease activity in their present cases. The biological treatments used by the patients were as follows: abatacept in two cases and etanercept in one case. Abatacept is a successfully tested agent in resistant myositis. 10 The biological treatments implicated in exacerbation of myositis are usually anti-tumor necrosis factor agents. 11 It was reported in the current study that, in the one case treated with etanercept, no clinical exacerbation of the disease was seen. However, we do not consider "no increase in disease activity" as evidence for the diagnosis of RA. For all these reasons, we believe that more serious evidence is needed to show that all cases in this article were true RA. We think that the authors should show how long they followed up these cases, how many patients did not develop myositis, how many patients developed myositis, whether there were typical erosive findings seen in RA, and whether there were Raynaud's phenomenon and capillaroscopic changes in these patients, which are more common in myositis.

Declaration of conflicting interests
The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.

Funding
The authors received no financial support for the research and/or authorship of this article.

Author Response
Thank you very much for your valuable feedback. You raised some concerns about whether the cases reported in our study were authentically RA, or actually antisynthetase syndrome characterized by symptoms of arthritis or ILD. We would like to express our appreciation for your informative remarks. We too understand the importance of this point: in fact, this similarity complicated our efforts to diagnose some of the cases in the study. Nevertheless, we included these cases in our report because we deemed a diagnosis of RA to be reasonable at the time of our investigation. Our impetus for starting this research was an RA patient who exhibited persistent polyarthralgia, ulnar deviations, and swan neck deformities in the fingers of both hands, as well as X-ray findings of bone erosion and destruction, and who additionally tested positive for anti-PL-12, a kind of anti-aminoacyl-transfer ribonucleic acid synthetase (ARS) antibody. We diagnosed her with classic RA based on hematological findings of anti-CCP and RF positivity, high levels of metalloproteinase-3, C-reactive protein, and other markers. This patient also had comorbid ILD, which led us to carefully examine her condition using bronchoscopy; yet, we observed no signs of concurrent polymyositis or dermatomyositis. Based on our experience with this patient, we measured anti-ARS levels in many patients with classic RA examined at our institution using the Myositis Profile 3 kit (Euroimmun AG, Lübeck, Germany). Arguments can be made both for and against this kit in terms of its sensitivity and specificity, as while it can conveniently measure many kinds of anti-ARS antibodies, it cannot measure them all. If a sample tested positive for anti-ARS, we checked for cytoplasmic staining in human epithelial type 2 cells using indirect immunofluorescence.
However, the truth is that this confirmation method is not definitive, as there may have been antibodies other than anti-ARS that reacted in the cytoplasm. Some patients also exhibited Raynaud's syndrome, as you mention; however, the osseous changes observed were characteristic of RA, and not of arthritic symptoms associated with anti-synthetase syndrome. When writing this paper, we believed the diagnosis of RA to be unproblematic, noting that sometimes even patients with RA can test positive for anti-ARS, in which cases a correlation with ILD is observable. Nevertheless, we also agree with the points you have made, and have re-considered our diagnosis of RA in each case. However, one of our anti-PL-12-positive cases has recently shown symptoms of myositis, indicating he might instead have an overlapping syndrome between RA and polymyositis. The present manuscript does not contain detailed data on RA staging or Raynaud's syndrome co-occurrence for each patient. We are planning to re-examine the cases presented here in greater depth, if possible including re-testing of samples for anti-ARS using immunoprecipitation techniques, and conducting careful long-term observation and follow-up. We hope to prepare another report in the future in the event of significant changes in our results after an in-depth examination of the patients during long-term follow-up. Once again, we would like to thank you very much for your valuable feedback.
Barriers to global pharmacometrics: educational challenges and opportunities across the globe 1Department of Clinical Pharmacy and Biochemistry, Institute of Pharmacy, Freie Universitaet Berlin, Berlin, Germany 2qPharmetra LLC, Nijmegen, The Netherlands 3Research DMPK, Drug Discovery Sciences, Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach, Germany 4Pharmaceutical Sciences Graduate Program, Federal University of Rio Grande do Sul, Porto Alegre, Brazil 5Division of Clinical Pharmacology, Department of Medicine, University of Cape Town, Cape Town, South Africa 6Department of Pharmaceutical Sciences, Faculty of Chemistry, Universidad de la República, Montevideo, Uruguay 7Department of Pharmacology & Clinical Pharmacology, The University of Auckland, Auckland, New Zealand 8Department of Pharmaceutical Sciences, College of Pharmacy, University of Tennessee Health Science Center, Memphis, Tennessee, USA 9Division of Clinical Pharmacology, Department of Medicine, University of Cape Town, Cape Town, South Africa 10CP+ Associates GmbH, Basel, Switzerland 11Pharmacometrics Africa NPC, Cape Town, South Africa 12Center for Pharmacometrics and Systems Pharmacology, Department of Pharmaceutics, College of Pharmacy, University of Florida, Orlando, Florida, USA 13Certara, Inc., Princeton, New Jersey, USA 14School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, Queensland, Australia 15Department of Clinical Pharmacy, Institute of Pharmacy, Freie Universitaet Berlin, Berlin, Germany

During the past three decades, pharmacometrics has developed considerably and gained recognition globally and across disciplines, causing the demand for well-trained pharmacometricians to outweigh their availability. 1 Thus, high-quality and widespread education of pharmacometricians is key. A symposium and panel discussion titled "Training of the Next Generation of Pharmacometric Talent Around the World" at the third World Conference on Pharmacometrics (WCoP2022, Cape Town) explored this issue and resulted in the perspective presented here. Pharmacometrics is a complex and multidisciplinary scientific field with strong interplay between academia and industry. As a consequence, pharmacometrics education is associated with both opportunities and challenges, 2,3 prompting unique initiatives and networks across the globe. [4][5][6] Approximately 15 years after starting a number of educational initiatives, 7-9 their impact needs to be discussed with respect to the initial purpose and how to move forward from them. Therefore, a symposium was held at WCoP2022, a hybrid event with approximately 300 participants worldwide, which aimed to share different perspectives on pharmacometric education and to obtain a global perspective.

THE SYMPOSIUM: INTERNATIONAL PERSPECTIVES AND AUDIENCE INPUT
The symposium consisted of six talks in which international speakers with diverse backgrounds, all of whom are actively training pharmacometric talent, presented their experiences and perspectives on the required skill sets of future pharmacometricians, education and training concepts, and related challenges and opportunities, followed by a panel discussion with contributions from the audience.
The talks and discussion made clear that different challenges and opportunities for pharmacometrics training exist across the globe, with some being specific to certain localities and others being consistent worldwide, showing high intra- and interregional differences. A depiction of the main ones is provided in Figure 1, and more details, including the recordings of all talks representing successful examples, can be found in the Supplementary Material.

FIGURE 1 The local and global challenges for pharmacometrics training and education. Green dots with arrows represent locations of the speakers of the World Conference on Pharmacometrics symposium, and colored circles represent the different challenges and opportunities to tackle them, which are further discussed in the text. The ratio of academic to geographic movement of human capital is depicted semantically using two colors in the same circle.

In the remainder of this perspective, we aim to synthesize the insights of the symposium into three clearly grouped challenges and opportunities which need tailored approaches: the movement of human capital, the multidisciplinary nature, and the unbalanced spread of pharmacometrics across regions.

Challenges and opportunities related to the movement of human capital ("brain-drain")
Pharmacometricians trained in low- and middle-income countries (LMICs) often move to the European Union or United States, leading to a brain-drain and difficulties in establishing not only larger teaching and training groups in these countries but also a stronger local pharmacometrics community ("geographic brain-drain"). The use of remote working opportunities should in theory allow a dampening of this geographic brain-drain, especially in an in silico field such as pharmacometrics. However, as echoed by the panelists during the symposium and the panel discussion (Supplementary Material), in practice this is not (yet) the case, and both pharmaceutical companies and consulting companies hire mostly from the EU, USA and Oceania or require working visas at these locations. Furthermore, academic centers, regardless of location, encounter increasing difficulty in attracting and retaining researchers at the post-doctoral, early- to mid-career and even senior faculty level. Not re-filling open positions, as is already observed, presents a global threat, eventually leading to fewer academic centers that offer opportunities for the next generation in the long term. This in turn makes teaching and training activities and their sustainability more challenging, leading to a loss of critical mass to sustain teaching ("academic brain-drain"; Figure 2). Both brain-drains are especially the case in LMICs, where a critical mass of pharmacometricians is not always present. A minimum critical mass is needed to generate human resources and capabilities at a local level, and furthermore to connect the local community with the rest of the field globally. As long as this is not achieved, the above-described brain-drain will remain the status quo in those countries, preventing the local communities from significantly supplying the world with human resources. Therefore, aiming for trained pharmacometricians to develop the discipline outside of Europe and the US, e.g. through joint efforts such as Pharmacometrics Africa, RedIF and PAGANZ, should be seen as part of the way forward. However, a stronger collaboration between academia and industry could also be an opportunity for local development of pharmacometricians within an academic setting.
Initiatives such as internships, industry mentors, on-site visits and projects using shared data (e.g., also from preclinical studies) are potential ways in which students and mid-level researchers can strive for a "best-of-two-worlds" approach. Academia and consulting companies are currently fostering this approach by "sharing" part-time employees and offering them training and supervising opportunities. This might reduce the need for them to move away from their local or academic position. This stronger collaboration could also lead to increased mentoring support from industry in the form of, e.g., industry mentors, expert knowledge sharing, and guidance provided to students and early-career scientists. Indeed, sharing expertise back to academia is one of the essential ways in which industry partners can contribute towards moving forward from the status quo. Lastly, financial and logistic support, especially from industry, for student scholarships, conference attendance, or academic-industry exchanges for the training of the next generation of pharmacometrics talent might stimulate dedicated programs to stay active or grow capacity and attract talent to academic positions, which might make a (part-time) academic career more attractive for scientists.

Challenges and opportunities related to pharmacometrics as a multidisciplinary field
Due to the multidisciplinary nature of pharmacometrics, there are two key challenges in exploring the potential of as-yet unidentified next-generation talent: undergraduate students with the right skill set and mind-set need to be attracted, in a background-specific way, to the discipline and opportunities of pharmacometrics, which are mostly new to them. Background-specific material could be developed to this end in a global effort. The curriculum of any academic training program should, especially at the beginning, acknowledge this diversity in backgrounds. Yet, all involved will quickly learn and come to see the diversity as an advantage. A critical aspect in developing the multidisciplinary pharmacometric mindset in young scientists is the understanding of fundamental principles of pharmacokinetics, pharmacodynamics, biopharmaceutics and pathophysiology. Dedicated modules in these basic sciences could be part of international training programs and included as recommended or mandatory parts of graduate programs. What can be left for on-the-job training might differ for students from different scientific backgrounds. Furthermore, the required expertise will strongly depend on the area in which a pharmacometrician works (e.g., clinical, preclinical, or regulatory authorities). Therefore, building a one-size-fits-all university educational program is challenging. Instead, education could focus on a curriculum based on a desired set of core competencies. Multidisciplinary and specialized (under-)graduate courses could be developed in consensus with the pharmacometrics community and recognized by a professional international body, which would help to attract interested students from various disciplines early on.

FIGURE 2 Imbalanced flow of pharmacometricians between academia, and pharmaceutical industry and consultancy, leading to a possible decrease in newly trained pharmacometric talent.

Our colleagues have provided more details about the practical implementation of such an approach from a US perspective.
The aforementioned multidisciplinary mind-set allows pharmacometricians to be "good model communicators" who can efficiently translate between different disciplines and identify the most impactful questions for pharmacometric analyses. 10 Therefore, training to communicate the results, implications, and assumptions of pharmacometric analyses should be considered just as important as methodological training. Good model communicators are pivotal for a sustainable future of pharmacometrics and for facilitating the move from an expert discipline to a discipline fully embedded within project teams.

Challenges and opportunities related to the unbalanced spread of pharmacometrics
The pharmacometrics community is relatively small but globally spread. A combination of local, international, and global collaborations could lead to optimal use of the unique insights across the world and allow students access to a large international network. International organizations such as the International Society of Pharmacometrics and WCoP are facilitating and spearheading global networking and collaboration initiatives. Furthermore, in past years, hybrid and remote working formats have been shown to be feasible for a pharmacometrics career. Especially smaller research groups and consulting companies have taken the opportunity to recruit from anywhere on the globe while allowing pharmacometric talents to work in their country of origin. This might improve talent retention and also allow better collaboration between remote-working pharmacometricians and local academic groups, fostering the local development of pharmacometric groups and strengthening connections in the pharmacometrics community.

ROADMAP FOR NEXT STEPS
Training the next generation of pharmacometricians is a challenging task. Until now, it has been tackled through varying approaches across the globe, showcasing unique local challenges that might (only) be solved on a global scale. For example, the practical implementation of such an approach from a US perspective is provided in a companion paper to this one (under review). Indeed, joint efforts for the continuous and sustainable development of pharmacometrics are crucial: overarching umbrella initiatives, for example led by one of the hosts of the large pharmacometrics meetings such as PAGE, ACoP, or WCoP, could connect local initiatives such as the networks and training programs mentioned here (RedIF, PMXAfrica, PharMetrX) and others (Asian Pharmacometrics Network, PAGANZ) and share their strengths while mitigating their challenges and weaknesses. This could lead to a centralized location where interested students can learn about pharmacometrics and about which training opportunities exist within reach; an expansion of the ISoP website with all pharmacometric training programs and networks could be a good starting point. Stronger collaboration between academia and the pharmaceutical industry in the form of internships, scholarships, and exchanges could foster the sustainable growth of pharmacometric skills in early-career scientists across the globe. Furthermore, financial and mentoring support, also from the pharmaceutical industry, for pharmacometricians-in-training in the form of scholarships, academic-industry exchanges and sponsored conference attendance could, especially in LMICs, increase the local retention of these young talents. As a next step, an open debate across the pharmacometrics community on how to tackle the identified challenges would be beneficial.
We invite the readers and the global pharmacometrics community to reach out with their point of view, ideas, or suggestions in an open debate to achieve this goal.

FUNDING INFORMATION
SH was partly supported by the Alexander von Humboldt Foundation, Germany.
Oblique derivative problem for non-divergence parabolic equations with discontinuous in time coefficients in a wedge

We consider an oblique derivative problem in a wedge for nondivergence parabolic equations with discontinuous in $t$ coefficients. We obtain weighted coercive estimates of solutions in anisotropic Sobolev spaces.

Introduction
Consider the parabolic differential operator (1), where x ∈ R n , t ∈ R and the convention regarding summation from 1 to n with respect to repeated indices is adopted. Here and elsewhere D i denotes the operator of differentiation with respect to x i , i = 1, . . ., n, D = (D 1 , . . ., D n ), and ∂ t denotes differentiation with respect to t. We assume that a ij are measurable real valued functions of t satisfying a ij = a ji and the ellipticity condition (2). In [5], [6] it was shown by Krylov that for coercive estimates of ∂ t u and D(Du) one needs no smoothness assumptions on the coefficients a ij with respect to t. The only assumption which is needed is estimate (2). Solvability results in the whole space R n × R for equation (1) in L p,q spaces were proved in [5]; solvability of the Dirichlet problem in the half-space R n + × R in weighted L p,q spaces was established by Krylov [6] for a particular range of weights and by the authors [7] for the whole range of weights. Similar estimates for the oblique derivative problem in a half-space are obtained in [9], see also [2]. This paper addresses solvability results for the boundary value problems for (1) in the wedge. Namely, let n ≥ m ≥ 2 and let K be a cone in R m . We assume that the boundary of ω = K ∩ S m−1 is of class C 1,1 , where S m−1 is the unit sphere in R m . We put K = K × R n−m . We underline that the case m = n, where K = K, is not excluded. In what follows we use the representation x = (x ′ , x ′′ ), where x ′ ∈ R m and x ′′ ∈ R n−m .

First, we recall the important notion of critical exponents for the operator L in the wedge K. They were introduced in [8], where the Dirichlet problem (3) was considered. To define them we need the space V(Q K R (t 0 )) of functions u with finite norm; we write u ∈ V loc (Q K R (t 0 )) if u ∈ V(Q K R ′ (t 0 )) for all R ′ ∈ (0, R). The first critical exponent is defined as the supremum of all real λ such that |u(x; t)| ≤ C(λ, κ) for a certain κ ∈ (1/2, 1) independent of t 0 , R and u. This inequality must be satisfied for all t 0 ∈ R, R > 0 and for all u ∈ V loc (Q K R (t 0 )) subject to the stated conditions, where Λ D is the first eigenvalue of the Dirichlet boundary value problem for the Beltrami-Laplacian in ω.

In order to formulate the main result of [8] we introduce two classes of anisotropic spaces. For 1 < p, q < ∞ we define L p,q = L p,q (K × R) and Lp,q = Lp,q (K × R) as spaces of functions f with finite norms. Then, for any such f, there is a solution of the boundary value problem (3) satisfying the estimates (5) and (6). This solution is unique in the space of functions with the finite norms in the left-hand side of (5) (respectively, of (6)).

Here we complement this result by considering the problem (7), where f = (f 1 , . . ., f n ). Denote by Γ D K the Green function of problem (3) (this Green function was constructed in [8]). One of the results of this paper is the following: the function (9) gives a weak solution to problem (7) and satisfies the estimate (10); moreover, the function (9) gives a weak solution to problem (7) and satisfies the estimate (11). Let us turn to the oblique derivative problem (12) in K. We assume additionally that the cone K is strictly Lipschitz, i.e. (13) holds, where φ is a Lipschitz function in the sense of (14). The main result of this paper is the following.
The constant C depends only on ν, µ, p, q and K.

Remark 1. Since λ + c and λ − c are positive, the intervals for µ in (8) and (15) are non-empty even for m = 2.

We use an approach based on the study of the Green functions. In Section 2 we collect (partially known) results on the estimates of the Green function for equation (1) in the whole space and in the wedge subject to the Dirichlet boundary condition. In particular, in Sect. 2.2 we prove Theorem 1. Section 3 is devoted to the estimates of the Green function for the oblique derivative problem and to the proof of Theorem 2. Let us recall some notation. In what follows we denote by the same letter a kernel and the corresponding integral operator; here we extend the functions T and h by zero to the whole space-time if necessary. For x ∈ K, d(x) is the distance from x to ∂K. We use the letter C to denote various positive constants. To indicate that C depends on some parameter a, we sometimes write C(a).

2 Preliminary results

The estimates in the whole space
Denote by Γ the Green function of the operator L in the whole space; it is given by an explicit formula for t > s and equals 0 otherwise. Here A(t) is the matrix {a ij (t)} n i,j=1 . The above representation implies, in particular, the following estimates.

Proposition 2. Let α and β be two arbitrary multi-indices. Then, for x, y ∈ R n and s < t, the stated estimates hold. Here σ depends only on the ellipticity constant ν and C may depend on ν, α and β.

The following statement is a particular case of a general result on the boundedness of singular operators in Lebesgue spaces with Muckenhoupt weights. For p = q it can be extracted from [4]. However, we could not find a corresponding result for anisotropic spaces and give here a direct proof.

Theorem 3. Let 1 < p, q < ∞, and let µ satisfy the corresponding condition. Then the integral operator with the stated kernel is bounded.

Proof. For µ = 0 this statement is proved in [6] (for the space L p,q ) and in [7] (for the space Lp,q ). Thus, it is sufficient to prove the boundedness of the operator with the above kernel. Using (18), (19) and elementary estimates we obtain the required bounds for x, y ∈ R n and s < t; here r = min{|µ|, 1}. Due to (21) and (20), the kernels G satisfy the conditions of Proposition 4 (see Appendix) with λ 1 = −r, λ 2 = 0, ε 1 = ε 2 = 0. Thus, the operator G is bounded in L p (R n × R) and L p,∞ (R n × R). The generalized Riesz-Thorin theorem, see, e.g., [11, 1.18.7], shows that this operator is bounded in L p,q (R n × R) for any q ≥ p. For q < p the same argument provides boundedness of the operator, and thus in L p ′ ,q ′ (R n × R). Now a duality argument gives the statement of the Theorem for the spaces L p,q . To deal with the scale Lp,q , we take a function h supported in the layer |s − s 0 | ≤ δ. Thus, our kernel satisfies the assumptions of Proposition 5 (see Appendix), where C does not depend on δ and s 0 . The last estimate is equivalent to the second condition in [1, Theorem 3.8], while the boundedness of G in L p (R n × R) gives the first condition in this theorem. Therefore, Theorem 3.8 of [1] ensures that G is bounded in Lp,q (R n × R) for any 1 < q < p < ∞. For q > p this statement follows by duality arguments.

Coercive estimates for weak solutions to the Dirichlet problem in the wedge
We recall that Γ D K stands for the Green function of the operator L in the wedge K under the Dirichlet boundary condition, and λ ± c > 0 are the critical exponents for L in K. The next statement is proved in [8, Theorem 3.10]; in the general case the proof runs almost without changes.
For x, y ∈ K, t > s the following estimates are valid, where σ is a positive constant depending on ν, ε is an arbitrary small positive number and C may depend on ν, λ ± , α, β and ε.

Now we turn to problem (7) and to the proof of Theorem 1. Similarly to the first part of the proof of Theorem 3, using Proposition 4 and the generalized Riesz-Thorin theorem, we conclude that the operators T 0 and T 1 are bounded in L p,q (R n × R) for 1 < p ≤ q < ∞. For q < p we proceed by a duality argument and arrive at the same conclusion for all 1 < p, q < ∞ and µ subject to (8). To estimate the first term in the left-hand side of (10) we use local estimates. For ξ ′′ ∈ R n−m , ρ > 0 and ϑ > 1, we define the corresponding cylinders. Localization of the estimate from [9, Theorem 1 (i)] with µ = 0 (by using a cut-off function which is equal to 1 on Π ρ,2 and 0 outside Π 2ρ,8 ) gives a bound for any ξ ′′ ∈ R n−m and ρ > 0. Using a proper partition of unity in K, we arrive at the global estimate. This immediately implies (10) with regard to (26).

(ii) To deal with the space Lp,q , we need the following lemma (compare with the second part of the proof of Theorem 3).

Lemma 1. Let a function h be supported in the layer |s − s 0 | ≤ δ and satisfy ∫ h(y; s) ds ≡ 0. Also let p ∈ (1, ∞) and µ be subject to (8). Then the operators T j , j = 0, 1, 2, 3, satisfy the stated bounds, where C does not depend on δ and s 0 .

Proof. By ∫ h(y; s) ds ≡ 0, we have the corresponding representation (we recall that all functions are assumed to be extended by zero outside K). Combination of these estimates gives the required bound. Thus, the kernels in (27) satisfy the assumptions of Proposition 5 with κ = ε, ε 1 = 0 and with λ and µ replaced by µ + 1 for the kernels T 0 and T 2 , and correspondingly for the kernels T 1 and T 3 . Inequality (25) becomes (50), and the Lemma follows.

We continue the proof of the second statement of Theorem 1. Estimate (10) for q = p provides boundedness of the operators T j , j = 0, 1, 2, 3, in L p (R n × R), which gives the first condition in [1, Theorem 3.8]. Lemma 1 is equivalent to the second condition in this theorem. Therefore, Theorem 3.8 of [1] ensures that these operators are bounded in Lp,q (R n × R) for any 1 < q < p < ∞. For q > p this statement follows by duality arguments. This implies estimate (11). The proof is complete.

3 Oblique derivative problem

The Green function
From here on we use the notation x ′ = (x 1 , x) and assume that the cone K is strictly Lipschitz, i.e. (13) and (14) are satisfied.

Theorem 4. There exists a Green function Γ N K = Γ N K (x, y; t, s) of problem (12). Moreover, if λ + < λ + c and λ − < λ − c then the following estimates are valid for arbitrary x, y ∈ K, t > s. Here σ is a positive number depending on ν, α and β, ε is an arbitrary small positive number and C may depend on ν, λ ± , Λ, α, β and ε.

To estimate derivatives with α 1 = 0 we write the corresponding representation, where we have used (31) and (23). Now we apply Lemma 4 (see Appendix) and estimate the last integral by C R , which gives (28) for α 1 = 0. Since ∂ s Γ N K (x, y; t, s) can be expressed via D 2 y Γ N K (x, y; t, s), the estimate (29) follows from (28).

Estimates for the difference Γ N K − Γ
The next representation of the Green function Γ N K follows from (31), where Γ is the Green function of the operator L in the whole space. Now we are in the position to prove the main estimate of this Section.
Proof. We begin with estimates for D α x D β y F with arbitrary α. Using (23) and (18), we obtain a first bound. Taking into account that z = (φ(ẑ), ẑ, z ′′ ) and integrating with respect to z ′′ , we arrive at the next estimate. Let us estimate the expression in the last line of (35). In any case we obtain the stated bound. Next, we claim that the corresponding inequality holds for positive c. Indeed, we note an elementary estimate which implies the claim and leads to (37). Using (36) and (37), we estimate the integral in (35) accordingly. Applying Lemma 5 (see Appendix) for estimating the right-hand sides of (38) and (39), we get the bound with ̺ = (λ − − 1) − . Now we make the corresponding change of variables in the right-hand side of (40).

Proof of Theorem 2. The estimate of the last terms in the left-hand sides of (16) and (17) is equivalent to the boundedness of the integral operators with the corresponding kernels in L p,q (R n × R) and Lp,q (R n × R), respectively. The first statement follows from Lemma 2 and Theorem 3, the second one from Lemma 3 and Theorem 3. The first terms in (16) and in (17) are estimated by using equation (1), and the Theorem follows.

4 Appendix

Lemma 4. Let φ be a function on R + satisfying the stated assumptions. Here ε is an arbitrary small positive number, while C may depend on a, b, c, Λ and ε.

Subcase 2.1: ̺ 1 ≤ ̺ 2 . Then we split the interval (x 1 , ∞) into two parts. The integral over (2Λ̺ 1 , ∞) is estimated in the same way as in Case 1. Further, the integral over (x 1 , 2Λ̺ 1 ) is estimated accordingly; in both cases C may depend on a and b.

Proof. (i) The case a, b ≥ 0 is well known. (ii) The integral over the set {|z| > √ ̺ 2 } in (49) is estimated by an expression that does not exceed the right-hand side of (49) by (i); the integral over the set {|z| < √ ̺ 2 } is estimated by an expression that also does not exceed the right-hand side of (49). (iii) The remaining cases can be easily reduced to the case (ii).

We formulate two auxiliary results on estimates of integral operators. The first statement is proved in [7, Lemmas A.1 and A.2]; in the second, the relevant quantity does not exceed a constant C independent of δ and s 0 .
Using a Drone Sounder to Measure Channels for Cell-Free Massive MIMO Systems

Measurements of the propagation channel form the basis of all realistic system performance evaluations, as the foundation of statistical channel models or to verify ray tracing. This is also true for the analysis of cell-free massive multi-input multi-output (CF-mMIMO) systems in real-world environments. However, such experimental data are difficult to obtain, due to the complexity and expense of deploying tens or hundreds of channel sounder nodes across the wide area a CF-mMIMO system is expected to cover, especially when different configurations and numbers of antennas are to be explored. In this paper, we provide a novel method to measure channels for CF-mMIMO systems using a channel sounder based on a drone, also known as a small unmanned aerial vehicle (UAV). Such a method is efficient, flexible, simple, and low-cost, capturing channel data from thousands of different access point (AP) locations within minutes. In addition, we provide sample 3.5 GHz measurement results analyzing deployment strategies for APs and make the data open source, so they may be used for various other studies.

Abstract: Measurements of the propagation channels in real-world environments form the basis of all realistic system performance evaluations, as the foundation of statistical channel models or to verify ray tracing. This is also true for the analysis of cell-free massive multi-input multi-output (CF-mMIMO) systems. However, such experimental data are difficult to obtain, due to the complexity and expense of deploying tens or hundreds of channel sounder nodes across the wide area a CF-mMIMO system is expected to cover, especially when different configurations and numbers of antennas are to be explored. In this paper, we provide a novel method to obtain channel data for CF-mMIMO systems using a channel sounder based on a drone, also known as a small unmanned aerial vehicle (UAV). Such a method is efficient, flexible, simple, and low-cost, capturing channel data from thousands of different access point (AP) locations within minutes. In addition, we provide sample 3.5 GHz measurement results analyzing deployment strategies for APs and make the data open source, so they may be used for various other studies. To our knowledge, our data are the first large-scale, real-world CF-mMIMO channel data.

I. INTRODUCTION
A. Motivation
In contrast to the traditional cellular system, where the signal-to-interference-plus-noise ratio (SINR) varies significantly depending on where the user equipments (UEs) are located within a cell, the cell-free massive multi-input multi-output (CF-mMIMO) system can provide an almost uniform quality of service to all UEs by abolishing cell boundaries and distributing many base station (BS) antennas across a wide area in the form of access points (APs) [1]-[3]. While many studies analyzed how to scale, optimize, and deploy the CF-mMIMO system in a pragmatic manner, the propagation channels used in the analyses were either a) synthetic channels based on statistical channel models (uncorrelated/correlated Rayleigh/Rician) or b) simulated data based on the behavior of electromagnetic waves within a selected environment (ray tracing), whose accuracy, in particular with respect to angular dispersion, is uncertain. When actual real-world measurements were used, the number of APs was small (see Section I-B).

This material is supported by KDDI Research, Inc. and the National Science Foundation (ECCS-1731694 and ECCS-1923601).
Nevertheless, the measured channel data between tens to hundreds of APs and multiple UEs distributed across a wide area are needed to accurately model real-world channels for CF-mMIMO systems. This paper presents a novel approach to such large-scale channel measurements that drastically reduces efforts and costs while escalating the data volume: using a drone to create a fast-moving virtual array. B. Related Works There are several experimental works that study cooperation of multiple BS antennas distributed across a wide area. 1 One way to measure propagation channels in such systems is to use a "full" real-time system with each antenna having an individual radio-frequency (RF) chain and transceiver, while possibly the APs at different locations are connected through optical fiber back-haul [4], [5]. While such measurement system would be the closest to the actual deployed system, its main weaknesses are a) difficulty in installing the backhaul network, b) complexity in terms of calibration, synchronization, and operation, and c) expense scaling with the large number of antennas, and thus transceivers, considered in the CF-mMIMO systems, quickly becoming prohibitive. An alternative method is to use a single RF chain and RF over fiber modules connected to a RF switch [6], [7]. This "switched" real-time system allows easier calibration and synchronization while many APs are distributed across a wide area using the low-loss fiber cables. Yet, its price still scales with the number of APs and it is still difficult to make measurements in multiple settings due to challenges of installing and managing many cables and antennas. The last method is to use a virtual array [8]- [12], where one antenna (or a co-located antenna array) is used as an AP and another antenna is used as a UE. Such a system operates by fixing the location of the UE antenna and moving the AP antenna to selected AP locations. The UE antenna then moves to the next location and the process repeats. This method is popular, especially in small settings, due to its simplicity and low-cost. However, a key bottleneck is the effort in carrying and installing the transceiver emulating the AP to many different locations. Indeed, all previous measurementbased studies using these three methods involved only a small number of APs. Hence, an efficient, flexible, simple, and lowcost method to measure real-world channels for CF-mMIMO systems is necessary. C. Contributions We describe a new channel measurement method for the CF-mMIMO systems based on a channel sounder flying on a drone, creating a distributed virtual array that can measure channels from thousands of AP locations in a few minutes. 2 To our knowledge, this is the first channel measurement dedicated to CF-mMIMO analysis with such many possible AP locations, and the first time a drone is used to measure such channels. 3 This sounding methodology a) quickly captures channel data from many APs to a UE, b) accesses any outdoor AP location, c) is easy to calibrate and operate using only a single RF chain on each AP/UE end, and d) costs little in terms of both labor and equipment. A sample 3.5 GHz channel measurement campaign is conducted in an outdoor setting at the University of Southern California (USC) campus, and measurement results with insights to realizing a CF-mMIMO system are given. D. Reproducible Research We encourage researchers to use the real-world measurement data for various CF-mMIMO analyses by making the data open source. 
The data and the simulation results of this paper are available at: https://github.com/tomathchoi/drone C F-mMIMO. II. CHANNEL SOUNDING METHODOLOGY The channel sounder we use consists of 1) a transmitter (TX) on a drone with a lightweight software-defined radio (SDR) and a single dipole antenna and 2) a receiver (RX) on the ground with a cylindrical antenna array, digitizer, and storage, which are heavier and bulkier than the TX. 4 We assume the aerial TX serves as an AP and the ground RX serves as a UE 5 because a) APs are usually placed higher than UEs and b) the number of APs is assumed to be larger than the number of UEs (the TX sounder on a drone can move to many locations faster than the RX sounder on the ground). Measurements proceed as follows: in a selected environment, we first decide at which locations to place the APs and UEs. We fix the RX at the location of the first UE and fly the TX along a trajectory that includes locations of all APs, as shown on Fig. 1. The measured channels from a single trajectory thus contain the channels between all APs and a single UE. Because the drone flight can be repeated in an automated fashion, the TX can fly the same trajectory repeatedly. We can thus move the RX to the location of a different UE and repeat the same trajectory to obtain the channel data between the same APs and the different UE for a multi-user CF-mMIMO system analysis (the reproducibility of the trajectory will be discussed in Section IV-A). This measurement method has following advantages: 1) Boundless AP locations: The drone, with its small body, can reach any position at any height conveniently and quickly by using a simple mobile application, as long as the Federal Aviation Administration (FAA) safety rules are followed. 6 Such capability is especially useful when measurements for several different CF-mMIMO systems are conducted in multiple environments located far apart, since cumbersome installations of AP antennas at multiple rooftops and masts are not necessary. 2) Fast measurement speed and a large dataset: Although this method is technically the same as the virtual array method using one AP antenna and one UE antenna as described in Section I-B, thousands of AP locations can be swept within several minutes on a drone. In our measurement, the RX captures channel data every 50ms and the TX moves at 4m/s. With such measurement speed, channel data from 1200 AP locations distributed across 240m range to a UE can be measured per minute. We can either sample some of the spatial points among the whole drone trajectory to place a selected number of APs for a considered CF-mMIMO system, or utilize the ample size of the dataset to conduct data-hungry statistical analyses or machine learning applications. 3) Easy to operate and affordable: Setting up and operating the full or the switched sounder mentioned in Section I-B with many AP/UE antennas positioned at multiple different locations simultaneously is complex, and prone to hardware failures if not properly managed. Additionally, the cost of the RF equipment including the transceivers, clocks, cables, switches, amplifiers, filters, and antennas is significant, scaling linearly with the number of APs/UEs. In contrast, measuring channels using a drone sounder is simple and cost-efficient, requiring only a single RF chain which does not require any synchronization due to the virtual array principle. 
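As a quick sanity check on the numbers above, the following minimal sketch recomputes the sampling density from the stated 50 ms capture interval and 4 m/s drone speed; the function and variable names are illustrative and are not part of the released measurement code.

```python
# Illustrative check of the virtual-array sampling density described above.
# The 50 ms capture interval and 4 m/s speed are the values stated in the text.

def sampling_density(capture_interval_s: float = 0.05, speed_mps: float = 4.0):
    spacing_m = speed_mps * capture_interval_s        # distance between AP samples
    samples_per_min = 60.0 / capture_interval_s       # AP locations captured per minute
    span_per_min_m = samples_per_min * spacing_m      # trajectory length swept per minute
    return spacing_m, samples_per_min, span_per_min_m

if __name__ == "__main__":
    spacing, per_min, span = sampling_density()
    print(f"sample spacing:             {spacing:.2f} m")   # 0.20 m
    print(f"AP locations per minute:    {per_min:.0f}")     # 1200
    print(f"trajectory span per minute: {span:.0f} m")      # 240 m
```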
There are, however, some assumptions and limitations of this measurement method which also must be addressed: 1) Channel coherence in a fast fading channel: While all APs serve multiple UEs simultaneously in an actual CF-mMIMO system, we measure channels between many APs and a single UE over several minutes of flight time. Furthermore, measurements for different UEs may span different days. Because the channels for APs/UEs can lie in different coherence blocks, dynamic channel conditions such as trucks blocking line-of-sight (LOS) path cannot be accounted for. To minimize such effects, the measurements should be performed at times where the number of such mobile blockers/scatterers is minimal. 2) Using channels of multiple UEs together: We conduct multiple flights of the drone on the same route to measure channels for different UE positions. The drone will not be at the same location during each flight (error on order of wavelength or more), so analysis that requires phase coherence between different UE locations is challenging. This can be overcome by expanding the RX to operate multiple UE antennas simultaneously [15], which is easier than operating a larger number of AP antennas simultaneously, since RXs are on the ground. 3) Drone limitations and effects: Because the drone has limited power and weight capacity, the TX payload must not be power-hungry or heavy. Performance measures like the bandwidth, output power, number of antennas, and clock accuracy are traded off for lighter hardware with sub-optimal performances. Also, the drone body and the vibration coming from drone hovering may alter the antenna gain and pattern [16]. 7 III. MEASUREMENT CAMPAIGN Using our drone-based sounder that was outlined above and described in more detail in [14], we conducted a CF-mMIMO measurement campaign at USC University Park Campus (Fig. 2). 8 The selected measurement area has a dimension of about 400m by 200m. The TX (AP) was flown on a loop trajectory at 35m (rooftop AP) and 70m (aerial AP [13]) heights. After the trajectory measurements at two heights at a single UE location, the RX was moved to a new UE location and the TX repeated the same two trajectories. The measurement was 7 The measurement accuracy of our drone sounder is discussed in [14]. 8 The northeast corner of the drone trajectory on Fig. 2 is bent to avoid hitting tall trees in the area. conducted during dawn time to minimize the number of cars and people, and was conducted over three different days (UE1 on the first day, UE2 on the second day, and UE3/UE4 on the third day). The channel transfer function was captured every 50ms, and the drone moved with 4m/s (measurement every 20cm) for little more than 5 minutes, resulting in over 6000 transfer functions covering more than 1200m of AP locations per drone height per UE. UE1 was placed at a parking lot near the southeast corner of the area surrounded by tall buildings at west, north, and east (Fig. 2a), UE2 was near the center of the area away from tall buildings for most parts except for a building on the east (Fig. 2b), and UE3/UE4 were at the northwest corner of the area near the road (Fig. 2c and Fig. 2d). UE3 and UE4 were placed close to one another to observe multi-user performance when the UEs are spaced close together, as well as differences in channel characteristic when the UE is placed outside a building versus under a protruding roof. 
While actual CF-mMIMO systems are likely to have a much larger number of UEs, this initial study focuses on analyzing the simple scenario with a small number of UEs in order to straightforwardly study the feasibility of the unique channel sounding method for various CF-mMIMO system analyses. Extensive measurement campaigns at larger scales with more UEs are presented in [15]. A. Channel Gains for Each UE Channel gain is an important parameter as it describes the quality of the channel between an AP and a UE, and determines other parameters such as signal-to-noise ratio (SNR), SINR, spectral efficiency, etc. We define the channel gain between AP l and UE k at the ith realization as |h l,k,i | 2 . 9 Fig. 2 plots channel gains 10 between the AP locations across the trajectories at two heights and four UE locations, averaged over all realizations. For UE locations that are surrounded by many buildings (UE1, UE3, and UE4), channel gain is higher (in red) if there is a LOS path toward an AP location, but lower (in blue) if there is shadowing from the buildings or if the AP location is far from the UE location. UE2, which has LOS paths to many areas due to being away from tall buildings, generally shows higher channel gains across the trajectories than other UEs. Because having a LOS channel to multiple APs in metropolitan areas with high buildings is difficult, it is best to spread the APs across many locations, in order for the UE to have a high channel gain to at least one AP, as suggested by the CF-mMIMO idea [2]. In contrast, fewer APs or even one BS may be good enough in rural areas without tall buildings. Heights of the APs must also be considered when deploying APs for CF-mMIMO systems. If we compare the 35m measurement and the 70m measurement, the 70m trajectories generally have more regions with high channel gains (see, e.g., the east side of UE2 and the northeast side of UE4) because shadowing between APs and UEs surrounded by buildings can sometimes be eliminated by simply increasing the height of the AP. While the trajectories we observe are both beyond 30m height, flying at lower heights is expected to reduce the regions with high channel gains even further. This qualitatively suggests that the required density of the APs during the deployment will be strongly correlated with the heights of the APs. One important aspect of our sounding method is the reproducibility of the measurement. To observe this, we repeated the same trajectory measurement twice for UE2 over two different days, and compared the channel gains over the same trajectories in the following way: where h l,2,i,t is the channel between AP l and UE 2 at realization i for trial t, G l,2,t [dB] is the sum of channel gains between AP l and UE 2, averaged over all realizations for trial t and expressed in dB, and err RMS is the RMS error between the two G l,2,t [dB] profiles over the whole trajectory containing L spatial points. The resulting RMS error was about 2.63dB. While this variation is not too big, the comparison of the complex channels resulted in much larger variation, which suggests that while the magnitude does not vary as much, the phase will vary when we repeat the same trajectory. Thus, if the exact phase relationship between different UEs is required, simultaneous measurement with multiple UEs is preferable. However, if the UE locations are widely separated and their phase relationships are essentially random, the details are not relevant; this is also the case for the analysis in Sec. IV-C.
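A minimal numerical sketch of this reproducibility metric is given below; it assumes the complex channels of one trial are stored in an array indexed by AP location and realization, which is an illustrative layout rather than the authors' data format.

```python
import numpy as np

def gain_profile_db(h_trial: np.ndarray) -> np.ndarray:
    """Per-AP-location channel gain in dB, averaged over realizations.

    h_trial has shape (L, n_realizations) and holds complex channel samples
    for one trial (hypothetical layout, for illustration only).
    """
    g_lin = np.mean(np.abs(h_trial) ** 2, axis=-1)
    return 10.0 * np.log10(g_lin)

def rms_error_db(h_trial_1: np.ndarray, h_trial_2: np.ndarray) -> float:
    """RMS difference of the two dB gain profiles over all L spatial points."""
    diff = gain_profile_db(h_trial_1) - gain_profile_db(h_trial_2)
    return float(np.sqrt(np.mean(diff ** 2)))
```

Comparing the complex channels sample by sample, rather than the averaged gains, is what exposes the phase variation noted above.

B.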
Single-User Uplink SNR We now look at uplink SNR in a single-user case where only one UE is served by multiple APs at a given channel resource (time/frequency slot). Among all AP positions, we select a given number of APs, 11 and combine the SNR between the UE and all selected APs through maximal-ratio (MR) combining to get the total SNR. We consider the measured channel as the ground-truth channel in our evaluations. While the channel measurement is conducted from the AP (TX) side to the UE side (RX), we can still evaluate the uplink SNR because the channels are reciprocal and independent of whether the operation is uplink or downlink as long as the responses of RF chains measured through the back-to-back calibration is eliminated from the total channel responses of the channel measurement system. We assume the transmit power from each UE (p) is 0dBm and the noise power at each AP during the uplink (σ 2 ul ) is -90dBm. Following [3], if there are L single-antenna APs which are designated for UE k, the resulting uplink SNR for UE k in a cell-free system is: with 10000 different AP combinations chosen from the set of measured locations. We again use synthesized omnidirectional antenna on the UE side, where channel gains are averaged over multiple realizations. The first observation is that as the number of APs increase, the median SNR (shown by the solid lines) increases and the variation of the SNR values decreases (for example, standard deviation reduces from when 8.5dB to 0.4dB for UE1/AP35m) for all considered cases. This is because as we increase the number of APs, the APs would likely cover most parts of the measurement setting, resulting in similar performance even when we are picking the APs at random. We also see that the variance is smaller when the APs are at 70m height than 35m height. This is because the channel gain varies less along the trajectory at 70m height than 35m height (see Fig. 2). Another observation is that UE2 generally has better performance than UE1. While UE1 only has limited regions with high channel gains, UE2 contains many more regions with high channel gains. Hence, it is likely that the selected APs will contain at least one path with high channel gain to UE2, while it is not the case for UE1. UE2 also has less variance in SNR than UE1, since the channel gains are more uniformly high in contrast to UE1 where the variance of channel gains along the trajectory is very high. SNR values of UE3 and UE4, while omitted, provide similar behaviors in variations as UE1, since the distribution of channel gains is similar as shown in Fig. 2. C. Multi-User Uplink SINR Now we look at the case when multiple UEs are served together by all APs within the same time/frequency slot. This time, we use a randomly selected vertically polarized port from the cylindrical array as a UE instead of synthesizing an omnidirectional antenna, meaning UE antenna has a directivity and is pointing at random direction. The parameter we look at is the SINR, as the interference from other UEs may reduce the achievable spectral efficiency. Again, following [3], the SINR is computed as: where M is the number of BS antennas (in our analysis, also the number of single-antenna APs), v k is the M ×1 combining vector for UE k, h k is the M × 1 channel vector between UE k and all APs, (·) H is the Hermitian operation, and I M is the M × M identity matrix. 
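As a hedged illustration of these expressions, the sketch below evaluates the MR-combined single-user uplink SNR and the uplink SINR for a given combining vector, following the standard cell-free formulation the text cites ([3]); the 0 dBm transmit power and −90 dBm noise power match the stated assumptions, while the variable names and matrix layout are illustrative.

```python
import numpy as np

P_UE = 1e-3        # 0 dBm UE transmit power, in watts
SIGMA2_UL = 1e-12  # -90 dBm uplink noise power per AP, in watts

def mr_snr_single_user(h_k: np.ndarray) -> float:
    """Single-user uplink SNR after maximal-ratio combining over L single-antenna APs."""
    return float(P_UE * np.sum(np.abs(h_k) ** 2) / SIGMA2_UL)

def uplink_sinr(H: np.ndarray, v: np.ndarray, k: int) -> float:
    """Uplink SINR of UE k for an M x 1 combining vector v.

    H is an M x K matrix whose k-th column is the channel vector of UE k
    to all M single-antenna APs (illustrative layout).
    """
    M, K = H.shape
    signal = P_UE * np.abs(v.conj() @ H[:, k]) ** 2
    interference = sum(P_UE * np.abs(v.conj() @ H[:, i]) ** 2
                       for i in range(K) if i != k)
    noise = SIGMA2_UL * float(np.linalg.norm(v) ** 2)
    return float(signal / (interference + noise))

# Maximal-ratio combining simply takes v = H[:, k], computable locally at each AP.
```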
The upper bound is achieved by computing v k via a generalized Rayleigh quotient [17], which we call optimum combining: 12 The optimum combining can only be calculated with channel information of all APs, which can be impractical in a realworld CF-mMIMO system as it requires the BS to combine the channel information from all APs and do matrix inversion. In contrast, a simple MR combining can be calculated locally for each AP by: Its downside is the reduced ability to suppress interference for finite array sizes. Fig. 4 shows the comparison of the SINR values when 64 and 256 APs at 35m height are selected among the trajectory to serve four UEs simultaneously under the two combinings. The first observation is that for UE1, the loss from interference is minimal, with optimum and MR combining curves close to one another. This is because UE1 has the highest channel gains toward APs located at the south of the trajectory, while UE2/3/4 all have low channel gains toward the APs at the south, as shown in Fig. 2. Thus, the APs that have high weights for reception of UE1 inherently receive little interference power from UE2/3/4. The second observation is that UE2/3/4 all experience some loss from interference, shown by the distinct difference between optimum and MR combining. The APs toward the east/north/west of the campus can all receive moderate to high power from UE2/3/4, which act as interference for one another. While UE2 (with L = 64) has the least outage probability (for bottom 10% UEs) for the optimum combining due to good channel gains from many APs when the interference can be cancelled, it has worse performance than UE1 for MR combining due to the interference from UE3 and UE4. Finally we observe that UE4 generally has relatively smaller channel gains than UE3, due to being under the protruding roof, except for the northeast corner of the 70m trajectory where the AP is at LOS only from UE4. In contrast to the optimum combining, the simpler MR combining generally shows much worse performance, and the gap is expected to increase with the increasing number of UEs. MR combining, despite its simplicity, might not be sufficient for the multi-user cases for all UE2/3/4 even when the number of APs is 256 as shown on Fig. 4b, so combining with higher performance than MR, yet computationally more efficient than optimum combining must be developed. This is remarkable insofar as theory for concentrated massive MIMO says that for the limit of infinitely large arrays, MR becomes optimum. V. CONCLUSION AND FUTURE WORK In this paper, we describe a novel channel sounding method for CF-mMIMO systems: using a drone to measure channels at any desired AP location. Such method provides a large dataset in a short period of time and costs little in comparison to the full setups with many antennas distributed across a wide area. We demonstrate a sample measurement campaign and analyze parameters such as channel gain, SNR, and SINR, to provide some insights on realizing CF-mMIMO systems, such as height of APs, number of APs, and combinings APs may use. We also distribute the real-world channel data open source for various other wireless system analysis. In the future, we will extend the measurement to a larger number of users and environments, in order to provide more statistical evaluations of the CF-mMIMO systems based on real-world data.
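As a closing illustration of the combining comparison in Section IV-C, the sketch below contrasts the optimum combiner, which maximizes the generalized Rayleigh quotient and requires centralized channel knowledge and a matrix inverse, with the locally computable MR combiner; it reuses the uplink_sinr sketch above and is illustrative rather than the authors' evaluation code.

```python
import numpy as np

def optimum_combiner(H: np.ndarray, k: int, p: float = 1e-3,
                     sigma2: float = 1e-12) -> np.ndarray:
    """Combiner maximizing the generalized Rayleigh quotient for UE k (MMSE-style)."""
    M, K = H.shape
    # interference-plus-noise covariance seen when decoding UE k
    C = sigma2 * np.eye(M, dtype=complex)
    for i in range(K):
        if i != k:
            C += p * np.outer(H[:, i], H[:, i].conj())
    return np.linalg.solve(C, H[:, k])

def mr_combiner(H: np.ndarray, k: int) -> np.ndarray:
    """Maximal-ratio combiner: the channel vector of UE k itself."""
    return H[:, k]
```

For a single UE the two combiners coincide up to a scale factor, which is why a gap between them only appears in the multi-user comparisons above.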
First Report of Circulating MicroRNAs in Tumour Necrosis Factor Receptor-Associated Periodic Syndrome (TRAPS) Tumor necrosis factor-receptor associated periodic syndrome (TRAPS) is a rare autosomal dominant autoinflammatory disorder characterized by recurrent episodes of long-lasting fever and inflammation in different regions of the body, such as the musculo-skeletal system, skin, gastrointestinal tract, serosal membranes and eye. Our aims were to evaluate circulating microRNAs (miRNAs) levels in patients with TRAPS, in comparison to controls without inflammatory diseases, and to correlate their levels with parameters of disease activity and/or disease severity. Expression levels of circulating miRNAs were measured by Agilent microarrays in 29 serum samples from 15 TRAPS patients carrying mutations known to be associated with high disease penetrance and from 8 controls without inflammatory diseases. Differentially expressed and clinically relevant miRNAs were detected using GeneSpring GX software. We identified a 6 miRNAs signature able to discriminate TRAPS from controls. Moreover, 4 miRNAs were differentially expressed between patients treated with the interleukin (IL)-1 receptor antagonist, anakinra, and untreated patients. Of these, miR-92a-3p and miR-150-3p expression was found to be significantly reduced in untreated patients, while their expression levels were similar to controls in samples obtained during anakinra treatment. MiR-92b levels were inversely correlated with the number of fever attacks/year during the 1st year from the index attack of TRAPS, while miR-377-5p levels were positively correlated with serum amyloid A (SAA) circulating levels. Our data suggest that serum miRNA levels show a baseline pattern in TRAPS, and may serve as potential markers of response to therapeutic intervention. Introduction Tumor necrosis factor-receptor associated periodic syndrome (TRAPS) is the most common autosomal dominant autoinflammatory disorder and is caused by mutations in the TNFRSF1A gene (12p13) encoding the 55-kD receptor for tumor necrosis factor-a (TNF-a) (TNFRSF1A) [1]. TRAPS is characterized by recurrent fever attacks lasting typically from 1 to 3 weeks; in addition to fever, common clinical features include mainly periorbital oedema, conjunctivitis, a migratory erythematous skin rash with underlying fasciitis and myalgia, and arthralgia and/or arthritis [2], [3]; serosal inflammation is also common, usually but not only in the form of polyserositis [4][5][6][7][8]. Mean age at disease onset is around 3 years. Nevertheless TRAPS is the most variable and multiform entity amongst autoinflammatory diseases both in terms of age at disease onset and clinical manifestations [2][3][4], [9]. This heterogeneity is probably related to the wide spectrum of known TNFRSF1A mutations [10]. TRAPS mutations can be distinguished into high-penetrance variants and low-penetrance variants: the former are mostly missense substitutions, mainly affecting the highly conserved cysteine residues of the extracellular cysteine-rich domains involved in disulfide bond formation and in the folding of the extracellular portion of TNFRSF1A [2], [3]. These mutations are associated with an earlier disease onset and with a more severe phenotype; in fact patients may experience a higher number of fever episodes and a greater severity of attacks [11]. These subjects also have a greater risk of developing AA amyloidosis, the most troublesome TRAPS complication [2], [3], [12]. 
On the contrary low-penetrance variants seem to be associated with a milder phenotype, a later disease onset and a lower risk of amyloidosis [3][4][5][6][7][8][9], [13]. The identification of TNFRSF1A mutations as the genetic cause of TRAPS raised the possibility that blocking TNF -even though TNF is not increased in most patients -could potentially represent a tailored therapeutic strategy, opening the way to new treatment opportunities for this complex disease [14]. Etanercept has been shown to control flares and inflammation in short case-series of patients of different ages with fully penetrant TRAPS phenotypes and in a prospective, open-label study [15], in which it proved to decrease the frequency of the attacks and the disease severity [16][17][18]. However, loss of response to etanercept over time as well as etanercept-resistant patients have also been observed, suggesting a non-specific action of etanercept in TRAPS [3], [19], [20]. Evidence of deregulated secretion of interleukin (IL)-1b recently supported IL-1 inhibition as a target therapy for TRAPS and IL-1 inhibitors, such as the human IgG1 anti-IL-1b monoclonal antibody canakinumab and the IL-1 receptor antagonist anakinra, have shown to induce a prompt and complete disease remission [21][22][23][24][25]. MicroRNAs (miRNAs) are small, non-coding RNAs (,18-25 nucleotides in length) that regulate gene expression at a posttranscriptional level, by degrading mRNA molecules or blocking their translation [26]. It is now well known that miRNAs can regulate every aspect of cellular activity, from differentiation and proliferation to apoptosis and that, as a single miRNA can target hundreds of mRNAs, aberrant miRNA expression is involved in the pathogenesis of many diseases [27]. These molecules can be detected in serum, and their circulating levels have already been described in inflammatory disorders such as rheumatoid arthritis and systemic lupus erythematosus [28][29][30]. To the best of our knowledge circulating miRNAs in TRAPS, as well as in other monogenic autoinflammatory disorders have never been investigated. The aim of our study was to evaluate circulating miRNAs levels in patients with TRAPS, in comparison to controls without inflammatory diseases, and to correlate their levels with parameters of disease activity and/or disease severity. Samples were also obtained from 8 age-and gender-matched controls without inflammatory diseases (5 males, 3 females) (41 years; range 21-59) attending the outpatient clinic at the Rheumatology Unit of the University of Siena, for fibromyalgia and/or carpal tunnel syndrome and who tested negative for TRAPS mutations. These subjects underwent detailed clinical and laboratory workup, in order to rule out any inflammatory, metabolic, and neoplastic disorders (in particular, they all showed inflammatory markers within normal values). Table 1 summarises the clinical and demographic characteristics of the samples collected from treated and untreated TRAPS patients. All patients and controls were Caucasians of Italian origin. Written informed consent was obtained both from patients and controls. The study protocol was reviewed and approved by the University of Siena Institutional Ethics Committee. 
Methods Data collected in a customized database for each subject with TRAPS included: i) gender; ii) age; iii) age at disease onset; iv) disease duration; v) duration of fever episodes; vi) number of fever episodes/year (at disease onset); vii) number of fever episodes/year (during the last year); viii) amyloidosis (presence/absence); ix) chronic disease course (presence/absence); x) treatment with IL-1 inhibiting drugs (yes/no). Laboratory Assessment Blood was taken by venipuncture from patients during routine follow-up. Laboratory assessment included erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), and SAA. ESR was measured using the Westergren method (mm/hour), and was considered normal if <15 mm/hour for males and <20 mm/hour for females. Serum CRP concentrations were measured using a nephelometric immunoassay (mg/dl); values <0.5 mg/dl were considered normal. Serum amyloid A (SAA) levels were measured by particle-enhanced nephelometry (BNII autoanalyzer, Dade Behring, Marburg, Germany). The reference value is <6.4 mg/L. RNA Extraction Blood was centrifuged after venipuncture, and serum was immediately frozen at −80°C until assayed. Total RNA including microRNAs was extracted from 200 µl of serum using the miRNeasy kit; RNA was eluted in 35 µl, and 10 µl were used for microarray hybridization. miRNA Expression Profiling Thirty-seven RNA samples, from 8 controls (8 samples) and 15 TRAPS patients (29 samples), were hybridized on the Agilent human miRNA microarray (#G4470B, Agilent Technologies, Palo Alto, CA). This chip consists of 15,000 probes, which represent 723 human microRNAs, sourced from the Sanger miRBase database (Release 10.1). Starting from equal volumes, RNA labeling and hybridization were performed in accordance with the manufacturer's instructions, as we previously reported [31]. The Agilent scanner and the Feature Extraction 10.5 software (Agilent Technologies) were used to obtain the microarray raw data. Microarray results were analysed using the GeneSpring GX 12 software (Agilent Technologies, Palo Alto, CA). Data transformation was applied to set all negative raw values at 1.0, followed by normalization on spiked-in control viral miRNAs (ebv-miR-BART8, hcmv-miR-UL112, kshv-miR-K12-2). A filter on low gene expression was used to keep only the probes expressed in at least one sample. Differentially expressed genes were selected to have a 1.5- or 2-fold expression difference between groups and a statistically significant p-value (<0.05), using a moderated t-test with or without Benjamini-Hochberg correction, as indicated in the text. Differentially expressed genes were employed for cluster analysis of samples using the Manhattan correlation as a measure of similarity. Statistical Analysis Correlation analysis between all miRNAs and clinical variables was performed using PAM (Prediction Analysis of Microarrays) software [32]. Statistical analyses were performed using GraphPad Prism 5 software. The two-tailed Mann-Whitney test was used for statistical comparisons between groups. Correlations were calculated using log2-transformed data and Spearman's correlation (two-tailed p-value).
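The selection step above was carried out in GeneSpring GX; purely to illustrate the same criteria (a 1.5- or 2-fold difference together with a Benjamini-Hochberg-corrected p-value below 0.05), a minimal sketch with hypothetical array names is shown below.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    adjusted = p[order] * n / (np.arange(n) + 1)
    adjusted = np.minimum.accumulate(adjusted[::-1])[::-1]  # enforce monotonicity
    q = np.empty_like(adjusted)
    q[order] = np.clip(adjusted, 0.0, 1.0)
    return q

def select_mirnas(log2_cases, log2_controls, pvals, fold=1.5, alpha=0.05):
    """Indices of probes passing the fold-change and corrected p-value filters.

    log2_cases and log2_controls are probes x samples matrices of log2 signals.
    """
    diff = log2_cases.mean(axis=1) - log2_controls.mean(axis=1)
    q = benjamini_hochberg(pvals)
    return np.where((np.abs(diff) >= np.log2(fold)) & (q < alpha))[0]
```

Results Microarray analysis performed on TRAPS patients and controls revealed 172 miRNAs whose expression was detectable in at least 1 sample (Table S1).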
The TRAPS miRNA expression profile was compared with that of controls, revealing a signature of six miRNAs specific for TRAPS patients (corrected p-value <0.05, Table 2), which were all significantly down-regulated in TRAPS patients (miR-17-5p, miR-92a-3p, miR-134, miR-451a, miR-498, miR-572) and were used for cluster analysis (Figure 1A). The exclusion of anakinra-treated samples increased the number of differentially expressed miRNAs by two, namely miR-150-3p and miR-187-5p (Table S2). Figure S1 represents the hierarchical cluster of untreated TRAPS patients and controls based on this gene signature. This signature distinguished between the TRAPS patients and the control group, with only three TRAPS samples (2 samples with T50M and one with S59P) displaying a profile similar to controls. To evaluate possible miRNA modulation in TRAPS subgroups, we examined the effect of treatment with the IL-1 receptor antagonist anakinra on circulating miRNA profiles, by comparing treated versus untreated patients. We found 4 miRNAs whose expression was significantly altered after anakinra treatment (p-value <0.05, Table 3). Figure 1B represents the cluster analysis of TRAPS samples based on the expression of the 4 miRNAs that differentiate treated from untreated patients. We obtained a very good separation between the two groups; indeed only one sample was misclassified. Interestingly, miR-92a-3p and miR-150-3p expression levels were found to be significantly reduced in untreated TRAPS patients, while their expression levels were similar to controls in those patients who were sampled during treatment (Figure 2). Finally, we searched for possible correlations between miRNA signatures and the clinical and laboratory variables reflecting disease severity and/or disease activity. MiR-92b levels were inversely correlated with the number of fever attacks/year during the 1st year from the index attack of TRAPS (Spearman r = −0.5589) (Figure 3A), while miR-377-5p levels were positively correlated with SAA circulating levels (Spearman r = 0.66) (Figure 3B). Discussion In recent years, scientific interest in miRNAs has increased dramatically since they have been shown to play a key role in the regulation of immunity, including innate and adaptive immune responses, development, and differentiation of immune cells [33]. Abnormal miRNA expression has also been reported in rheumatic autoimmune diseases, in which it has been observed that miRNAs can be variably expressed according to disease activity and progression [30], [34], [35]. Altogether, information available to date points to miRNAs as potential biomarkers, which could be exploited to monitor disease activity and response to drugs [35]. MiRNAs themselves are also emerging as potential targets for new therapeutic strategies. The ultimate goal will be the identification of a miRNA target or targets that could be manipulated through specific therapies, aiming at activation or inhibition of specific miRNAs responsible for the development of disease [28]. Lawrie et al. were among the first to demonstrate the presence of circulating miRNAs in cell-free body fluids such as plasma and serum [36]. MiRNAs have been reported as being aberrantly expressed in blood plasma or serum in different types of disorders such as cancer, diabetes mellitus and cardiovascular diseases [36][37][38]; moreover, changes in circulating miRNA profiles have shown potential diagnostic as well as prognostic value in several disorders [39].
The exact mechanism by which miRNAs enter into the serum, and whether they are biologically functional or simply biomarkers, is still unknown. A recent study reported that miRNAs could be selectively packaged into micro-vesicles and actively secreted [40]. To the best of our knowledge, no information concerning circulating miRNAs in TRAPS is available. Our data show that circulating miRNA levels are altered in TRAPS patients versus controls. Six miRNAs were found to be downregulated in TRAPS (miR-134, miR-17-5p, miR-498, miR-451a, miR-572, miR-92a-3p) in comparison with controls. Moreover, the expression of 4 additional miRNAs (miR-150-3p, miR-92a-3p, miR-22-3p, miR-30d-5p) was significantly altered in untreated patients versus subjects treated with anakinra. It is worth noting that miR-92a-3p and miR-150-3p levels, which are reduced in untreated TRAPS patients compared with controls, are restored to levels comparable with controls during anakinra treatment. In addition, the expression of other specific circulating miRNAs significantly correlated with both the number of fever attacks/year at disease onset (miR-92b levels were significantly reduced), suggesting a relationship with disease severity, and with enhanced SAA levels (miR-377-5p levels were significantly enhanced). Among the miRNAs which we found to be altered in TRAPS, miR-150, miR-92, and miR-17 have been shown to be key regulators of specific lineage choices in the adaptive immune system, and serum miR-150 levels have recently been negatively correlated with plasma TNF-α levels in patients with sepsis [41]. Their role in TRAPS still needs to be elucidated. A growing number of studies have shown that circulating miRNA expression profiling is of increasing importance as a useful diagnostic and prognostic tool. In patients with a TRAPS-like disease, characterized by clinical features consistent with TRAPS but no mutation in the TNFRSF1A gene [42], circulating miRNA expression profiling could potentially become useful as a diagnostic signature, opening the way to a better understanding of the molecular mechanisms underlying this complex phenotype. Moreover, our data suggest that serum miRNA levels could be taken into future consideration for monitoring the response to treatment in TRAPS patients. Looking a bit further into the future, better knowledge of the role of miRNAs in TRAPS could also potentially provide novel treatment targets [40], [43][44][45][46]. We acknowledge the limitations of our small sample size, and therefore a further study involving larger cohorts of patients and the use of different technical approaches is being planned. Conclusions Although further studies are mandatory, serum miRNA levels show a baseline pattern in TRAPS and thus deserve further studies to exploit their potential role as markers of disease and response to therapeutic intervention. Figure S1 Hierarchical cluster representation of miRNAs modulated in untreated TRAPS patients (blue) vs. controls (red). (TIF) Table S1 Normalized and log2 transformed expression levels of 172 miRNAs whose expression was detectable in at least 1 sample. (XLSX)
Decay of Solutions to the Maxwell Equations on Schwarzschild-de Sitter Spacetimes In this work, we consider solutions of the Maxwell equations on the Schwarzschild-de Sitter family of black hole spacetimes. We prove that, in the static region bounded by black hole and cosmological horizons, solutions of the Maxwell equations decay to stationary Coulomb solutions at a super-polynomial rate, with decay measured according to ingoing and outgoing null coordinates. Our method employs a differential transformation of Maxwell tensor components to obtain higher-order quantities satisfying a Fackerell-Ipser equation, in the style of Chandrasekhar and the more recent work of Pasqualotto. The analysis of the Fackerell-Ipser equation is accomplished by means of the vector field method, with decay estimates for the higher-order quantities leading to decay estimates for components of the Maxwell tensor. Introduction The Schwarzschild-de Sitter family, parametrized by mass M > 0, consists of those spherically symmetric spacetimes solving the Einstein equations with positive cosmological constant Λ: Such spacetimes display a mixture of geometric features: far from the black hole, they resemble the de Sitter spacetime with cosmological constant Λ; close to the black hole, they take on the characteristics of the Schwarzschild family. The stability of the Schwarzschild-de Sitter spacetimes as solutions of the Einstein equations (1) was resolved in the recent breakthrough of Hintz and Vasy [16], where the authors prove a more general result on stability of the small angular momenta Kerr-de Sitter spacetimes. The authors' result is a culmination of a great deal of work on the analysis of hyperbolic equations on Kerr de-Sitter spacetimes within the framework of the Melrose b-calculus; see [24,27,10,15,12,14]. In particular, the aforementioned authors prove exponential decay for solutions of the Maxwell equations on Kerr de-Sitter spacetimes with small angular momenta in [13]. The author would like to thank Pei-Ken Hung, Karsten Gimre, Mu-Tao Wang, and Shing-Tung Yau for for their interest in this work. Especially, he thanks Pei-Ken Hung for many stimulating conversations. There is a comparative dearth of analysis utilizing the vector-field multiplier method, with notable results of Dafermos and Rodnianski on the scalar wave [7] and of Schlue on the cosmological region [21,22]. The present paper adds to this literature, providing boundedness and decay estimates for solutions of the Maxwell equations using red-shift and Morawetz multipliers, along with the static multiplier. In addition, this work serves as a "warm-up" exercise, towards a demonstration of the linear stability of the Schwarzschild-de Sitter family by means of vector-field methods. Schwarzschild-de Sitter Spacetimes Regarding the cosmological constant Λ > 0 as fixed, the Schwarzschild-de Sitter spacetimes comprise a one-parameter family of solutions (M, g M,Λ ) to the Einstein equations Ric(g) = Λg. (2) The family is parametrized by mass M , which we assume to satisfy the sub-extremal condition These spacetimes have both black hole and cosmological regions, bounded by respective horizons H and H. Our primary interest is the region between the two, wherein the spacetimes are static and spherically symmetric. The staticity and spherical symmetry are encoded by the static Killing field, denoted T , and the angular Killing fields, denoted Ω i , with i = 1, 2, 3. We collect the angular Killing fields in the set Ω := {Ω i |i = 1, 2, 3}. 
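For orientation, recall the standard equivalence behind (2): in four spacetime dimensions, the vacuum Einstein equations with cosmological constant reduce to the Einstein condition Ric(g) = Λg, since
\[
0 = R_{ab} - \tfrac{1}{2}R\,g_{ab} + \Lambda g_{ab}
\quad\Longrightarrow\quad
R = 4\Lambda
\quad\Longrightarrow\quad
R_{ab} = \bigl(\tfrac{1}{2}R - \Lambda\bigr)\,g_{ab} = \Lambda g_{ab},
\]
where the middle implication follows by taking the trace.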
For further details on the Schwarzschild-de Sitter family, we refer the reader to [4,11]. 2.1. Coordinate Systems. Our results concern the static region, up to and including the future event horizon and the future cosmological horizon. In the course of our analysis, various coordinate systems will prove useful; we enumerate them below. In the coordinates (t, r, θ, φ), this region has geometry encoded by with / g AB dx A dx B := r 2 dσ S 2 = r 2 dθ 2 + sin 2 θdφ 2 , where dσ S 2 denotes the round metric on the unit sphere. Note that / g AB dx A dx B is the induced metric on the sphere of symmetry S 2 (t, r). As an additional piece of notation, we use / ǫ AB to denote the associated area form on the sphere of symmetry S 2 (t, r). This static chart is valid for radii 0 < r b < r < r c , with r b and r c the black hole and cosmological radii appearing as roots of the equation 1 − µ = 0. Note that the equation has a remaining negative root, which we denote by r − . Concretely, we have [18] where ξ is specified by the relation In the sub-extremal regime (3), the radii r b and r c satisfy Letting with similar definitions relating to r c and r − , we define the Regge-Wheeler coordinate r * by with C an arbitrary constant. For convenience in the subsequent analysis, we choose this normalization constant such that r * = 0 on the photon sphere r = 3M . In the Regge-Wheeler coordinates, the metric takes on the form Using the Regge-Wheeler coordinates (t, r * ), we define the inward and outward null coordinates (u, v) by in which the metric has the form The pair (u, v), referred to as Eddington-Finkelstein coordinates, break down at either of the horizons. However, there are well-known, though rather cumbersome, rescalings of u and v which extend regularly to each of the horizons; see [4,11]. For a given pair of null coordinates (ũ,ṽ), we define the null hypersurfaces: Throughout this work, we use the following index notation: lowercase Latin characters a, b = 0, 1, 2, 3 for spacetime indices, and uppercase Latin characters A, B = 2, 3 for spherical indices. 2.2. Trapped Null Geodesics. In this subsection, we recount the wellknown phenomenon of null geodesic trapping at the photon sphere. Such trapping manifests as a "loss of derivatives" in the integrated decay estimates appearing later in this work, as first described by Ralston [20]. Generally, given a Killing field K a and a geodesic γ on a pseudo-Riemannian manifold with metric g ab , application of the Killing field yields a constant of motion C = g ab K aγb along the geodesic. Specializing to the Schwarzschild-de Sitter setting, we have constants of motion e = g ab T aγb , Written with respect to the static chart, we have along with the constants of motion e, l 1 , and the composite q = l 2 2 + l 2 Substituting these three constants, the null geodesic condition gives rise to a simple radial equation r 4 (γ r ) 2 = r 4 e 2 − r 2 (1 − µ)(q + l 2 1 ) =: R(r, e, q, l 1 ). 2.3. Sphere Bundles. Throughout this work, we consider quantities which are scalars and co-vectors on the spheres of symmetry. The associated sphere bundles, respectively referred to as L(0) and L(−1), come equipped with projected covariant derivative operators / ∇, defined for scalars by ordinary differentiation and for co-vectors by Given a spherical co-vector ω, i.e. a section of L(−1), we define divergence and curl operators by and the tensorial spherical Laplacian by extending the scalar spherical Laplacian. 
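For the reader's convenience, we also record the conventional expressions consistent with the definitions above; these are the standard Schwarzschild-de Sitter formulas rather than reproductions of the numbered displays:
\[
1-\mu(r) = 1 - \frac{2M}{r} - \frac{\Lambda r^{2}}{3},
\qquad
g = -(1-\mu)\,dt^{2} + (1-\mu)^{-1}\,dr^{2} + r^{2}\,d\sigma_{S^{2}},
\]
\[
\frac{dr_{*}}{dr} = \frac{1}{1-\mu},
\qquad
u = t - r_{*}, \quad v = t + r_{*},
\qquad
g = -(1-\mu)\,du\,dv + r^{2}\,d\sigma_{S^{2}},
\]
\[
\operatorname{div}\omega = \slashed{\nabla}^{A}\omega_{A},
\qquad
\operatorname{curl}\omega = \slashed{\epsilon}^{AB}\slashed{\nabla}_{A}\omega_{B}
\qquad \text{for a spherical one-form } \omega_{A}.
\]
In the sub-extremal range 0 < 9ΛM² < 1, the equation 1 − µ = 0 has the two positive roots r_b < r_c and one negative root r_−, as used above.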
In addition to the spherical operators above, we shall make use of spacetime d'Alembertian operators, defined by with s = 0, 1 and the appropriate covariant derivative operator. Note that / L(0) = is the standard d'Alembertian operator on the Schwarzschild-de Sitter spacetime. The Maxwell Equations on Schwarzschild-de Sitter An alternating two-form F ∈ Λ 2 (M) is a solution of the Maxwell equations on (M, g) if F satisfies or equivalently, We refer to such solutions as Maxwell tensors on (M, g). The primary purpose of this section is to study the structure of the Maxwell equations, expressed in a double null frame. 3.1. Null Decomposition of the Maxwell Equations. Using the null coordinates (12), we define null directions spanning the normal bundle of the spheres of symmetry S 2 (t, r) = S 2 (u, v). We complete our null frame by choosing orthonormal basis vectors e A , A = 1, 2, for each of the spheres of symmetry. Note that the pair (L, L) is irregular at either of the horizons; rescaling, we define the pairs e 3 := L, regular across H + , andē regular across H + . With the null frame {L, L, e 1 , e 2 } in hand, we decompose the Maxwell tensor into the components with α A and α A regarded as one-forms on the spheres of symmetry, i.e. sections of L(−1), and ρ and σ regarded as functions on the same. Proposition 1. Expressed in terms of the null decomposition above, the Maxwell equations (19) take the form For a thorough derivation of the equations above, we refer the reader to Pasqualotto [19]. 3.2. Coulomb Solutions. The Maxwell equations (19) possess well-known stationary solutions, referred to as Coulomb solutions. Concretely, given real constants B and E, two-tensors of the form form a two-parameter family of stationary solutions, referred to as Coulomb solutions, to (19). In terms of the null decomposition above, Coulomb solutions take the form The main theorem of this work concerns decay of a general solution of the Maxwell equations, specified by appropriate initial data, to a Coulomb solution. Equivalently, utilizing initial data to identify the asymptotic Coulomb solution, we can reformulate our result in terms of decay of normalized solutions to zero. We describe this procedure below. Integrating (27) and (29) over the unit sphere, we find the relations with similar relations for ρ following from (28) and (30). That is, we have conservation of the integral quantities, for general solutions to the Maxwell equations. This phenomenon is often referred to as conservation of charge; see [2] for an excellent discussion. The conservation of charge above allows us to identify the asymptotic Coulomb solution, owing to the preservation of its parameters B and E. Given initial data on C u 0 ,v 0 (14) for a Maxwell tensor F , we identify its Coulomb parameters by integrating Indeed, integration over any sphere of symmetry lying in C u 0 ,v 0 yields the parameters. We denote the associated Coulomb solution by F stationary , with null decomposition where we have utilized conservation of charge. Subtracting the associated initial data, we can form a normalized initial data set, with associated normalized solution F − F stationary , such that decay of the normalized solution to zero is equivalent to decay of the general solution F to the Coulomb solution F stationary . We remark that this normalization procedure is possible for a variety of initial data specifications, beyond the null hypersurfaces C u 0 ,v 0 . 3.3. The Spin ±1 Teukolsky Equations. 
The decoupling of α A and α A in the null decomposed Maxwell system of Proposition 1 was established by Teukolsky [23] on vacuum, Petrov type-D backgrounds by means of certain algebraic and differential manipulations. The procedure does not use the vacuum assumption in any meaningful way, so there is a straightforward extension to our setting: The Maxwell components α A and α A satisfy the spin ±1 Teukolsky equations Proof. We derive the decoupled equation for α A , that for α A being analogous. At the outset, we note that with similar statements holding for L. Multiplying (25) by r and applying the operator / ∇ L to the result, we deduce THE MAXWELL EQUATIONS ON SCHWARZSCHILD-DE SITTER SPACETIMES 9 Rewriting the last term with (27) and (28), we find Application of (25) and the relation which holds for spherical one-forms ω, yield the spin -1 Teukolsky equation for α: 3.4. The Transformation Theory. Lacking Lagrangian structure, the spin ±1 Teukolsky equations prove difficult to estimate using standard vector field multiplier methods. However, certain higher order quantities, obtained from α A and α A by differential transformations, satisfy equations equipped with such structure and, moreover, having favorable analytic content. As the starting point for controlling the Maxwell tensor, these quantities are essential to our analysis. Before proceeding, we remark that, as with the decoupling of the previous subsection, such a transformation theory is well-known on vacuum, Petrov type-D backgrounds (see Chandrasekhar [5], Wald [25], and the later work of Aksteiner-Bäckdahl [1]). The extension to non-vacuum settings, as in the present case and e.g. [26,3], appears to be less developed. We define P A and P A , each a section of L(−1), in terms of α A and α A as follows: Observe that both P A and P A are regular at the horizons. Lemma 3. The quantities P A and P A satisfy the Fackerell-Ipser equation Proof. We present the argument for P A , that for P A being analogous. Note that the corresponding Teukolsky equation (36) can be rewritten as Multiplying the equation by r 2 1−µ and applying the operator / ∇ L , we find we rewrite the expression in terms of φ A : The above is the form seen in Dafermos-Holzegel-Rodnianski [6] and Pasqualotto [19]. With the rescaling rP A = φ A , the quantity P A is found to satisfy the Fackerell-Ipser equation (40) as claimed. Analysis of the Fackerell-Ipser Equation In this section, we analyze co-vectors Ψ A satisfying the Fackerell-Ipser equation (40). In particular, the estimates derived in this section hold for P A and P A . The analysis is largely based upon the ideas and notation of [7,17] with ℓ ≥ 1 and |m| ≤ ℓ. The identity yields the Poincaré inequality Here we have used the notation for the angular gradient. Stress-Energy Formalism. Associated with our Fackerell-Ipser equation is the stress-energy tensor where we emphasize that Applying a vector-field multiplier X b , we define the energy current and the density As well, we will have occasion to use the weighted energy current with weighted density for a suitable scalar weight function ω X . The current J X a [Ψ] and density K X [Ψ] serve as a convenient notation to express the spacetime Stokes' theorem integrated over a spacetime region D with boundary ∂D. 
Likewise, the weighted quantities satisfy The stress-energy tensor T ab [Ψ] defined above has non-trivial divergence where we note that the commutator [ / ∇ a , / ∇ b ] vanishes when contracted with a multiplier invariant under the angular Killing fields in Ω. In particular, all such multipliers considered in the subsequent analysis have this property. We remark that, owing to the positivity of the potential term V , the stress-energy tensor satisfies a positive energy condition. Namely, given future-directed, timelike vector fields X 1 and X 2 , we have 4.3. Additional Notation. Our estimates are expressed in terms of the null hypersurfaces C τ,τ (14). For simplicity, we denote the hypersurfaces and the spacetime region where 0 ≤ τ ′ < τ . Expressed in the Eddington-Finkelstein coordinates, the relevant volume forms are written dV ol C τ,τ = (1 − µ)r 2 sin θdudθdφ, In addition, we define the boundary regions We remark that the specifications above are made for the sake of convenience; the subsequent decay estimates can be expressed with respect to a broad class of foliations. Throughout the remainder of this work, we use c(M, Λ) and C(M, Λ) to denote small and large positive constants, respectively, each depending upon the parameters M and Λ. The Killing Multiplier T . Applying the static Killing field T as a multiplier, we observe that the density K T [Ψ] vanishes in consequence of V being radial, such that (∇ a T ab [Ψ])T b vanishes (52), and T being Killing, such that π ab T = ∇ (a T b) vanishes. Integrating over R(τ ′ , τ ), we obtain the identity (50) Defining the T -energy by the identity above yields the estimates for all 0 ≤ τ ′ < τ . THE MAXWELL EQUATIONS ON SCHWARZSCHILD-DE SITTER SPACETIMES 13 With our energy condition (53), we note that the T -energy above is nonnegative, degenerating at each of the horizons. 4.5. The Red-Shift Multiplier N . The static Killing field T degenerates at each horizon, becoming null; consequently, the T -energy defined above is degenerate and unsuited for proving boundedness and decay results up to and including the horizons. To circumvent this, we utilize a red-shift multiplier, of the sort introduced in [8]. We recall the details below. We work on the event horizon H + , away from the bifurcation sphere. Letting Y be a null vector transversal to the Killing field T , itself tangential on H + , we specify Y by Taking e A as orthonormal basis vectors, tangential to the spheres of symmetry, we calculate in the normalized null frame {T, Y, e 1 , e 2 }: with κ(M, Λ) being the positive surface gravity on Schwarzschild-de Sitter spacetime, and with h AB being the second fundamental form of the round sphere of radius r = r b (recall that we work on the event horizon) with respect to Y . We compute As the potential V is increasing near the event horizon, the first term above is non-negative. Expanding the second term with (60), we find With positive surface gravity κ and a choice of large σ, we deduce Together with the positivity of the first density term, we have the estimate on the event horizon H + . A similar argument can be made on the cosmological horizon H + using the transversal field Y and T . Extending to the static region, we construct a strictly timelike red-shift multiplier, identically N = T + Y on H + and N = T + Y on H + , satisfying the estimates for radii r b < r 1 < r 2 < R 2 < R 1 < r c . We choose the radii such that expressed in the Regge-Wheeler coordinate. 
That is, the radii are well separated from one another, and moreover, the red-shift vector N is identically T in a region about the photon sphere (3M ) * = 0. As an immediate application, we prove uniform boundedness of the nondegenerate N -energy for solutions to the Fackerell-Ipser equation. Here, the N -energy is defined by Theorem 4. Suppose Ψ is a solution of (40), specified by smooth initial data on the hypersurface Σ 0 . Then for τ > τ ′ ≥ 0, Ψ satisfies the uniform energy estimate Proof. The proof proceeds just as in [9]. Given τ ′ ≤τ ≤ τ , integration over the spacetime region R(τ , τ ) yields as the horizon terms have good sign. Utilizing monotonicity of the T -energy (59) and the red-shift estimates (63), we deduce the integral inequality which implies the uniform bound 4.6. The Morawetz Multiplier X. Let X = f (r)∂ r * , with f a radial function, and let ω X be a scalar weight function. Using the notation ( ) ′ to denote differentiation by the Regge-Wheeler coordinate r * , we calculate the unweighted density to be where we have used the identity Inserting the weight function we calculate the weighted density (49) Through the application of suitable multipliers of the form above, we deduce the following non-degenerate integrated decay estimate: Theorem 5. Suppose Ψ is a solution of (40), specified by smooth initial data on the hypersurface Σ 0 . Then for τ ≥ τ ′ ≥ 0, Ψ satisfies the non-degenerate integrated decay estimate Proof. The multiplier X 1 = f 1 (r)∂ r * , with provides the primary density estimate giving coercive control away from the horizons and away from the photon sphere r = 3M . With standard modifications by the multiplier X 2 = r 2 ∂ r * [7], allowing for control of t-derivatives, and the red-shift multiplier N , allowing for control near the horizons, we deduce the density estimates where c 1 (M, Λ) and c 2 (M, Λ) are suitably chosen positive constants. Turning to the boundary terms, we note that those terms formed from the weighted X 1 -energy are bounded by the T -energy: where we have also utilized (59). Similar estimates hold for the unweighted X 2 -energy. Using these boundary estimates and the degenerate density estimate (73), we can estimate the horizon terms formed from the red-shift: Taken together with Theorem 4, these boundary estimates and the nondegenerate density estimate (74) lead to an integrated decay estimate for the N -energy: 4.7. Decay Estimates. Using Theorems 4 and 5, we conclude this section with decay estimates on the solution Ψ. Letting λ > 0 be a positive parameter and integrating over [τ, τ + λC 2 ], the mean value theorem yields aτ on the interval such that where we have used the integral estimate. Applying as well the pointwise estimate, we find With the choice of parameter λ = 2C 1 , we have That is, given any initial point τ 0 = τ ≥ 0, we can produce a sequence τ k = τ 0 + 2kC 1 C 2 exhibiting the exponential decay f (τ k ) ≤ 1 2 k f (τ 0 ), with decay parameter inversely proportional to the product C 1 C 2 . In light of the estimates above, this sequential result is easily extended to arbitrary τ , establishing the theorem. The degenerate exponential decay estimate above easily leads to nondegenerate super-polynomial decay of general solutions Ψ: Theorem 7. Suppose Ψ is a solution of (40), specified by smooth initial data on the hypersurface Σ 0 . 
Then for τ ≥ 0, Ψ satisfies the energy decay estimate Note that the initial energy involves commutation of Ψ with the angular Killing fields of Ω, these commutations being described by multi-indices (q) of length m or less. Proof. Energy decay follows from an application of the comparison satisfied for τ ≥ 0, to the result of Theorem 6, and from the L 2 -summability of the co-vector harmonics over the spheres of symmetry in Σ τ . We remark that pointwise decay follows from further commutation with the angular Killing fields of Ω and application of standard Sobolev embedding to the resulting energy estimates. Decay of the Maxwell Components Using the estimates on P A and P A from the previous section, we conclude this work by proving decay of the components of F ab . 5.1. Decay of α A and α A . Basic estimates for α A and α A are obtained by exploiting the transformation formulae used in defining P A (38) and P A (39), with higher order statements obtained by means of commutation. Combining uniform boundedness and integrated decay estimates, we obtain degenerate exponential and non-degenerate super-polynomial decay of α A and α A in much the same way as in Theorems 6 and 7. Our estimates split naturally into three radial regions relating to the critical radius r µ , where µ is minimized; concretely, We let r 3 := min{3M, r µ /2}, and use the shorthand noting that µ is strictly decreasing on I and strictly increasing on III. We present the analysis for α A , that for α A being analogous. As α fails to be regular at the event horizon, we introduce the normalized quantitỹ Throughout, we will make use of the N -energy for α, specified by We remind the reader of the volume form conventions (56). With the choices v = v(u, R 3 ) for τ ′ ≤ u ≤ τ and v = τ otherwise, integration of (90) and (91) in u yields the integrated decay estimate Likewise, taking v = τ and integrating in u, we obtain Finally, with the choices u = τ and v = v(τ, R 3 ), we have where we have used the one-dimensional Sobolev inequality and (59). 5.1.3. Region III. Integrating the differential inequality with respect to a pair (u, v) in III, we find where we have applied (90) and (91). Indeed, the estimate above has the same form as these, and similar arguments yield the integrated decay estimate and the energy estimate analogous to (92) and (94). Derivative Estimates. Combining the estimates of the previous three subsections, we have deduced the integrated decay estimate and the energy estimate It remains to obtain analogous estimates for higher derivatives of α. Using (39), we calculate Written with respect to the regular pair (23), the above identity leads to Integrating and using the results on α from the previous subsections, we deduce the integrated decay estimate and the energy estimate Next we estimate the L derivatives of α. Again, owing to issues of regularity, we use the pair (22) near the event horizon. Estimates build upon those of α from the previous subsection, noting the commutation relation [ / ∇ L , / ∇ L ] = 0. In region I, the differential inequality integrates to give the analogs of (86) and (87). Likewise, a differential inequality analogous to (89) gives analogs of (92) and (93) on II. Summarizing, we have the integrated decay estimate and the energy estimate Finally, we estimate the angular gradient of α. Again, the estimates build upon those of α, noting the commutation relation [r / ∇ A , / ∇ L ] = 0. For example, in region I the differential inequality leads to the analog of (86) and (87). 
Significantly, the estimates in regions II and III display a loss of derivative, in applying the one-dimensional Sobolev inequality (see (94) and (98)). Overall, we find the integrated decay estimate and the energy estimate and the energy estimate Degenerate exponential decay, analogous to Theorem 6, proceeds in the following manner. Taking a spherical harmonic decomposition and letting Exponential decay of h, hence of g, follows from the same argument in Theorem 6. Note that the exponential decay parameter now degenerates as ℓ 4 , rather than ℓ 2 . As a consequence, the non-degenerate super-polynomial decay estimate requires twice as much regularity on the initial data. The SO(3)-invariance of the underlying equations leads to higher order versions of the energy estimate above; together with the Sobolev embedding theorems, these higher order estimates lead to pointwise control of α. Analogous results hold for α. Introducing the normalized quantitỹ we have the energy decay again allowing for a pointwise estimate on α through commutation and application of Sobolev embedding. 5.2. Decay of ρ and σ. The definitions of P A (38) and P A (39) and the Maxwell equations (25) and (26) yield the relations We present the decay estimates for ρ, those for σ being analogous. Considered on the null hypersurface C τ,τ (14), the Poincaré inequality on spheres of symmetry yields Application of the one-dimensional Sobolev inequality on the hypersurface where we have appealed to Theorem 7. A similar result is available on the null hypersurface C τ,τ (14). Taken together, the two yield the super-polynomial decay estimate 5.3. Summary of Results. Collecting the decay estimates on the Maxwell components and on the higher order quantities P and P , we summarize our results in the following theorem: (3). Further, suppose that F is specified by smooth initial data on the hypersurface Σ 0 (54). Then the derived quantities P (38) and P (39) satisfy the Fackerell-Ipser equation (40) and the super-polynomial decay estimates of Theorem 7. In addition, the Maxwell components (24) satisfy the super-polynomial decay estimates
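Since most of the display equations in Section 4.7 did not survive extraction, it may help to record the decay mechanism there in a self-contained, schematic form. The following is a minimal sketch in which f(τ) stands for a generic energy and C_1, C_2 are the constants from the uniform boundedness and integrated decay estimates; the weights and horizon degenerations carried by the actual T- and N-energies are suppressed, so this is an illustration of the argument rather than a verbatim reconstruction.

```latex
% Schematic: uniform boundedness + integrated decay  =>  exponential decay.
% Assumptions on a generic energy f(\tau):
%   (i)  f(\tau) \le C_1\, f(\tau')                        for all \tau \ge \tau' \ge 0,
%   (ii) \int_{\tau'}^{\infty} f(s)\, ds \le C_2\, f(\tau') for all \tau' \ge 0.
\begin{align*}
  &\int_{\tau}^{\tau+\lambda C_2} f(s)\, ds \le C_2\, f(\tau)
   \;\;\Longrightarrow\;\;
   f(\bar\tau) \le \tfrac{1}{\lambda}\, f(\tau)
   \;\text{ for some } \bar\tau \in [\tau,\, \tau+\lambda C_2]
   \quad\text{(mean value theorem)},\\
  &f(\tau+\lambda C_2) \le C_1\, f(\bar\tau) \le \tfrac{C_1}{\lambda}\, f(\tau),
   \qquad\text{so with } \lambda = 2C_1:\quad
   f(\tau + 2C_1 C_2) \le \tfrac{1}{2}\, f(\tau),\\
  &\text{hence } f(\tau_k) \le 2^{-k} f(\tau_0)
   \quad\text{along the sequence } \tau_k := \tau_0 + 2k\, C_1 C_2 .
\end{align*}
```

As in the text, decay along the sequence τ_k then extends to arbitrary τ, with decay parameter inversely proportional to the product C_1 C_2.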
Constructing Reference Metrics on Multicube Representations of Arbitrary Manifolds Reference metrics are used to define the differential structure on multicube representations of manifolds, i.e., they provide a simple and practical way to define what it means globally for tensor fields and their derivatives to be continuous. This paper introduces a general procedure for constructing reference metrics automatically on multicube representations of manifolds with arbitrary topologies. The method is tested here by constructing reference metrics for compact, orientable two-dimensional manifolds with genera between zero and five. These metrics are shown to satisfy the Gauss-Bonnet identity numerically to the level of truncation error (which converges toward zero as the numerical resolution is increased). These reference metrics can be made smoother and more uniform by evolving them with Ricci flow. This smoothing procedure is tested on the two-dimensional reference metrics constructed here. These smoothing evolutions (using volume-normalized Ricci flow with DeTurck gauge fixing) are all shown to produce reference metrics with constant scalar curvatures (at the level of numerical truncation error). Introduction The multicube representation of a manifold Σ consists of a collection of non-intersecting ndimensional cubic regions B A ⊂ R n for A = 1, 2, ..., N R , together with a set of one-to-one invertible maps Ψ Aα Bβ that determine how the boundaries of these regions are to be connected together. The maps ∂ α B A = Ψ Aα Bβ (∂ β B B ) define these connections by identifying points on the boundary face ∂ β B B of region B B with points on the boundary face ∂ α B A of region B A (cf. Ref. [1] and Appendix B). It is convenient to choose all these cubic regions in R n to have the same coordinate size L, the same orientation, and to locate them so that regions intersect (if at all) in R n only at faces that are identified by the Ψ Aα Bβ maps. Since the regions do not overlap, the global Cartesian coordinates of R n can be used to identify points in Σ. Tensor fields on Σ can be represented by their components in the tensor bases associated with these global Cartesian coordinates. The Cartesian components of smooth tensor fields on a multicube manifold are smooth functions of the global Cartesian coordinates within each region B A , but these components may not be smooth (or even continuous) across the interface boundaries ∂ α B A between regions. Smooth tensor fields must instead satisfy more complicated interface continuity conditions defined by certain Jacobians, J Aαi Bβ j , that determine how vectors v i and covectors w i transform across interface boundaries: v i A = J Aαi Bβ j v j B and w Ai = J * Bβ j Aαi w B j . As discussed in Ref. [1], the needed Jacobians are easy to construct given a smooth, positive-definite reference metricg i j on Σ. A smooth reference metric also makes it possible to define what it means for tensor fields to be C 1 , i.e., to have continous derivatives across interface boundaries. Tensors are C 1 if their covariant gradients (defined with respect to the smooth connection determined by the reference metric) are continuous. At interface boundaries, the continuity of these gradients (which are themselves tensors) is defined by the Jacobians J Aαi Bβ j in the same way it is defined for any tensor field. 
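The combinatorial input just described, namely the cubic regions B A of common coordinate size L and the face-identification maps Ψ Aα Bβ (each of which, as discussed in Appendix A, carries a rotation matrix C Aαa Bβb ), is simple to encode directly. The sketch below is purely illustrative: all class and field names are hypothetical, it is not the SpEC data structure, and the sign and index conventions are assumptions.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Region:
    """One multicube region B_A: an axis-aligned cube of coordinate size L in R^n."""
    label: int
    center: np.ndarray   # global Cartesian coordinates of the region's center
    size: float          # common coordinate size L

@dataclass
class FaceMap:
    """One identification map Psi relating face beta of region B to face alpha of region A.

    Faces are labeled by signed coordinate axes (+1 = +x face, -1 = -x face, +2 = +y, ...).
    'rotation' plays the role of the rotation matrix C^{A alpha a}_{B beta b} used to
    define the identification; the conventions used here are illustrative only.
    """
    region_a: int
    face_alpha: int
    region_b: int
    face_beta: int
    rotation: np.ndarray  # n x n orthogonal matrix

@dataclass
class MulticubeStructure:
    """A collection of regions plus the complete list of face identifications."""
    regions: list = field(default_factory=list)
    face_maps: list = field(default_factory=list)

    def neighbor(self, region_label, face):
        """Return (neighbor_label, neighbor_face, rotation) across the given face, if any."""
        for m in self.face_maps:
            if (m.region_a, m.face_alpha) == (region_label, face):
                return m.region_b, m.face_beta, m.rotation
            if (m.region_b, m.face_beta) == (region_label, face):
                return m.region_a, m.face_alpha, m.rotation.T  # orthogonal: inverse = transpose
        return None
```

A consistent multicube structure must identify every interior face exactly once and with compatible orientations; the sketch above does not enforce those conditions.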
A reference metric is therefore an extremely useful (if not essential) tool for defining and enforcing continuity of tensor fields and their derivatives on multicube representations of manifolds. Unfortunately there is (at present) no straightforward way to construct these reference metrics on manifolds with arbitrary topologies. The examples given to date in the literature have been limited to manifolds with simple topologies where explicit formulas for smooth metrics were already known [1]. The purpose of this paper is to present a general approach for constructing suitable reference metrics for arbitrary manifolds. The goal is to develop a method that can be implemented automatically by a code using as input only the multicube structure of the manifold, i.e., from a knowledge of the collection of regions B A and the way these regions are connected together by the interface maps Ψ Aα Bβ . In this paper we develop, implement, and test a method for constructing positive-definite (i.e., Riemannian) C 1 reference metrics for compact, orientable two-dimensional manifolds with arbitrary topologies. While C ∞ reference metrics might theoretically be preferable, C 1 metrics are all that are required to define the continuity of tensor fields and their derivatives. We show in Appendix A that any C 1 reference metric provides the same definitions of continuity of tensor fields and their derivatives across interface boundaries as a C ∞ reference metric. This level of smoothness is all that is needed to provide the appropriate interface boundary conditions for the solutions of the systems of second-order PDEs most commonly used in mathematical physics. For all practical purposes, therefore, C 1 reference metrics are all that are generally required. Our method of constructing a reference metricg i j on Σ is built on a collection of star-shaped domains S I with I = 1, 2, ..., N S that surround the vertex points V I , which make up the corners of the multicube regions. The star-shaped domain S I is composed of copies of all the regions B A that intersect at the vertex point V I . The interface boundaries of the regions that include the vertex V I are to be connected together within S I using the same interface boundary maps Ψ Aα Bβ that define the multicube structure. Figure 1 illustrates a two-dimensional example of a star-shaped domain S I whose center V I is a vertex point where five regions intersect.
Figure 1: Two-dimensional star-shaped domain S I whose center V I is a vertex point where five regions B A intersect.
A region B A would be represented multiple times in a particular S I if more than one of its vertices is identified by the interface boundary maps with the vertex point V I at the center of S I . For example, consider a one-region representation of T 2 . The single S I in this case consists of four copies of the single region B A , glued together so that each of the vertices of the original region coincides with the center of S I . The interior of each star-shaped domain S I has the topology of an open ball in R n , and together they form a set of overlapping domains that cover the manifold: A smooth reference metric is constructed on each star-shaped domain S I by introducing local Cartesian coordinates on it that have smooth transition maps with the global multicube coordinates of each region B A that it contains. Let e I i j denote the flat Euclidean metric within S I ,
i.e., the tensor whose components are the unit matrix when written in terms of the local Cartesian coordinates of S I . These metrics are manifestly free of singularities within each S I , and they can be transformed from the local star-shaped domain coordinates into the global multicube coordinates in each B A using the smooth transition maps that relate them. These smooth metrics on the star-shaped domains S I can be combined to form a global metric on Σ by introducing a partition of unity u I ( x). These functions must be positive, u I ( x) > 0, for points x in the interior of S I ; they must vanish, u I ( x) = 0, for points outside S I ; and they are normalized so that 1 = I u I ( x) at every point x in Σ. Using these functions, the tensor g i j ( x) = I u I ( x) e I i j ( x) is positive definite at each point x in Σ and can therefore be used as a reference metric for Σ. Although each metric e I i j is smooth within its own domain S I , it may not be smooth with respect to the Cartesian coordinates of the other star-shaped domains that intersect S I . For this reason the combined metricḡ i j will generally only be as smooth as the products u I ( x) e I i j . At the present time we only know how to construct functions u I ( x) that make the combined metricḡ i j continuous (but not C 1 ) across all the interface boundaries. The metricḡ i j can be modified in a systematic and fairly straightforward way, however, to produce a new metricg i j whose extrinsic curvatureK i j vanishes along each multicube interface boundary ∂ α B A . Continuity of the extrinsic curvature is the geometrical condition needed to ensure the continuity of the derivatives of the metric across interface boundaries. The modified metricsg i j constructed in this way can therefore be used as C 1 reference metrics. In the two-dimensional case, the modification that convertsḡ i j intog i j can be accomplished using a simple conformal transformation. In higher dimensions, a more complicated transformation is required. The following sections present detailed descriptions of our procedure for constructing reference metricsg i j on two-dimensional multicube manifolds having arbitrary topologies. In Sec. 2.1 an explicit method is described for systematically constructing the overlapping star-shaped domains S I ; formulas are given for transforming between the intrinsic Cartesian coordinates in each S I and the global Cartesian coordinates in B A ; explicit representations are given (in both local and global Cartesian coordinates) for the flat metrics e I i j ( x) in each domain S I ; and examples of useful C 0 partition of unity functions u I ( x) are given. The resulting C 0 metrics are then modified in Sec. 2.2 by constructing an explicit conformal transformation that produces a metric having vanishing extrinsic curvature at each of the interface boundaries ∂ α B A . The resulting metric is C 1 and can therefore be used as a reference metric for these manifolds. We test these procedures for constructing reference metrics on a collection of compact, orientable two-dimensional manifolds in Sec. 2.3. New multicube representations of orientable two-dimensional manifolds having arbitrary topologies are described in detail in Appendix B. These procedures have been implemented in the Spectral Einstein Code (SpEC, developed by the SXS Collaboration, originally at Caltech and Cornell [2][3][4]). Reference metrics are constructed numerically in Sec. 
2.3 for two-dimensional multicube manifolds with genera N g between zero and five; the scalar curvaturesR associated with these reference metrics are illustrated; and numerical results are presented which demonstrate that these two-dimensional reference metrics satisfy the Gauss-Bonnet identity up to truncation level errors (which converge to zero as the numerical resolution is increased). We also show that the continuous (but not C 1 ) reference metrics g i j fail to satisfy the Gauss-Bonnet identity numerically because of the curvature singularities which occur on the interface boundaries in this case. The scalar curvatures associated with the C 1 reference metrics constructed in Sec. 2 turn out to be quite nonuniform. Section 3 explores the possibility of using Ricci flow to smooth out the inhomogenities in these metricsg i j . In particular we develop a slightly modified version of volume-normalized Ricci flow with DeTurck gauge fixing. This version is found to perform better numerically with regard to keeping the volume of the manifold fixed at a prescribed value. We describe our implementation of these new Ricci flow equations in SpEC in Sec. 3.1. We test this implementation by evolving a round-sphere metric with random perturbations on a six-region multicube representation of the two-sphere manifold, S 2 . These tests show that our numerical Ricci flow works as expected: the solutions evolve toward constant-curvature metrics, the volumes of the manifolds are driven toward the prescribed values, and the Gauss-Bonnet identities remain satisfied throughout the evolutions. In Sec. 3.2 we use Ricci flow to evolve the rather nonuniform C 1 reference metricsg i j constructed in Sec. 2, using theseg i j both as initial data and as the fixed reference metrics throughout the evolutions. We show that all these evolutions approach constant curvature metrics, as expected for two-dimensional Ricci flow. The volumes of these manifolds remain fixed throughout the evolutions, and the Gauss-Bonnet identities are satisfied for all the geometries tested (which include genera N g between zero and five). These Ricci-flow-evolved metrics therefore provide smoother and more uniform reference metrics for these manifolds. Two-Dimensional Reference Metrics This section develops a procedure for constructing reference metrics on multicube representations of two-dimensional manifolds. Continuous reference metrics are created in Sec. 2.1 and then transformed in Sec. 2.2 into metrics whose derivatives are also continuous across the multicube interface boundaries. The resulting C 1 reference metrics are tested in Sec. 2.3 (on two-dimensional manifolds with genera N g between zero and five) to ensure that they satisfy the appropriate Gauss-Bonnet identities. Constructing Continuous Reference Metrics The procedure for creating a continuous (C 0 ) reference metricḡ i j presented here has three basic steps. First, a set of star-shaped domains S I for the multicube manifold is constructed from a knowledge of the regions B A and their interface boundary identification maps ∂ α B A = Ψ Aα Bβ (∂ β B B ). The interiors of these S I have the topology of open balls in R n and together they form an open cover of the manifold Σ. The primary task in this first step of the procedure is to organize the multicube structure in a way that allows us to determine which star-shaped domain S I is centered around each vertex ν Aµ of each multicube region B A , and to determine how many regions B A belong to each S I . 
In the second step, intrinsic Cartesian coordinates and metrics are constructed for each S I . These intrinsic coordinates are chosen to have smooth transformations with the global Cartesian coordinates in each multicube region B A . Metrics e I i j for each star-shaped domain are introduced in this step to be the Euclidean metric expressed in terms of the intrinsic Cartesian coordinates in each S I . In the third step, partitions of unity u I ( x) are constructed that are positive for points x inside S I , that vanish for points x outside S I , and that sum to unity at each point in the manifold: 1 = I u I ( x). A global reference metric is then obtained by taking weighted linear combinations of the flat metrics from each of the domains S I : . At present we only know how to choose the partition of unity functions u I ( x) in a way that makesḡ i j continuous across the boundary interfaces. Step One The first step is to compose and sort a list of all the vertices ν Aµ in a given multicube structure. The index µ = {1, ..., 2 n }, where n is the dimension of the manifold, identifies the vertices of a particular multicube region B A . This list of vertices ν Aµ can be sorted into equivalence classes V I whose members are identified with one another by the interface boundary-identification maps, i.e., ν Aµ and ν Bσ belong to the same V I iff there exists a sequence of maps Ψ Aα . One star-shaped domain S I is centered on each equivalence class of vertices V I . The domain S I consists of copies of all the multicube regions B A having vertices that belong to the equivalence class V I . For two-dimensional manifolds, the primary computational task to be completed in this first step is to determine the number K I of vertices ν Aµ that belong to each of the V I classes. The quantity K I represents the number of multicube regions B A clustered around the vertex V I in the star-shaped domain S I . Our code performs this counting process in two dimensions by using the fact that each vertex ν Aµ belongs to two different boundaries of the region B A . The code arbitrarily picks one of these boundaries, say ∂ α B A , and follows the identification map Ψ Bβ Aα to the neighboring region B B . The mapped vertex ν Bσ = Ψ Bβ Aα (ν Aµ ) again belongs to two boundaries of the new region B B : the mapped boundary ∂ β B B and another one, say ∂ γ B B . The code then follows the map Ψ Cδ Bγ across this other boundary to its neighboring region B C and to the new mapped vertex ν Cρ = Ψ Cδ Bγ (ν Bσ ). Continuing in this way, the code makes a sequence of transitions between regions until it arrives back at the original vertex ν Aµ of the starting region B A . The code counts these transitions and returns the number K I when the loop is closed. Figure 1 illustrates a two-dimensional star-shaped domain with K I = 5. Step Two The second step in this procedure is to construct local Cartesian coordinates that cover each of the star-shaped domains S I . We do this by noting that each S I consists of a cluster of cubes B A whose vertices coincide with the central point V I . If these cubes are appropriately distorted into parallelograms (by adjusting the angles between their coordinate axes), they can be fitted together (without overlapping and without leaving gaps between them) to form a domain in R n whose interior has the topology of an open ball. Each S I can therefore be covered by a single coordinate chart, which in two-dimensions can be written in the formx i I = (x I ,ȳ I ). 
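The counting procedure just described, crossing an identification map to the neighbouring region, switching to the other boundary of that region that contains the mapped vertex, and repeating until the walk returns to the starting vertex, translates almost literally into code. The following is an illustrative sketch rather than the SpEC routine; the helpers other_boundary_at_vertex and cross_map are hypothetical accessors assumed to be supplied by the multicube structure.

```python
def count_regions_at_vertex(start_region, start_vertex, start_boundary,
                            other_boundary_at_vertex, cross_map, max_steps=1000):
    """Count K_I, the number of regions clustered around the vertex class V_I.

    other_boundary_at_vertex(region, vertex, boundary) -> the second boundary of
        'region' that also contains 'vertex' (each 2D vertex lies on exactly two).
    cross_map(region, boundary, vertex) -> (neighbor_region, mapped_boundary, mapped_vertex),
        the image under the identification map Psi across 'boundary'.
    Both helpers are assumed to be provided by the multicube structure.
    """
    region, vertex, boundary = start_region, start_vertex, start_boundary
    k = 0
    for _ in range(max_steps):
        # Cross the chosen boundary to the neighboring region.
        region, boundary, vertex = cross_map(region, boundary, vertex)
        k += 1
        # The walk closes when we return to the starting vertex of the starting region.
        if region == start_region and vertex == start_vertex:
            return k
        # Otherwise switch to the other boundary of the new region containing the vertex.
        boundary = other_boundary_at_vertex(region, vertex, boundary)
    raise RuntimeError("vertex walk did not close; inconsistent multicube structure?")
```

The returned count K I is the only output of Step One needed below, since the opening angles (and through them the flat metrics) depend only on K I .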
Figure 2 illustrates both the distorted (on the left) and the undistorted (on the right) representations of a two-dimensional B A . In two dimensions the distortions needed to allow the B A to be fitted around a vertex point V I are quite simple: adjust the opening angles θ IAµ of the coordinate axes of each cube so they sum to 2π around each vertex, Aµ θ IAµ = 2π. The optimal way to satisfy this local flatness condition is to distort all of the two-dimensional cubes that make up S I in the same way, i.e., by setting θ IAµ = 2π/K I . In higher dimensions the problem of fitting the B A together to form a smooth star-shaped domain (without conical singularites and without gaps) is more complicated. The complication in higher dimensions comes from the lack of uniqueness and a clear optimal choice, rather than being a fundamental problem of existence. We plan to study the problem of finding a practical way to perform this construction in higher dimensions in a future paper. The simplest metricē I i j to assign to the star-shaped domain S I is the flat Euclidean metric expressed in terms of the local coordinates of S I : Each B A that intersects S I will inherit this flat geometry via the coordinate transformation that connects them. This fact can be used to deduce the coordinate transformations between the local Cartesian coordinatesx i I = (x I ,ȳ I ) of S I and the global coordinates The left side of Fig. 2 shows a region B A in S I that has been distorted into a parallelogram having an opening angle θ IAµ . The vectors ρ µ and σ µ in this figure represent unit vectors (according to the local flat metric of S I ) that are tangent to the boundary faces of B A at this vertex. The index µ identifies which of the vertices of B A these unit vectors belong to. Since the opening angle at this particular vertex is θ IAµ , the inner product of these vectors is just ρ µ · σ µ = cos θ IAµ . The vectors ρ µ and σ µ are proportional to the coordinate vectors ∂ x and ∂ y of the global Cartesian coordinates used to describe points in the multicube region B A -exactly which coordinate vectors depends on which vertex of B A coincides with this point. The right side of Fig. 2 shows these vectors at each of the vertices of B A , any of which could be the one that coincides with the center of S I . Table 1 gives the relationships between ρ µ and σ µ and the coordinate basis vectors in B A for each vertex ν µ . Also listed in Table 1 are the vectors v µ that give the location of each vertex relative to the center of its region B A . Table 1: The vectors ρ µ and σ µ are proportional to the basis vectors ∂ x and ∂ y at each vertex µ of the region B A . This table gives the global Cartesian coordinate representations of ρ µ and σ µ at each vertex, the vertex-dependent constants ǫ µ , and the locations ν µ of the vertices with respect to the center of B A . The inner products ρ µ · ρ µ , σ µ · σ µ , and ρ µ · σ µ are scalars that are independent of the coordinate representation of the vectors. Since ρ µ and σ µ are unit vectors that are (up to signs) just the coordinate basis vectors in the global Cartesian coordinates, it follows that the components of the metric e I i j in the global coordinates of B A must have the values ρ µ · ρ µ = σ µ · σ µ = e I xx = e I yy = 1 and ρ µ · σ µ = cos θ IAµ = ǫ µ e I xy , where ǫ µ = ±1 is the vertex-dependent constant defined in Table 1. 
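Concretely, once K I is known, the flat metric of S I ∩ B A in the global Cartesian coordinates of B A is fixed by the single angle θ IAµ = 2π/K I together with the sign ǫ µ of the relevant vertex. A small illustrative sketch (the function names are not taken from the paper or from SpEC):

```python
import numpy as np

def opening_angle(k_i):
    """Optimal 2D opening angle: all K_I cubes meeting at the vertex are distorted equally."""
    return 2.0 * np.pi / k_i

def flat_metric_components(k_i, epsilon_mu):
    """Components of the flat metric e^I_ij of S_I in the global coordinates of B_A.

    From rho_mu . rho_mu = sigma_mu . sigma_mu = 1 and
    rho_mu . sigma_mu = cos(theta_IAmu) = epsilon_mu * e^I_xy (epsilon_mu = +/-1),
    the metric has unit diagonal and off-diagonal entry epsilon_mu * cos(theta).
    """
    theta = opening_angle(k_i)
    exy = epsilon_mu * np.cos(theta)
    return np.array([[1.0, exy],
                     [exy, 1.0]])

# Example: five regions meeting at a vertex (K_I = 5, cf. Figure 1), vertex with epsilon_mu = +1.
e_ij = flat_metric_components(5, +1)
assert np.all(np.linalg.eigvalsh(e_ij) > 0.0)   # positive definite since |cos(2*pi/5)| < 1
```

Whenever K I ≥ 3, the off-diagonal entry has magnitude |cos(2π/K I )| < 1, so the resulting metric is positive definite, as a reference metric must be.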
The flat metric e I i j of the region S I ∩ B A therefore has the form when expressed in terms of the global Cartesian coordinates This metric can also be written as This is identical to the standard representation ofē I i j in the local coordinates of S I , Eq. (1), if new coordinatesx IA andỹ IA are defined as The constants c i A represent the global Cartesian coordinates of the center of region B A , and the constants v i µ represent the location of the µ vertex of the region relative to its center. These are included in the transformations in Eqs. (4) and (5) to ensure that the pointx IA =ỹ IA = 0 corresponds to the point x = c A + v µ , which is the ν Aµ vertex of B A that coincides with the center of S I . These new coordinatesx IA andỹ IA are therefore equal to the local Cartesian coordinates of S I ,x I andȳ I , up to a rigid rotation: for some angle ψ IA . The composition of Eqs. (6) and (7) with Eqs. (4) and (5) therefore gives the transformation between the local Cartesian coordinates of S I ,x I andȳ I , and the global Cartesian coordinates, x A and y A , of the multicube representation of the manifold. The metric e IA i j given in Eq. (2) must be constructed for each vertex ν Aµ of each region B A in terms of its global Cartesian coordinates x i A . These expressions depend only on the opening angles θ IAµ , which in turn depend only on the parameter K I . The full coordinate transformations between the global Cartesian coordinates x A and y A and the local coordinatesx I andȳ I given in Eqs. (4)- (7) are not actually needed to evaluate the reference metrics. Step Three The third step in this procedure for constructing a reference metric is to build a partition of unity u I ( x) that is adapted to the star-shaped domains. We do this by introducing a collection of weight functions w I ( x) that are positive within a particular S I and that fall to zero at its boundary. We experimented with a number of different weight functions and found that writing them as simple separable functions of the global Cartesian coordinates of each region B A worked far better than anything else we tried. Thus we let where L is the coordinate size of each region B A . The functions h(w) are chosen to have the value h(0) = 1, which corresponds to the vertex point at the center of the domain S I , and the value h(1) = 0 at the points which correspond to the outer boundary of S I . We find that the simple class of functions with integers k > 0 and ℓ > 0, works quite well. Some of these functions are illustrated in Fig. 3, with integer values in the range that worked best in our numerical tests. Figure 4 illustrates these weight functions expressed in terms of the local Cartesian coordinates of one of the star-shaped domains S I . This figure shows clearly that this choice of u I ( x) is continuous but not C 1 across the interface boundaries. We could also make these functions C 1 with respect to the local coordinates in one of the S I , however it is not possible to make them C 1 with respect to all of the overlapping local star-shaped coordinates at the same time. A partition of unity u I ( x) is constructed from the weight functions w I ( x) by normalizing them: where H( x) is defined by This definition ensures that the u I ( x) satisfy the normalization condition I u I ( x) = 1 for every point x in the manifold. A global reference metric is constructed by combining the metrics e I i j associated with each of the star-shaped domains S I and defined in Eq. 
(2), using the partition of unity defined in Eq. (10): This metric is positive definite, and it is continuous across all of the multicube interface boundaries. It can therefore be used as a continuous reference metric. In an effort to reduce the spatial variation of the metric defined in Eq. (12) and thus reduce the required numerical resolution, we add additional terms of the form u A ( x) e A i j , where e A i j are flat metrics with support in a single multicube region B A . Thus we let be the flat Euclidean metric expressed in terms of the global Cartesian coordinates x A and y A . We define new weight functions w A ( x) associated with the individual multicube regions to be which have the value w A ( c A ) = 1 at the center of the region B A and the value w A ( x) = 0 for points x on its boundary. These weight functions can be combined with those assocated with the star-shaped domains, Eq. (8), to form a new partition of unity. We modify the normalization function H( x) to be Then we redefine the functions u I ( x) using Eq. (10) with this new H( x), and we define functions u A ( x) using Eqs. (14) and (15): A new metric is then formed by combining these region-centered metrics with the star-shaped domain metrics constructed above: The addition of the region-centered metrics does not appear to have a significant impact on the required numerical resolution. Nevertheless, this is the two-dimensional reference metric that we use (after conformally transforming as described in the following section) in the numerical work described in the later sections of this paper. Constructing C 1 Reference Metrics The continuous metricḡ i j has been constructed in a way that ensures the geometry has no conical singularities at the vertices of the multicube regions. However,ḡ i j is not in general C 1 across the interface boundaries; e.g., the partition of unity that we use is not C 1 there. The geometry defined byḡ i j will therefore have curvature singularities along those interface boundaries. In order to remove these singularities, our next goal is to modifyḡ i j by making it C 1 , while at the same time keeping it continuous, positive definite, and free of conical singularities. It should be possible, for example, to find a tensor ψ i j that vanishes at the interface boundaries, and whose normal derivatives are the negatives of those ofḡ i j . In this case the new tensorg i j =ḡ i j + ψ i j and its first derivatives should be continuous at the boundaries. There is in fact a great deal of freedom available in choosing ψ i j . In particular, it can be changed arbitrarily in the interior of a region so long as its boundary values and derivatives remain unchanged. The idea is to use this freedom to keep ψ i j small enough everywhere thatg i j remains positive definite. We plan to find a practical way to do this for manifolds of arbitrary dimension in a future work. In this paper we focus on the two-dimensional case, where a simple conformal transformation is all that is needed to make the continuous metricḡ i j C 1 . We introduce the conformal factor ψ A for the metric in multicube region B A :g The conformal factor ψ A is chosen to make the resulting metricg A ab and its derivatives continuous across interface boundaries. The extrinsic curvatureK Aα i j of the ∂ α B A boundary of cubic region B A is defined bȳ wheren i Aα is the unit normal to the boundary and∇ k is the covariant derivative associated with the metricḡ A i j . 
In two dimensions this can be rewritten as whereK Aα =∇ kn k Aα is the trace. Since the normal vectorn i Aα depends only on the metricḡ i j , its divergence can be written explicitly in terms of derivatives of the metric: Under the conformal transformation given in Eq. (18), the trace of the extrinsic curvature K Aα transforms as follows:K The idea is to choose the conformal factor ψ A so that it has the value ψ A = 1 on each interface boundary ∂ α B A , with a normal derivative on each boundary given bȳ These boundary conditions ensure that the metricg i j continues to be continuous everywhere and free of cone singularities at the vertices of each cubic-block region, while also ensuring that the extrinsic curvature at each interface boundary is zero. There is no unique conformal factor satisfying the boundary conditions ψ A = 1 and the normal-derivative condition given in Eq. (23). However, the following expression for ψ A does satisfy these conditions: The required properties of the function f (w) are that it has the values f (0) = f (1) = 0 and the derivatives f ′ (0) = 1 and f ′ (1) = 0. The simple choice f (w) = w h(w) satisfies these conditions, with h(w) given in Eq. (9). The expression for the conformal factor in Eq. (24) has the property that log ψ A = 0 everywhere on the boundary of the cubic-block region, while its derivatives on the boundary satisfy Eq. (23). The values of the extrinsic curvaturesK Aα and the normal vectors n i Aα used in Eq. (24) are those associated with the continuous metricḡ i j given in Eq. (17). Continuity of the extrinsic curvature across interface boundaries is the necessary and sufficient condition for the metric to be C 1 and singularity-free at those interfaces (cf. the Israel junction conditions [5]). The metricsg i j defined in Eq. (18), with conformal factor ψ A given by Eq. (24), will be C 1 even across the multicube interface boundaries, since their extrinsic curvatures vanish and are continuous there. The reference metricsg i j can thus be used to define a C 1 differential structure, which defines the continuity of tensor fields and their derivatives. Appendix A shows that this differential structure is unique in the sense that it is the same as would be produced by any other C 1 reference metric expressed in the same global multicube coordinates. Testing the Reference Metrics We have implemented the method outlined in Secs. 2.1 and 2.2 for constructing a C 1 reference metricg i j in SpEC. This section describes some tests we have performed to verify that our code correctly constructs reference metrics according to these procedures. We begin by constructing multicube representations of compact, orientable two-dimensional manifolds having genera N g between zero and five. Appendix B gives detailed descriptions of these multicube representations and also shows explicitly how they can be generalized to compact, orientable two-dimensional manifolds of any genus N g . These multicube representations consist of lists of the regions B A and their specific locations in R n , together with a complete list of the specific interface boundary identification maps Ψ Aα Bβ that define how the regions are to be connected together. Any C 1 metric g i j , including the reference metricg i j from Eq. 
(18), must satisfy the Gauss-Bonnet identity, which relates the scalar curvature R to the topology of any compact, orientable two-dimensional Riemannian manifold: where ||R|| is the spatially averaged scalar curvature, V is the volume, and where N g is the genus of the manifold. The Gauss-Bonnet identity therefore provides a powerful test: The multicube manifold must have the correct genus or the identity will fail. And the metric must be C 1 across all the interface boundaries, or curvature singularities along those boundaries will cause the numerical integrals used in the the identity to fail. We use the quantity E GB , defined by to monitor how well the Gauss-Bonnet identity is satisfied numerically in our tests. Figure 5 shows the values of E GB computed for each of the multicube manifolds described in Appendix B using the C 1 reference metricg i j defined in Eq. (18). Each curve in Fig. 5 represents E GB for a particular multicube manifold as a function of the numerical resolution N (the number of grid points along each dimension of each multicube region B A ). The manifolds are identified in We have also tested the Gauss-Bonnet identity on this same collection of multicube manifolds using the scalar curvatures computed from the continuous reference metricsḡ i j of Eq. (17) instead of the C 1 metricsg i j of Eq. (18). Using these C 0 reference metrics, we find that E GB is of order unity (with values between about 0.5 and 2) for all of the tests illustrated in Fig. 5. The Gauss-Bonnet identity fails in this case because the curvatures associated with the C 0 reference metrics have singularities along the multicube interface boundaries. This failure, which was expected in this case, reinforces the conclusion that we have successfully implemented the procedure outlined in Secs. 2.1 and 2.2 for constructing C 1 reference metrics on two-dimensional manifolds with arbitrary topologies. Smoothing the Reference Metrics Using Ricci Flow The C 1 reference metricsg i j introduced in Secs. 2.1 and 2.2 satisfy the minimal requirements needed to establish low-order differential structures on two-dimensional manifolds. These structures allow us to define the continuity of tensors and their derivatives, which is all that is required for solving the systems of second-order equations of most interest in mathematical physics. Unfortunately these metrics exhibit a great deal of spatial structure and consequently require fairly high numerical resolution to be represented accurately. Figure 6 illustrates the scalar curvaturẽ R associated with these reference metricsg i j for the case of a six-region, N R = 6, representation of the genus N g = 0 multicube manifold (the two-sphere), and also for the case of a forty-region, N R = 40, representation of the genus N g = 5 multicube manifold (the five-handled sphere). While these scalar curvatures appear to be continuous (even across the region interface boundaries) they have very large spatial variations. The goal of this section is to develop a method of transforming these metrics into more uniform (and smoother) reference metrics. The uniformization theorem implies that every orientable two-dimensional manifold Σ admits a metric having constant scalar curvature [6]. One approach to making the reference metricsg i j Figure 6: Illustration of the scalar curvatureR of two multicube manifolds with C 1 reference metricsg i j constructed via the procedure described in Sec. 2. 
Both cases use a numerical resolution of N = 40 grid points along each dimension of each multicube region. Top: The genus N g = 0, six-region case. The left side shows the manifold mapped (nonisometrically) onto a 2-sphere, with radial warping proportional to the scalar curvatureR. The right side shows the same manifold in the multicube Cartesian coordinates, with warping in the z-direction proportional toR. Bottom: The genus N g = 5, forty-region multicube manifold in the multicube Cartesian coordinates, with warping in the z-direction proportional to the scalar curvatureR. more uniform, therefore, would be to find a way to transform them into metrics having constant scalar curvatures. Fortunately there is a well-studied technique for doing exactly that. Volumenormalized Ricci flow is a parabolic evolution equation for the metric whose solutions in two dimensions all evolve toward metrics having spatially constant scalar curvatures [6][7][8][9]. The evolution equation we use for the volume-normalized Ricci flow of a two-dimensional metric g i j is given by The quantities ||R|| and V(t) in Eq. (29) are the volume-averaged scalar curvature and the volume of the manifold defined in Eqs. (26) and (27), respectively. The terms containing these quantities are added to control the volume of the manifold. The term proportional to µ in Eq. (29) is new to the best of our knowledge. We have found that it makes our numerical solutions of Eq. (29) track the target volume V 0 more accurately. The DeTurck gauge-fixing covector H i is defined by where Γ j kℓ is the connection associated with the metric g i j , andΓ j kℓ is any other fixed connection on the manifold [10]. The DeTurck terms (those containing H i ) are added to make Eq. (29) strongly parabolic, and thus to have a manifestly well-posed initial value problem [11]. Contracting Eq. (29) with the inverse metric g i j gives Integrating this equation over any compact manifold provides the evolution equation for the volume V(t) of the manifold: Without the term proportional to µ, the volume of the manifold would be fixed, ∂ t V(t) = 0, at the analytical level. In numerical simulations, however, discretization and roundoff error give rise to slow, approximately linear drifts in the volume. With the damping term we have added, the volume of the manifold is driven toward the target value V 0 at a rate determined by the constant µ. In our numerical tests, we find that a value of µ = 10 works well. Numerical Ricci Flow We have implemented the volume-normalized Ricci flow equation with DeTurck gauge fixing, Eq. (29), in SpEC. This code evolves PDEs using pseudo-spectral methods to evaluate spatial derivatives, and it performs explicit time integration at each collocation point using standard ordinary differential equation solvers (e.g., Runge-Kutta). Boundary conditions are imposed at multicube interface boundaries to enforce continuity of the metric g i j and its normal derivativẽ n k∇ k g i j . The vectorñ k is the unit normal to the boundary and∇ k is the covariant derivative associated with the reference metricg i j . Boundary conditions are imposed in SpEC using penalty methods. The desired boundary conditions are added to the evolution equations at the boundary collocation points. The evolution equations on the ∂ α B A boundary, which is identified with the ∂ β B B boundary, for example, have the form where F i j represents the right side of Eq. (29), and α and β are positive constant penalty factors. 
The quantities g B i j A and ∇ k g B i j A represent the transformations of g B i j and∇ k g B i j into the tensor basis of region B A using the interface boundary Jacobians: If the penalty factors α and β are chosen properly, these additional terms drive the evolution at the boundary in a way that reduces any small boundary condition error [12]. There is a range of constants α and β that work well-too small can lead to instability, while too large may make the system overly stiff. Empirically, we have found that the following values work well in most cases: 1 In some cases the penalty factors (particularly α) can be decreased below the values given in Eq. (36) without sacrificing stability. Using smaller values allows a less restrictive condition on the size of the maximum time step and therefore allows more efficient numerical evolutions. In rare cases, we have found it necessary to increase β above the value given in Eq. (36). For example, in the low-resolution N = 16, ten-region, N R = 10, genus N g = 0 case, a value of β at least twice that given in Eq. (36) was needed for stability. Hesthaven and Gottlieb [12] have derived rigorous lower bounds on the penalty factors needed for stable evolution of a simple, second-order parabolic equation in one dimension. They show that when Robin-type boundary conditions are used (like those we use here), penalty factors that scale like α ∼ O(N 2 ) and β ∼ O(N 2 ) are required. Our results agree with theirs for β, but we have found it necessary to use much larger values of α that scale as α ∼ O(N 4 ) in most cases. We test the stability and robustness of our implementation of these Ricci flow evolution equations on a six-region, N R = 6, multicube representation of the two-sphere manifold, S 2 , which is described in detail in Appendix B.1. As initial data for these tests we use the standard round-sphere metric with pseudo-random white noise of amplitude 0.1 added to each component of the metric g i j at each collocation point. The reference metricg i j used in these tests is the usual smooth, unperturbed round-sphere metric, which is given explicitly in global Cartesian multicube coordinates in Ref. [1]. We use several measures to determine whether our implementation of numerical Ricci flow is working properly and whether it actually drives the metric toward a constant-curvature state, as it is expected to do in two dimensions. First, we measure how well the numerical Ricci flow evolves toward geometries having uniform scalar curvatures. One possible dimensionless measure of this scalar-curvature uniformity is the quantityẼ R , defined bỹ For the two-dimensional manifolds studied here, the volume-averaged scalar curvature ||R|| is given by the Gauss-Bonnet identity: ||R|| = 8π(1 − N g )/V. The scalar-curvature uniformity measure can therefore be rewritten in the form This measure is singular for N g = 1, so we define an alternative measure E R as follows: This alternative measure is well defined for all compact, orientable two-dimensional manifolds. It differs fromẼ R by the factor |1 − N g |/(1 + N g ), which is of order unity, except for the singular case N g = 1. We use the measure E R to monitor the uniformity of the scalar curvature in all of our Ricci flow evolutions. Second, we monitor the volume of the manifold to determine whether the volume-normalized flow is working properly. We do this using the dimensionless quantity E V , defined by to measure the fractional change in the volume relative to the target volume V 0 . 
Third, we use the quantity E H to measure the evolution of the DeTurck gauge-source covector: And finally, we assess how well the geometries produced by this Ricci flow satisfy the Gauss-Bonnet identity, using the quantity E GB defined in Eq. (28). Figure 7 shows the results of our Ricci flow evolutions using initial data constructed from the round-sphere metric with random noise perturbations. This figure plots the time evolutions of the four error measures E R , E V , E H , and E GB , defined in Eqs. (39), (40), (41), and (28), respectively, for evolutions performed with several different numerical resolutions N. As evidenced in these figures, the Ricci flow evolutions are stable and convergent as the numerical resolution N is increased. Nonuniformities in the random initial scalar curvature, as measured by E R and shown in the upper left part of Fig. 7, decay exponentially in time as the geometry evolves toward the constant-curvature round-sphere metric until the differences are dominated by truncation level errors at each resolution. The upper right part of Fig. 7 shows that the volume-controlling terms in Eq. (29) are effective at driving the volume of the manifold to the value V 0 , as measured by E V . The target volume V 0 in these tests was taken to be the volume measured by the smooth round-sphere reference metric, rather than the volume of the initial random metric. The lower left part of Fig. 7 shows that the gauge source one-form H i , measured by E H , is effectively driven to zero by the DeTurck term, and the lower right part of Fig. 7 shows that the Gauss-Bonnet error E GB decays very quickly to truncation level at each resolution. Random noise was added to the initial data in these tests at each grid point, so the precise structure of the initial data is different at each resolution. Therefore, numerical convergence with increasing resolution N at the initial and very early times was not expected (or observed). Smoother Reference Metrics We have used volume-normalized Ricci flow to construct smoother and more uniform reference metrics for several multicube manifolds in two dimensions. In particular we have performed Ricci-flow smoothing of the reference metrics for multicube representations of compact, orientable two-dimensional manifolds with genera between N g = 0 (the two-sphere) and N g = 5 (the five-handled two-sphere). In each case, initial data for the evolution are prepared by constructing the metricg i j according to the procedure described in Sec. 2. Theseg i j use the polynomial generating functions h(w) of Eq. (9), with k = 1 and ℓ = 4, both for the partition of unity and for the functions f (w) = w h(w) that appear in the conformal factor in Eq. (24). Although this choice of powers appears to give the best results, we have found that other choices often work nearly as well. We use the metricg i j not only as initial data for these Ricci flow evolutions, but also as the fixed reference metric, which defines the continuity of all tensor fields and their derivatives throughout the evolutions, including the Ricci-flow-evolved g i j (t). We have performed Ricci flow evolutions on all the multicube manifolds described in Appendix B, and the results look very similar to one another. For this reason we describe only one of these cases in detail, and then we summarize and compare the results of our highestresolution evolutions from all of the cases. 
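The diagnostics quoted in these tests are straightforward to evaluate once the scalar curvature and the quadrature weights (area elements) at the collocation points are available. The sketch below uses plausible but assumed normalizations; in particular, neither the paper's Eq. (28) for E GB nor its definition of E V is reproduced verbatim, and all function names are illustrative.

```python
import numpy as np

def gauss_bonnet_check(scalar_curvature, area_elements, genus):
    """Compare the numerical integral of R over the surface with 8*pi*(1 - N_g).

    scalar_curvature : array of R at the quadrature/collocation points
    area_elements    : array of quadrature weights times sqrt(det g) at those points
    genus            : genus N_g of the compact, orientable two-dimensional manifold
    """
    integral = np.sum(scalar_curvature * area_elements)   # approximates \int R dV = ||R|| V
    target = 8.0 * np.pi * (1.0 - genus)
    # One plausible dimensionless normalization (an assumption, not the paper's Eq. (28));
    # the (1 + genus) factor keeps the measure finite for the torus case N_g = 1.
    error = abs(integral - target) / (8.0 * np.pi * (1.0 + genus))
    return integral, target, error

def volume_error(area_elements, target_volume):
    """Fractional deviation of the volume from the target V_0 (one natural choice for E_V)."""
    return abs(np.sum(area_elements) / target_volume - 1.0)
```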
We show detailed results for our most complex case: a forty-region, N R = 40, representation of a genus N g = 5 multicube manifold (the five-handled two-sphere). The scalar curvature for the reference metricg i j in this case is illustrated in the bottom part of Fig. 6. The details of the multicube structure for this case (and all our other cases) are given in Appendix B. Figure 8 shows the results of these genus N g = 5 evolutions for several different numerical resolutions N.
Figure 8: Time evolutions of the error measures E R , E V , E H , and E GB , defined in Eqs. (39), (40), (41), and (28), respectively, for the genus N g = 5 evolutions. The reference metric, which is identical to the initial metric in this case, is constructed according to the procedure described in Sec. 2. The numerical resolution in each spatial dimension of each multicube region is denoted by N.
The graphs in Fig. 8 indicate that the evolutions are stable and convergent, demonstrating our ability to evolve PDEs on arbitrary, complicated two-dimensional manifolds using the C 1 reference metrics developed in Sec. 2. These evolutions differ from the random-metric evolutions shown in Fig. 7 in several ways. First, these initial data are much smoother than the random metrics (which are unresolved by construction). Consequently, the Gauss-Bonnet error E GB is much smaller at early times. Second, the initial metric in these tests is identical to the reference metric, and accordingly the error measures E V and E H are much smaller (about truncation level) at early times. These error measures remain close to these initial truncation-error levels throughout the evolutions. We also note that the more complicated spatial structures of the reference metrics in these simulations require somewhat higher numerical resolutions in order to obtain the same level of truncation errors as the random-metric S 2 tests described in Sec. 3.1. Figure 9 compares the highest-resolution Ricci flow evolutions from each of the multicube manifolds described in Appendix B (up to and including the forty-region representation of a genus 5 manifold).
Figure 9: Highest-resolution Ricci flow evolutions for each of the multicube manifolds described in Appendix B. In each case, the reference metric is identical to the initial metric and is constructed according to the procedure described in Sec. 2.
All of these cases are found to be stable and convergent, with qualitatively similar results to the genus N g = 5 evolutions shown in Fig. 8. The only significant difference between the cases is the rate at which nonuniformities in the scalar curvatures decay. The reference metrics that we construct on these different multicube manifolds have nonuniformities on different length scales, and these nonuniformities correspondingly decay at different rates under the Ricci flow. There are also differences in the levels of the truncation errors for these cases at the same numerical resolution. The ten-region, N R = 10, representation of the genus N g = 1 multicube manifold (the two-torus), for example, has the highest level of truncation error among the examples we have studied. Discussion This paper presents a method for constructing reference metrics on multicube representations of manifolds having arbitrary topologies. The method was implemented and successfully tested, as described in Sec. 2, for a variety of compact, orientable two-dimensional Riemannian manifolds with genera between 0 and 5. The reference metrics constructed in this way are not smooth, but they have continuous derivatives, which is sufficient to define the C 1 differential structures needed for solving the systems of second-order PDEs of most interest in mathematical physics. We have demonstrated in Sec.
3, for example, that these C 1 reference metrics can be used successfully to solve systems of second-order parabolic evolution equations. The reference metrics constructed using the methods in Sec. 2 have large spatial variations, which are not easy to resolve numerically. We demonstrate in Sec. 3 that these metrics can be made more uniform by evolving them with Ricci flow. The two-dimensional reference metrics studied in our tests all evolve under Ricci flow to metrics having constant scalar curvatures. Ricci flow also has smoothing properties similar to the heat equation: solutions to the Ricci flow equation on compact manifolds become smooth, in fact real-analytic, for t > 0 provided the initial curvature is bounded (which is the case for our C 1 reference metrics) [13,14]. Our numerical evolutions show smoothing of the metrics that is consistent with this fact. The presence of the DeTurck gauge-fixing terms, however, somewhat obfuscates this question of smoothness. Our evolutions show that the DeTurck gauge-fixing covector H i is zero, up to truncation level errors, throughout the evolutions. The connection Γ k i j of the metric g i j at the end of our Ricci flow evolutions could (in principle) therefore retain some of the non-smooth features of the reference connectionΓ k i j , since H i = 0 = g i j g kℓ (Γ j kℓ −Γ j kℓ ). However, the vanishing of H i shows that the evolved metric satisfies the original Ricci flow equation without the DeTurck terms, and thus must be smooth by the aforementioned theorems [13,14]. Hence any non-smoothness of the connection must just reflect the (non-smooth) coordinate transitions at the interface boundaries. We made some effort to avoid even the potential effects of the non-smoothness of the connection associated with the DeTurck terms by modifying the basic Ricci flow Eq. (29) in various ways. For example, we attempted to carry out numerical Ricci flow evolutions without including the DeTurck terms at all, i.e., simply by setting H i = 0 in Eq. (29). All of these evolutions were unstable. The DeTurck terms were added to the Ricci flow equation to make it strongly parabolic and thereby manifestly well-posed [6]. Without the DeTurck terms, the basic Ricci flow equations may simply be ill-suited for numerical solution. We also tried modifying the De-Turck terms in a way that would attempt to drive the solution to harmonic gauge, i.e., to a gauge in which 0 = g i j Γ k i j . We did this by changing the definition of H i to give the reference connection an explicit time dependence, as in H i = g i j g kℓ (Γ j kℓ − e −µtΓ j kℓ ), for example. Unfortunately all of these runs failed as well. While these runs appeared to be stable, the Ricci flows in these cases did not evolve toward metrics having constant scalar curvatures, and the DeTurck gauge-source covector H i did not remain small during the evolutions. We plan to continue to search for effective and efficient ways to construct reference metrics on manifolds with arbitrary spatial topologies. In two dimensions the remaining questions are related to finding better gauge conditions for the reference metrics. In three and higher dimensions the challenge will be to find efficient ways to implement the general techniques developed here. Acknowledgments We thank Jörg Enders, Gerhard Huisken, James Isenberg, and Klaus Kröncke for helpful discussions about Ricci flow. 
LL and NT thank the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Golm, Germany for their hospitality during a visit when a portion of this research was completed. LL and NT were supported in part by a grant from the Sherman Fairchild Foundation and by grants DMS-1065438 and PHY-1404569 from the National Science Foundation. OR was supported by a Heisenberg Fellowship and grant RI 2246/2 from the German Research Foundation (DFG). We also thank the Center for Computational Mathematics at the University of California at San Diego for providing access to their computer cluster (acquired through NSF DMS/MRI Award 0821816) on which all the numerical tests reported in this paper were performed. Appendix A. Uniqueness of the C 1 Multicube Differential Structure The traditional definition of a C k differential structure on a manifold consists of an atlas of coordinate charts having the property that the transition maps between overlapping charts are C k+1 functions. Tensor fields are defined to be C k with respect to this differential structure if their components, when represented in terms of this atlas, are C k functions. In a multicube representation of a manifold, we define the continuity of tensor fields and their derivatives instead using the Jacobians and the connection determined by a reference metric. This enables us to define these concepts without needing an overlapping C k+1 atlas. The two definitions of differential structure are equivalent on any manifold having both a multicube structure and a C k+1 atlas. In this appendix we consider the technical question of the uniqueness of the multicube method of specifying the differential structure. The purpose of this appendix is to show that the C 1 differential structure of a multicube manifold defined by a particular C 1 reference metric is independent of the choice of reference metric. In particular, we show that the definitions of continuity of tensor fields and their covariant derivatives based on a C 1 reference metric g̃_ab are the same as those based on any other C 1 metric ǧ_ab, i.e., any metric ǧ_ab that is continuous and whose covariant gradient ∇̃_a ǧ_bc is continuous with respect to the differential structure defined by g̃_ab. Since any C k metric with k ≥ 1 is also C 1, this argument implies that the C 1 differential structure defined by the C 1 metric g̃_ab is also equivalent to the C 1 differential structure defined by any C k metric ǧ_ab. We have shown [1] how the differential structure for a multicube representation of a manifold may be specified by giving a C 1 metric g̃_ab represented in the global Cartesian multicube coordinate basis. This method of defining the differential structure constructs Jacobians J̃^{Aαa}_{Bβb} and their duals J̃*^{Bβb}_{Aαa} that transform tensors from the ∂_β B_B face of cubic region B_B to the ∂_α B_A face of cubic region B_A. These Jacobians are determined by the metric g̃_ab and the rotation matrices C^{Aαa}_{Bβb} that define the identification maps (cf. Appendix B) between neighboring regions. The expressions for these Jacobians are given by Lindblom and Szilágyi [1] as Eqs. (A.1) and (A.2). The vectors ñ^a_{Aα} and ñ^a_{Bβ} that appear in these expressions represent the outward directed unit normal vectors to the ∂_α B_A face of region B_A and the ∂_β B_B face of cubic region B_B, respectively. These normals are unit vectors with respect to the g̃_ab metric, i.e., 1 = g̃_ab ñ^a ñ^b. These Jacobians, Eqs. (A.1) and (A.2), determine the way continuous tensor fields transform across interface boundaries.
The reference metric also determines a covariant derivative ∇̃_a that, together with the Jacobians, defines how C 1 tensor fields transform across interface boundaries. These definitions of continuity for tensor fields and their derivatives determine the C 1 differential structure of the manifold. The question of the uniqueness of the C 1 differential structure reduces therefore to the questions of the uniqueness of the Jacobians J̃^{Aαa}_{Bβb}, and of the uniqueness of the continuity of the derivatives determined by the covariant derivative ∇̃_a. The normal covectors ñ_{Aαa} that appear in Eqs. (A.1) and (A.2) are proportional to the gradients of the x^{|α|}_A = constant coordinate surfaces that define the particular boundary face of the region (i.e., in this case the α face of region A): ñ_{Aαa} = Ñ_{Aα} ∂_a x^{|α|}_A (A.3). The index α can have either sign, e.g., to represent the +x or the −x coordinate boundary face. The notation x^{|α|}_A indicates the coordinate associated with either case, i.e., both the +x and the −x faces are surfaces of constant x^x_A. The proportionality constant Ñ_{Aα} in Eq. (A.3) is determined by the requirement that ñ_{Aαa} is a unit covector with respect to the reference metric g̃_{Aab}: 1 = g̃^{ab}_A ñ_{Aαa} ñ_{Aαb} (A.4). The sign of Ñ_{Aα} is chosen to ensure that ñ_{Aαa} is the outward directed normal. The normal vector is defined as the dual to this normal covector: ñ^a_{Aα} = g̃^{ab}_A ñ_{Aαb}. The Jacobians defined in Eqs. (A.1) and (A.2) map the unit normal on the ∂_β B_B face onto the unit normal on the ∂_α B_A face (up to sign). They also transform vectors t^a_{Bβ} that are tangent to the interface, ñ_{Aαa} t^a_{Aα} = 0, by the rotations C^{Aαa}_{Bβb} used to define the interface boundary maps (cf. Appendix B). These Jacobians and dual Jacobians are inverses of each other as well (cf. Ref. [1]). Now consider a second positive-definite metric ǧ_ab that is C 1 with respect to the differential structure defined by the metric g̃_ab. This second metric can be used to define alternate normal covectors ň_{Aαa} = Ň_{Aα} ∂_a x^{|α|}_A and vectors ň^a_{Aα} = ǧ^{ab}_A ň_{Aαb}, with Ň^{-2}_{Aα} = ǧ^{ab}_A ∂_a x^{|α|}_A ∂_b x^{|α|}_A (A.9). This norm can be rewritten as Ň^{-2}_{Aα} = Ñ^{-2}_{Aα} ǧ^{ab}_A ñ_{Aαa} ñ_{Aαb} (A.10). Equation (A.9) therefore implies the continuity of the ratio Ñ_{Aα}/Ň_{Aα} across interface boundaries. The alternate normal ň_{Aαa}, which can be written as ň_{Aαa} = (Ň_{Aα}/Ñ_{Aα}) ñ_{Aαa}, is therefore continuous (up to a sign flip) across interface boundaries. This also implies that the alternate normal vector ň^a_{Aα} = ǧ^{ab}_A ň_{Aαb} is continuous. These alternate normals must therefore satisfy the same continuity conditions (up to the sign flips) across interface boundaries as any continuous tensor field, with some overall proportionality factor Q. Contracting this continuity condition with ñ_{Aαa} and using Eq. (A.10), it follows that Q = Ñ_{Aα}/Ň_{Aα}. Note that the tangent vectors t^a_{Aα(k)}, which are orthogonal to ñ_{Aαa} by definition, are also orthogonal to ň_{Aαa}. Therefore, the alternate normal ň^a_{Aα} together with a linearly independent collection of tangent vectors can also be used as a basis of vectors on the boundary. Next define alternate Jacobians J̌^{Aαa}_{Bβb} and J̌*^{Bβb}_{Aαa} using the alternate metric ǧ_ab, in exact analogy with Eqs. (A.1) and (A.2). These alternate Jacobians transform the alternate normal ň^a_{Aα} and any tangent vector t^a_{Aα(k)} in the same way that the original Jacobians transform ñ^a_{Aα} and the tangent vectors. The alternate Jacobian and its dual are also inverses of each other. Because the ratio Ñ_{Aα}/Ň_{Aα} is continuous across each boundary, the alternate Jacobians act on ñ^a_{Aα} and on the tangent vectors exactly as the original Jacobians do; since these vectors form a basis, the alternate Jacobians must be identical to the original Jacobians. Since the alternate dual Jacobians J̌*^{Bβb}_{Aαa} are the inverses of the alternate Jacobians, they must also be identical to the original dual Jacobians (which are the inverses of the original Jacobians). We have shown therefore that the Jacobians used to define the continuity of tensor fields across boundary interfaces do not depend on which metric is used to construct them. This argument depends only on the continuity of those metrics (not their derivatives).
Now consider the uniqueness of the multicube definition of the continuity of the derivatives of tensor fields. Let ∇̃_a and ∇̌_a denote the covariant derivatives defined by the C 1 reference metric g̃_ab and the C 1 reference metric ǧ_ab, respectively. Let v^a and w_a denote vector and covector fields that are continuous across the interface boundaries, as defined by the Jacobians constructed from either of the reference metrics. Assume that ∇̃_a v^b and ∇̃_a w_b are also continuous across interface boundaries. The differences between these tensors and those computed using the alternate covariant derivative ∇̌_a are tensors: ∇̃_a v^b − ∇̌_a v^b = Δ^b_{ac} v^c and ∇̃_a w_b − ∇̌_a w_b = −Δ^c_{ab} w_c. The quantity Δ^b_{ac} = Γ̃^b_{ac} − Γ̌^b_{ac}, being the difference between connections, is also a tensor. It is continuous across interface boundaries as long as the two metrics g̃_ab and ǧ_ab used to construct it are both C 1. Continuity of the derivatives ∇̃_a v^b and ∇̃_a w_b across interface boundaries therefore implies the continuity of the alternative derivatives ∇̌_a v^b and ∇̌_a w_b. The equality of the Jacobians J̃^{Aαa}_{Bβb} and J̌^{Aαa}_{Bβb}, together with the continuity of the covariant derivatives ∇̃_a and ∇̌_a, implies that the C 1 differential structure constructed from the C 1 metric g̃_ab is equivalent to the one constructed from any alternate C 1 metric ǧ_ab. In dimensions two and three there is only one differential structure on a particular manifold [15]. In those cases, this argument shows that the C 1 differential structures determined by any two C 1 metrics are equivalent. In higher dimensional manifolds, however, there can be multiple inequivalent differential structures [15]. In those cases, the argument given here only establishes the independence of the multicube differential structure constructed from reference metrics belonging to the same differential structure. The uniqueness of the Jacobians J̃^{Aαa}_{Bβb} discussed above assumed a particular fixed choice of global Cartesian multicube coordinates. Although these Cartesian multicube coordinates are severely restricted, they are not unique. The two assumptions made about them are the following. First, the faces of each cubic-block region are assumed to be constant-coordinate surfaces. And second, the interface boundary maps identify points in the manifold across boundaries in a particular way (cf. Appendix B). The global Cartesian multicube coordinates on these manifolds can therefore be modified in any way that leaves their interface boundary values and the identification of points on the interface boundaries unchanged. The coordinates can be modified smoothly in the interior of each cubic-block region, for example, while keeping their values fixed on their faces. More generally, the coordinates can be adjusted smoothly even on the boundary faces as long as complementary adjustments are made to the corresponding coordinates in the neighboring region. Let x^a_A denote one particular choice of coordinates on region A, and let x̄^a_A denote another set of smoothly related coordinates that satisfy the restrictions described above. Also assume that the Jacobians ∂x̄^a_A/∂x^b_A are everywhere nonsingular and nondegenerate. Let v^a_A and w_{Aa} denote smooth vector and covector fields in region A. The representations of these fields within this region using the x̄^a_A coordinates are given by the standard tensor transformation expressions, recalled below. Analogous changes of coordinates can be made in each of the cubic-block regions.
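The standard expressions referred to above are the usual tensor transformation laws. Written out with bars denoting components in the x̄^a_A coordinates (the index conventions are assumed here to follow those used in this appendix), they read

\[
\bar v^{\,a}_A \;=\; \frac{\partial \bar x^a_A}{\partial x^b_A}\, v^b_A ,
\qquad
\bar w_{Aa} \;=\; \frac{\partial x^b_A}{\partial \bar x^a_A}\, w_{Ab} .
\]

Composing these coordinate Jacobians with the interface Jacobians gives a sketch of the relation described in the next paragraph (the precise index conventions of Eq. (A.24) may differ):

\[
\bar J^{A\alpha\,a}{}_{B\beta\,b}
\;=\;
\frac{\partial \bar x^a_A}{\partial x^c_A}\;
J^{A\alpha\,c}{}_{B\beta\,d}\;
\frac{\partial x^d_B}{\partial \bar x^b_B} .
\]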
The resulting JacobiansJ Aα a Bβ b needed to transform tensor fields represented in thex a A coordinates are related to those of the original fixed coordinates J Aα a Bβ b by the following transformations: This multicube coordinate freedom does not require ∂x a A /∂x b A to be the identity δ a b on the faces of the multicube regions, and consequently the JacobiansJ . This equation represents the coordinate freedom that exists in the expressions for the interface Jacobians on multicube manifolds within a particular differential structure. Every two-and three-dimensional manifold has a unique global differential structure, and therefore Eq. (A.24) represents all the freedom that exists in the boundary interface Jacobians on those manifolds. Appendix B. Two-Dimensional Multicube Manifolds The purpose of this appendix is to present explicit multicube representations of compact, orientable two-dimensional manifolds with genera between zero and three. A straightforward procedure allows us to extend these examples to arbitrary genus by gluing together copies of the N g = 2 multicube structures. The topologies of all these two-dimensional manifolds are uniquely determined by their genus N g , which can have non-negative integer values. The case N g = 0 is the two-sphere, S 2 , and N g = 1 is the two-torus, T 2 . Larger values of N g can be thought of as two-spheres with N g handles attached. A multicube representation of a manifold consists of a collection of multicube regions B A together with maps Ψ Aα Bβ that determine how the boundaries ∂ α B A of these regions are connected together. We choose multicube regions B A that have uniform coordinate size L and that are all aligned in R n with the global Cartesian coordinate axes. We position these B A in R n in such a way that regions intersect (if at all) only along boundaries that are identified with one another by one of the Ψ Aα Bβ maps. For each multicube manifold, we provide a table of vectors c A that represent the global Cartesian coordinates of the centers of each of the multicube regions B A . These tables serve as lists of the regions B A that are to be included in each particular multicube representation. We also provide tables of all of the interface boundary identifications for each multicube representation. A typical entry in one of these tables is an expression of the form ∂ +x B 2 ↔ ∂ −y B 3 , which would indicate that the +x boundary of multicube B 2 is to be identified with the −y boundary of multicube B 3 . The boundary identification maps used in our multicube manifolds are simple linear transformations of the form The matrix C Aα Bβ which appears in Eq. (B.1) is the combined rotation and reflection matrix needed to reorient the ∂ β B B boundary with ∂ α B A . Our specification of a particular multicube representation includes the matrices C Aα Bβ for each interface boundary identification map. The list of possible matrices is quite small in two-dimensions, consisting of the identity I, various combinations of 90-degree rotations R ± , and reflections M. Explicit representations of these matrices in terms of the global Cartesian coordinate basis are given by In the following sections we give the specific matrices C Aα Bβ and their inverses C Bβ Aα needed for each interface boundary identification ∂ α B A ↔ ∂ β B B of each multicube manifold. The methods and the notation used here are the same as those developed in Ref. [1]. Table B.2, and the corresponding transformation matrices are given in Table B.3. 
This six-region representation of S 2 is equivalent to the standard two-dimensional cubed-sphere representation of S 2 [16][17][18]. (Figure caption: Greek letters indicate identifications between external edges. The right figure shows the same multicube representation using uniformly sized, undistorted squares, including their relative locations in the background Euclidean space.) Table B.3: Transformation matrices C Aα Bβ for the interface identifications ∂ α B A ↔ ∂ β B B in the six-region, N R = 6, representation of the genus N g = 0 manifold, the two-sphere, S 2 . All transformation matrices C Aα Bβ are assumed to be the identity I, except those specified in this table. The region center locations for the ten-region, N R = 10, genus N g = 0 multicube manifold are given in Table B.4, the edge identifications are described in Table B.5, and the corresponding transformation matrices are given in Table B.6. This ten-region representation of S 2 is a simple generalization of the standard two-dimensional cubed-sphere representation of S 2 . It is constructed by splitting the four "equatorial" squares in the standard six-region representation into eight squares with the new interface boundaries running along the equator. Table B.6: Transformation matrices C Aα Bβ for the interface identifications ∂ α B A ↔ ∂ β B B in the ten-region, N R = 10, representation of the genus N g = 0 manifold, the two-sphere, S 2 . All transformation matrices C Aα Bβ are assumed to be the identity I, except those specified in this table. For the ten-region representation of the two-torus, the edge identifications are described in Table B.8, and the corresponding transformation matrices are given in Table B.9. This ten-region representation of T 2 is a simple generalization of the standard one-region representation. The outer edges of the squares in the left illustration in Fig. B.3 are identified with the opposing outer edges using identity maps, just as in the standard one-region representation of T 2 . This ten-region representation merely subdivides the single-region representation into ten regions, as shown in Fig. B.3. Table B.8: Region interface identifications ∂ α B A ↔ ∂ β B B for the ten-region, N R = 10, representation of the genus N g = 1 manifold, the two-torus, T 2 . Table B.9: Transformation matrices C Aα Bβ for the region interface identifications ∂ α B A ↔ ∂ β B B in the ten-region, N R = 10, representation of the genus N g = 1 manifold, the two-torus, T 2 . All transformation matrices C Aα Bβ are assumed to be the identity I, except those specified in this table. For the eight-region representation of the two-torus, the edge identifications are described in Table B.11. All of the interface identification maps have transformation matrices C Aα Bβ that are the identity matrix I, so they are not included in a table for this case. This eight-region, N R = 8, representation of T 2 is constructed by gluing a handle onto the ten-region representation of S 2 described in Appendix B.2. The two inner regions (3 and 8 in Fig. B.2) are removed, and the holes created in this way are identified with one another to form the handle. (Figure caption: Greek letters indicate identifications between external edges. The right illustration shows the same multicube representation using uniformly sized, undistorted squares, including their relative locations in the background Euclidean space. The locations of the regions in the right illustration were chosen to show explicitly as many nearest neighbor identifications as possible.)
The locations of the eight square regions used to construct this representation of the genus N g = 2 manifold, the two-handled sphere, are illustrated in Fig. B.5. The values of the square-center location vectors c A for this configuration are summarized in the corresponding region-center table. The identifications of the edges of the regions are described in Table B.13, and the corresponding transformation matrices are given in Table B.14. This representation of the two-handled sphere is constructed by starting with the ten-region representation of the two-torus described above. (Figure caption: Greek letters indicate identifications between external faces. The right illustration shows the same multicube representation using uniformly sized, undistorted squares, including their relative locations in the background Euclidean space. The locations of the regions in the right illustration were chosen to show explicitly as many nearest neighbor identifications as possible.) Table B.13: Region interface identifications ∂ α B A ↔ ∂ β B B for the eight-region, N R = 8, representation of the genus N g = 2 manifold, the two-handled sphere. (Partial entries: ∂ +y B 5 ↔ ∂ −y B 6 ; ∂ −y B 5 ↔ ∂ +y B 8 ; ∂ +y B 6 ↔ ∂ −y B 7 ; ∂ +y B 7 ↔ ∂ −y B 8 .) Table B.14: Transformation matrices C Aα Bβ for the region interface identifications ∂ α B A ↔ ∂ β B B in the eight-region, N R = 8, representation of the genus N g = 2 manifold, the two-handled sphere. All transformation matrices C Aα Bβ are assumed to be the identity I, except those specified in this table. The locations of the ten square regions used to construct this representation of the genus N g = 2 manifold, the two-handled sphere, are illustrated in Fig. B.6. The values of the square-center location vectors c A for this configuration are summarized in the corresponding region-center table. The identifications of the edges of the regions are described in Table B.16, and the corresponding transformation matrices are given in Table B.17. This representation of the two-handled sphere is constructed by starting with the eight-region representation shown in Fig. B.5 and adding additional squares to separate more distinctly the ends of the second handle on the torus. The outer edges in this ten-region representation of the genus N g = 2 manifold are therefore connected together as shown in Fig. B.6. This representation has the advantage that it reduces the maximum number of squares meeting at a single vertex from eight to six. The reference metric in this case therefore requires less distortion of the flat metric pieces that go into its construction. (Figure caption: Greek letters indicate identifications between external faces. The right illustration shows the same multicube representation using uniformly sized, undistorted squares, including their relative locations in the background Euclidean space. The locations of the regions in the right illustration were chosen to show explicitly as many nearest neighbor identifications as possible.) Table B.16: Region interface identifications ∂ α B A ↔ ∂ β B B for the ten-region, N R = 10, representation of the genus N g = 2 manifold, the two-handled sphere. Appendix B.7. Representations of Genus N g ≥ 3 Multicube Manifolds Using 10(N g −1) Regions Multicube representations of two-dimensional manifolds with genera N g ≥ 3 can be constructed by gluing together copies of the genus N g = 2 multicube manifold depicted in Fig. B.6. This is done by breaking the interface identifications denoted γ and κ in Fig.
B.6 and then attaching in their place additional copies of the same multicube structure, as shown in Fig. B.7 for the genus N g = 3 case. Each copy of the genus N g = 2 multicube structure added in this way increases the genus of the resulting manifold by one. The addition of one copy, as shown in Fig. B.7, produces a multicube manifold of genus N g = 3. The values of the square-center location vectors c A for this genus N g = 3 case are summarized in Table B.18. The inner edges of the touching squares in Fig. B.7 are connected by identity maps. The identifications of all the edges of the twenty square regions are described in Table B.19, and the corresponding transformation matrices are given in Table B.20. Table B.20: Transformation matrices C Aα Bβ for the region interface identifications ∂ α B A ↔ ∂ β B B in the twenty-region, N R = 20, representation of the genus N g = 3 manifold, the three-handled sphere. All transformation matrices C Aα Bβ are assumed to be the identity I, except those specified in this table.
2016-02-11T22:05:27.000Z
2014-11-25T00:00:00.000
{ "year": 2016, "sha1": "ce25230896a265a127df6753cadb63d46de235c1", "oa_license": "elsevier-specific: oa user license", "oa_url": "http://manuscript.elsevier.com/S0021999116000930/pdf/S0021999116000930.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "ad430d1dc3f8243253cfd2f30f1c33e204e540ef", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics", "Computer Science" ] }
36067270
pes2o/s2orc
v3-fos-license
Cataract surgery in Stevens-Johnson syndrome. Letters to the Editor Cataract surgery in Stevens-Johnson syndrome Dear Editor, We read with interest the article Phacoemulsification in total white cataract with Steven Johnson syndrome (SJS) by Vasavada et al. 1 The authors did not use topical steroids due to fear of infection after cataract surgery in an SJS patient. We feel postoperative topical steroid therapy is very important as ocular surface inflammation is an important factor causing corneal complications like melting. 2,3 We have also had some experience in operating on patients with SJS. In all our cases we have used topical steroids postoperatively under antibiotic cover and adequate lubrication and have not encountered infection. We believe that with adequate antibiotic cover the risk of infection may be reduced. Secondly, the authors have preferred a scleral incision for cataract surgery in a case of SJS due to the diseased cornea. Corneal melts have been reported in cases of cataract surgery done on patients with secondary Sjogren's syndrome with collagen vascular diseases. Surgery in SJS should be attempted on a quiet eye. We would like to emphasize that with adequate postoperative care and careful monitoring, the risk of corneal complications like melt is rare, considering the fact that the phacoemulsification incision is small. We feel that it is better to leave the conjunctiva virgin in view of the postoperative inflammation and difficulty in exposure during surgery due to severe foreshortening of the fornices. Thus a corneal incision may be better in these cases, as one can avoid peritomy and cautery, causing the least amount of disruption to the ocular surface. 4 Large retinal pigment epithelium rip following serial intravitreal injection of Avastin in a large fibrovascular pigment epithelial detachment Dear Editor, Anti-vascular endothelial growth factor (VEGF) therapy has tremendously improved the management of wet age-related macular degeneration (AMD). With an increase in the usage of such agents over the last few years, complications have also been noted, even though in few. It has been reported that retinal pigment epithelium (RPE) rips can occur following intravitreal injection of bevacizumab and other anti-VEGF agents. 1,2 We would like to share our experience with intravitreal injection of bevacizumab (1.25 mg/0.05 ml) in a patient of AMD with fibrovascular pigment epithelial detachment (PED). A 52-year-old lady presented a year back with complaints of central scotoma in the right eye of one week duration. Vision at presentation was 20/40; N12 in the right eye and 20/20; N6 in the left eye.
Examination revealed a large PED approximately two and a half disc diameters at the macula in the right eye, with hard exudates superonasal to the PED in the peripapillary area, and RPE defects with hard drusen at the macula in the left eye [Fig. 1A]. Fundus fluorescein angiography (FFA) revealed a large PED corresponding with the clinical picture, with ill-defined stippled late leakage temporal to the disc [Fig. 1B]. Indocyanine green angiography (ICG) revealed stippled fluorescence in the temporal peripapillary area with a network of ill-defined vessels [Fig. 1C]. Optical coherence tomography (OCT) showed the large PED with overlying subretinal fluid. A well-defined V-shaped depression (marked) in the contour of the PED corresponding to the tomographic 'notch' delineated the superior high-domed PED from the adjacent shallow-domed PED. This feature was seen when the OCT scan was taken through the area of stippled hyperfluorescence, and this area has been suggested to be indicative of the presence of an occult membrane 3 [Fig. 1D]. A diagnosis of an occult membrane at the nasal edge of the PED was made. The different modalities of treatment were explained to the patient. Due to economic constraints, the patient chose to undergo intravitreal bevacizumab injection. Two weeks post injection of intravitreal bevacizumab, she recovered to 20/20; N6 in the right eye and, clinically, the PED reduced in size. She was stable for six months, when she had a recurrence of the symptoms. Her vision was 20/60, N12; the PED had recurred at the same location and was comparable to the size on initial presentation. Optical coherence tomography was repeated, which showed findings similar to the first presentation [Fig. 2]. Intravitreal injection of bevacizumab was repeated. She reported three months after the injection with further loss of vision in the right eye to 20/200, N36. The size of the PED had increased significantly [Figs. 3A and B]. No other clinical change was noted. At this stage it was decided to repeat injection of bevacizumab in her eye while monitoring her clinical response. She was stable after two injections; after the third injection she reported a significant improvement in vision. Documented best corrected visual acuity was 20/30; N6 in the right eye, and fundus evaluation revealed a reduced height of the PED with a crescentic area of denuded RPE in the temporal region of the PED, away from the fovea, with the adjacent RPE layer indicative of the rolled but flattened RPE rip at the edge of the PED. 4 The PED was shallowest towards the edge of the rip [Fig. 4D]. The patient's vision did not deteriorate as the rip was well away from the fovea. Pigment epithelial detachments have been known to develop RPE rips, either spontaneously or following laser photocoagulation and photodynamic therapy. It is usually seen to occur at or along the border of the serous RPE detachment on the side opposite to the location of the choroidal neovascular membrane (CNVM). Spontaneous PEDs are explained by the hydrostatic pressure of leaking exudates from the sub-RPE occult membranes, leading to the formation of RPE detachments as well as the acute RPE tears or rips. The RPE tears post laser and photodynamic therapy are explained by contraction of the fibrovascular tissue comprising the membrane. 5,6 In our case, the occult CNVM was located in the peripapillary area, at the nasal edge of the PED, and the RPE rip was seen at the temporal border of the PED. The free edge of the RPE had rolled under and retracted towards the area of neovascular tissue.
Anti-VEGF agents act by reducing angiogenesis and arresting the CNVM, and thus the same pathology of fibrovascular tissue contraction may be at work in RPE rips following anti-VEGF therapy. Thus the risk of an RPE rip should be considered with treatment with anti-VEGF agents in cases with fibrovascular PEDs. Phacoemulsification and pars plana vitrectomy: A combined procedure Dear Editor, We read with interest the article by Jain et al. 1 We commend the authors on the very informative article but we would like to add a few points to make the article more pertinent. What was the postoperative refractive status of the patients in whom the retina was attached? We would be interested in knowing the accuracy of the intraocular lens (IOL) power calculation. It is common knowledge that IOL power calculation is highly unreliable in silicon oil filled eyes 2 and in eyes with retinal detachment. The IOL power could have been calculated with some degree of accuracy in only 19 eyes out of the 65 eyes in this study. The authors mention that the other eye was used as a guide for IOL power calculation where the power could not be calculated in the eyes to be operated. Were the fellow eyes anatomically normal in all these cases? Was there a history suggestive of anisometropia? Why did the authors not do a primary posterior capsulotomy in all cases? This would have prevented any posterior capsular opacification. Was capsulotomy done in any case at the time of silicon oil removal? A clear visual axis is important in the postoperative period to visualize retinal status and for further laser, if necessary, prior to silicon oil removal. The main purpose of doing a combined phacoemulsification and vitrectomy is to be able to implant a lens so as to avoid the refractive problems of aphakia. If the IOL power calculation is not carried out accurately and it leads to surprises which are equivalent to the errors seen in aphakia, the whole exercise seems to be futile. We have been doing combined surgeries for many years. Our protocol for these difficult cases is as follows: We generally do not do primary IOL implantation in any eye with a retinal detachment. At least a rim of the posterior capsule is maintained in all cases after phacoemulsification. The retinal procedure is carried out and subsequently, at a later date, if the retina is attached, the IOL is inserted together with silicon oil removal, if oil was used for tamponade. The IOL power is calculated using the modification as suggested. 3 In eyes where gas is used as a tamponade, a secondary IOL is inserted after three to four months and IOL power calculation is carried out using standard formulae.
2018-04-03T03:45:01.368Z
2007-11-01T00:00:00.000
{ "year": 2007, "sha1": "5c3e33a41e543aba88cda62178d334ab1d29587b", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0301-4738.36496", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "5eee3e170b7962ddf60ad4993f6609b0a9f09349", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
40777651
pes2o/s2orc
v3-fos-license
Control Schemes For Distribution Grids With Mass Distributed Generation This project discusses the control schemes for distribution grids with a large amount of wind penetration. Microgrids are constantly gaining popularity, especially in the countries, where there is energy crisis. Various systems, including synchronous generators, grid and loads, have been investigated in this project. Major focus is placed on active and reactive power sharing. Droop control for multiple synchronous generators is explored. The phenomenon of load transients has also been reviewed and associated simulations have been carried out on SimPower Systems. Constant wind power has been introduced and behaviour of the electrical system is observed. Behaviour of the system under variable wind has also been analysed. Moreover, recent development projects and previous works, regarding microgrids and distributed generation, have been discussed. Acknowledgements Firstly, I would like to thank my supervisor, Professor Greg Asher, for his invaluable help and constructive advice, for his patience, enthusiasm and immense technical knowledge. I would like to express my heartiest gratitude to him whose wide range of expertise and knowledge has helped me make this project a success. My special thanks are also extended to Dr. Serhiy Bozhko, Mr. Sung Oe and Mr. Shuai Shao for their numerous guidelines during my project. Last, but not the least, I would like to express my deepest love and appreciation to my family, especially, my parents and grandparents, for their never-ending love, constant support and words of comfort. Critical Loads: These are the loads which are dependent on microgrid. In other words, the loads to which the micro-source supplies power are called critical loads. They are also called sensitive and non-traditional loads. Non-Critical Loads: These are the loads which are cut off from the system when microgrid starts delivering power. They are also known as non-sensitive and traditional loads. INTRODUCTION This chapter will introduce the background of the project, the main aims and the objectives and how they will be achieved. Thesis structure and layout is also presented. Introduction As the power demand is increasing day by day, the importance of producing more energy cannot be neglected. In this regard, micro-grids, also known as distributed grids, are of utmost importance. They are a good means of providing energy to the network in case the main grid fails. This project deals with the control schemes for distribution grids with mass distributed generation. It is likely that in near future, there will be more reliance on distributed generation, especially at low voltage (LV) and medium voltage (MV) levels. Such systems will increase the overall stability and reliability of the power network and at the same time, will provide more efficient performance. In simple words, if there is any fault on main grid and it is tripped, the micro-grid does not disrupt power flow and continues to supply power to the consumers. Aims and Objectives The major aim of the project will be to develop the control methods for micro-grids under high levels of distributed generation which, in case of this project, is wind power. To achieve this aim, the software called SimPower Systems will be used. It is basically a part of MATLAB software. Several basic systems will be built and then, the final modeling will be shown. The concept of master-slave and droop-based control of multiple synchronous generators will be investigated. 
Load transients will also be observed. Finally, the wind power will be introduced in the system and active and reactive power sharing will be explored. At each stage of the project, behavior of different parameters like voltages, currents, active power, reactive power, speed etc. will be observed and discussed. Wind power will be used as distributed generation source. Literature [7] has proved that wind power is the first preferred choice for micro-source. Previous research results [13] have shown that distributed generation (DG) could have significant impact on the stability of the transmission system, especially at large penetration levels, where penetration is DG power as a percentage of total load power. Thesis Structure and Layout The first chapter will deal with the general introduction about the project. Major aims and objectives will also be mentioned. The second chapter presents research work carried out in the past regarding microgrids and their control schemes. The theoretical background and recent development projects, regarding microgrids and distributed generation, are also covered in this chapter. The third chapter will give some insight into simulations of two major networks. The first network will consist of the main grid, a synchronous generator and various loads. The second one will include two synchronous generators and loads. Behaviour of various electrical quantities will be observed. Active and reactive powers for generators and loads will also be explored in this chapter. The fourth chapter will look into the droop control of these generators and how active and reactive power is shared between them according to the droop coefficients. The fifth chapter will give some idea about the behaviour of the system when wind power (source of distributed generation) is introduced into the system. The sixth (and the last) chapter will conclude the project with insight into guidelines for future work regarding microgrids. LITERATURE REVIEW AND BACKGROUND INFORMATION This chapter reviews some research work and recent development projects regarding microgrids and distributed generation. Detailed information, related to various issues and factors of microgrid and distributed generation, is also elaborated. Literature Review and Background Information Microgrids provide more efficiency and reliability in the power network. Efficiency can be considerably enhanced using CHP (Combined Heat and Power) techniques. Piagi and Lasseter [1] have found that during control mode of micro-grid, there must be some energy balance between power supply and demand. This is possible by dispatching generators and/or loads. They [1] have proposed a micro-source control scheme for facilitating seamless mode transfer between grid and micro-grid. It means in case of a fault, power can be rapidly detached from grid and shifted to micro-grid. The authors [1] have used power-frequency droop to implement this. Similar idea will be used in the present project to investigate power sharing between the two synchronous generators. They have carried out a case study on University of Wisconsin microgrid to demonstrate the practical implications of island and grid connected modes. [1] S. Krishnamurthy et al. [2] have studied the operation of diesel engine fed synchronous generator sets as distributed generation sources. They found that controlling the reactive power-voltage droop characteristics is very essential; otherwise large reactive currents would flow in the synchronous generators. Ling Su et al. 
[3] carried out studies on the micro-grid control schemes using two micro-turbines as micro-sources. They slightly changed the way the droop parameters are designed. They enforced a limit on the active power output so that when power is at the maximum value, frequency is at its minimum value and vice versa. Yong Xue et al. [4] designed a control scheme to control the active power in a distributed generation unit in grid-connected mode. Similarly, Basak P et al. [9] have discussed the control techniques for island mode of micro-grids. One major conclusion drawn from their work is that when micro-grid is operating in island mode, all non-critical (traditional) loads are eliminated automatically. Zhang Jie et al. [10] have used the inverter control technique for micro-grid control. This is the most common control technique. They also studied the transition between grid-connected and grid-disconnected mode. They have used Voltage Source Inverter (VSI) as micro-source. Wei Huang et al. [11] have studied seamless transfer between grid-connected and disconnected mode using SCR trigger control. Jia Yaoqin et al. [5] have carried out research on standalone micro-grid at low voltage. They used parallel inverters as sources of distributed generation. They have used an improved method of droop control to stabilize the output voltage which results in small reactive currents. Rowe C et al. [8] have designed the power-frequency droop using a voltage PI controller in the direct axis of rotating reference frame. Huang Jiayi et al. [12] have discussed the current situation and projects about micro-grids in Europe and Japan. In Portugal, a micro-grid has been developed which is supplied by a LV feeder from a distributed power station of 200 KVA. Similarly, in Japan, an organization has started three research projects in this area. In one of these, they have used fuel cells as microsource. A vital part of microgrid operation consists of energy management and controlling scheme in islanded mode. Literature [14] has put forward the technique of super capacitor and battery combination for this. Flywheel may be used in some cases. [11] The separation device (marked as SD) in Figure 2.1, taken from [11], is basically a static switch. Figure 2.1 Typical Distributed Grid System When it is open, we have what we call as the island mode or stand-alone mode. Under this mode, all traditional loads (non-critical) are eliminated and the micro-source powers the sensitive (critical) loads only. The details about microgrid are presented in Section 2.3. Definitions According to IEEE, distributed generation is the "generation of electricity by the facilities that are sufficiently smaller than central generating plants so as to allow interconnection at nearly any point in a power system". According to Dondi et al., distributed generation is the generation of electrical power using a small source which is not part of the large central power system and which is located in close vicinity of the load. Similarly, according to Ackermann et al., distributed generation is the generation of electrical power from the source which is directly connected to the distribution network. It is also known as dispersed or embedded generation in USA. In some European countries, it is called decentralized generation. It must be noted that distributed generation should not be confused with renewable energy because distributed generation may include renewable or non-renewable technologies as will be explained later on. 
According to Arthur D.Little, "Distributed generation is the integrated or stand-alone use of small, modular electricity generation resources by utilities, utility customers and third parties in applications that benefit the electric system, specific end-user customers or both." [19]. Why to use Distributed Generation? There are many factors which contribute in the attraction towards distributed generation. Firstly, it gives a vital opportunity to different players in the electricity market to provide electric power according to customer requirements. In other words, there is more flexibility in providing electricity. Secondly, they can provide electric power to local loads, hence, saving the costs of grid connections to remote and geographically challenging areas. They can also provide grid support, for instance, stabilizing a dropping frequency (due to some fault or excess load current). Distributed generation is environmentally beneficial too. It is because most of the distributed generation technologies include renewable sources like wind, solar etc. They can produce combined heat and power when CHPs are used. This improves the efficiency significantly. [18] The trend towards using distributed generation has increased significantly in the last few years. Types of Distributed Generation There are many types of distributed generation [17], [20]. Some of them are explained below: Reciprocating Engines: This technology is one of the oldest distributed generation technologies. The engines commonly use natural gas or diesel as their fuel source. The design of the engine is particularly important for increased efficiency and reduction in the emission nitrous oxides. Wind Power: Wind turbines offer a relatively cheap method to provide electricity. They have low CO2 emissions and low pollution effects. The major hurdle in their use is that wind is unpredictable and hence, we are not sure of the power output. Secondly, wind turbines require a lot of maintenance regarding gearbox and rotor. They are also hazardous for birds and bats. In order to counter-act the unpredictable nature of wind velocity, battery storage systems are being incorporated into the networks to continue the supply of electricity when the turbine blades are not rotating. Voltage regulators help these batteries to charge and store energy. Micro-turbines: Further, the distributed generation sources can also be categorized as renewable and non renewable technologies. Renewable technologies consist of solar, wind, geothermal etc. while non renewable technologies include fuel cells, micro-turbines, combustion turbine, and internal combustion engine. [21] Typical power capability ranges of various DG sources are shown in Table 2.3 which is taken from [21]. Applications of Distributed Generation DGs have a wide range of applications. They can be used as a standby source of power in case the main grid fails. It can then supply power to sensitive loads in a normal way. This is commonly called stand-by operation mode. Isolated areas can use distributed grids as it might not be feasible to connect the grid in those geographically challenging areas. Distributed Generation Challenges The types of challenges [22] which are being faced by DG can be divided into two major classes: Technical Challenges: They pertain to the issues like safe operation, emergency operation (island mode), power quality and stability issues. 
Non-technical Challenges: They relate to the factors like cost of purchase, insurance and setting standards for interconnection. Major Policy Issues There are some major policy issues regarding distributed generation. Firstly, they are expensive to install and maintain. Their protection schemes are relatively difficult to design. In the event of islanding, special considerations must be taken into account such as no power is being supplied to the grid at the instance of outage. Secondly, some DG units use induction generators which are not capable of producing reactive power in the system. We know that power usually flows unidirectional from high voltage level to a low one i.e. from transmission grid to distribution grid. Increased number of DG units may result in power flows from the low voltage to medium voltage grid. Some DG technologies produce direct current, so, these units must be connected to the grid using inverters and DC-AC interfaces. This can give rise to harmonics which disturbs the quality of output power. It is essential to filter these harmonic distortions properly otherwise they may affect the operational capabilities of the load. [18] Microgrids According to the U.S. Department of Energy, microgrid is "a group of interconnected loads and distributed energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid (and can) connect and disconnect from the grid to enable it to operate in both grid-connected or island-mode." The basic function of micro-grid is to maintain stable operation under various faults and factors which can disturb the network stability. In simple words, it provides power to the local load. In other words, micro-grid is a smaller version of main grid which operates independently. [23], [24] Modes of Operation Micro-grid has two modes of operation which are highlighted below. [25] Grid Connected Mode Under this mode, main grid is active. Static switch is closed. All feeders are being supplied by main grid. In other words, critical loads (on Feeders A, B & C) and non-critical loads (on feeder D) are being supplied by the main grid. Figure 2.4, taken from [25], displays this scenario. Grid Disconnected Mode It is also called island mode. Under this condition, main grid is cut off and is not supplying Static switch is open. Feeders A, B and C are supplied by micro-sources. Feeder D is dead as it is not sensitive. In other words, critical loads are being supplied power by the microsources and non-critical loads (on Feeder D) are cut-off from the system. Figure 2.5, taken from [25], displays this situation. Figure 2.5 Grid-Disconnected Mode [25] Basic micro-grid structure, taken from [12], is shown in Figure 2.6. Referring to Figure 2.6, fuel cell is the source of distributed generation and is supplying power when the static switch opens. Circuit breaker is incorporated to trip in the event of fault. Types of Micro-grid There are many types of micro-grids some of which are explained below. [26], [28] Remote grids: These are the grids which have their existence due to the rugged and severe topographical features of landscape. They are used for locations which are quite far away from the power grid. They are employed when it is not possible to connect all loads to a single main grid. In other words, these grids are used to supply power to geographically challenging areas. Military grids: They are also known as security grids. 
They are used to keep a record of important data pertaining to security etc. Commercial grids: They are also called industrial grids. They refer to the grids which cater for the energy needs of large industries and factories. They are employed due to their increased security and reliability. They are most commonly used in chemical industry and chip manufacturing industry. Community grids: They also known as the utility grids. They are the most commonly used now-a-days. They refer to the grids which cater for the energy needs of domestic consumers. (2) Plug-and-play (Utilization of waste heat from gases to improve efficiency). (3) Smooth seamless transfer between island mode and grid-connected mode. Stages of Operation Micro-grids have four stages of operation [6]: (1) Transient stage of going to grid-connected mode. (3) Transient stage of going to island mode (grid-disconnected mode). Differences between Microgrid & Main grid Literature [6] has pointed out some major differences between traditional main grid and microgrid. These are: (1) To the main network, microgrid is a modular controllable unit, just like a generator or load, which can be scheduled. But to the consumers, it is an autonomous system, which can satisfy different power quality requirements from different loads. It basically acts as a source of power to local loads. (2) There are many unconventional generators in a microgrid like wind power, photovoltaic or fuel cells. (3) Besides power, microgrid has the ability to supply heat. It can utilize waste heat using CHPs, hence, giving a rise to the overall efficiency of the network. Advantages of Microgrids Some advantages of using micro-grids are [17], [24], [26], [27]: (1) They lessen the price of the electricity by providing their power to the main grid. This results in sufficient reduction of required power demand. (2) Combined heat and power distributed generation can utilize the waste heat from gases for increasing the overall efficiency of the system. (3) They can easily be manufactured in the form of modules and hence, can be installed quickly at any suitable location. (4) As there are many DG technologies, there is no need of using a particular kind of fuel or DG source. Hence, there is a large variety and choice in the use of micro-sources. (5) They increase system reliability in case of a fault on the main grid. They ensure continuity of supply to the consumer end. It is practically made possible by the rapid transfer between gridconnected and island mode. (6) They employ renewable sources like wind and PV as micro-sources. Hence, they don't pose serious threats to environment. They are environmentally friendly. (7) In the long term, this area is a potential job creator for micro-grid construction, installation and maintenance. (8) As the micro-grid involves placing of power sources close to the load, there are minimum transmission and distribution losses. Key Issues of Microgrids There are some key issues regarding micro-grids which must be addressed. [24], [25]: ✓ It is very vital in a power system to balance the real and reactive power. Such imbalance happens when power generated is not equal to the power demand. This can cause the system frequency to drift from its nominal value. This imbalance can be dealt with the help of frequency and voltage control. That is why, active power-frequency droops and reactive power-voltage droops are used. ✓ Another issue regarding the stability of micro-grids is their operation under island mode. 
The micro-grid must provide power to all the loads which are controlled by the micro-sources. ✓ Resynchronization with the main grid. This is difficult and requires a lot of care, because the voltages and frequency, before closing the switch, must be equal to the steady-state values of the voltages and frequency. Recent Development Projects regarding Microgrids This section gives a brief description of some of the recent microgrid development projects in various countries: Japan In 2003, the New Energy and Industrial Technology Development Organization and the Ministry of Economy, Trade and Industry, Japan started three practical projects which dealt with introducing renewable energy resources into the local power grid. The main target they achieved was the design of a system with efficient control and reliability. As this was not economically feasible, some methods to improve the economics of the designed system were also presented. [24] Canada In Canada, micro-grid R&D has done some work at the medium voltage level. Here, the emphasis has been placed on designing and developing new protection and control techniques and investigating them in detail, especially when there is a large input of wind power into the system. Both modes of operation, i.e. grid-connected mode and island mode, have been examined. [35] India Similarly, in India, a hilly area by the name of Alamprahu Pathar (in the state of Maharashtra) was selected as a potential site for carrying out a micro-grid project. This site has good generation of wind power, and construction of a micro-grid here can benefit nearby sugar industries as well as the domestic and agricultural consumers in the surrounding area. [35] China In Xiamen, China, a microgrid working on direct current will start functioning by the end of this year. Lithuania In recent years, some trends towards distributed generation have been seen in Lithuania. In the last decade, the country has developed many combined heat and power plants to tackle the microgrid issue. However, there are still some major hurdles before the system of microgrid generation can be fully implemented in this country. Appropriate legislation needs to be enacted regarding this. There are also some technical barriers which prevent this; for example, small generators need to meet some criteria which are set by the network operator. There are also some issues related to connection validation time. If these issues are resolved, this country can progress rapidly in the energy sector. [37] SIMULATIONS INVOLVING MAIN GRID AND SYNCHRONOUS GENERATORS This chapter presents simulation work involving the grid, synchronous generators and various loads. The behaviour of various electrical parameters is observed. Some insight into load transients is also provided. SimPower Systems The software which is used to carry out the simulation work is called SimPower Systems. SimPower Systems is basically a part of the MATLAB library. It is a very useful tool for simulation. Choosing the Solver Simulink has a variety of solvers for solving an electrical network. The solver basically determines the step time of the simulation and the way the simulation proceeds. This choice depends on various factors. ODE45 is the most commonly used solver. It uses the Dormand-Prince Runge-Kutta integration technique for evaluating step times. Its accuracy is also quite reasonable. In case the problem or the network is stiff, it is recommended not to use ODE45.
A stiff network is one whose differential equations cannot be solved efficiently by a standard method and are numerically unstable. If such is the case, ODE23 is preferred; it is relatively accurate and faster when the circuit is stiff. There are some other solvers, such as ODE15s and ODE113, but they are less common and are used in special cases and in problems which are computationally demanding. [30]
Simulation of a network involving the main grid and a synchronous generator
First, the system consisting of a main grid, a synchronous generator and a resistive load is investigated. The block diagram is shown in Figure 3.1. At t=0, the generator is connected to the system and it starts to operate as soon as the simulation starts. The load is resistive and is adjusted to absorb a load power of 1 MW. The line-to-line bus rms voltage is nearly 11 kV. The speed of the synchronous generator is 157 radians/second, because the main grid (three-phase voltage source) sets the frequency of the generator in this case, and hence the generator runs at 157 radians/second. In technical terms, the grid is acting as the "Master" and the synchronous generator as the "Slave". The three-phase measurement block is used for measuring the currents and voltages in the three lines. It is connected in series with a three-phase element and can measure phase-to-ground as well as phase-to-phase voltages and currents. The three-phase voltages and currents can easily be converted to the αβ frame for calculating grid active and reactive powers.
Simulation Results and Discussions
The network in Figure 3.1 is simulated for 15 seconds and the solver used is ode23tb. The formula used to calculate the speed of a synchronous generator is Ns = 120f/P, where P is the number of poles and f is the system frequency. In the given system, f is 50 Hz and the number of poles is 4, so the synchronous speed is 1500 rpm (157 radians/second). The synchronous generator takes in 0.5 MW of power; in other words, we are forcing it to produce 0.5 MW at the output. The load is already adjusted to absorb 1 MW, so the grid must produce 0.5 MW to keep the powers balanced. The results are as expected: the active and reactive powers of the grid (three-phase voltage source), the synchronous generator and the load were scoped and observed, and it was found that the active powers of the grid and the synchronous generator are 0.5 MW each, which sums to the load active power (preset at 1 MW).
Graphical Results regarding Active and Reactive Power
The graphical results of active and reactive powers across the generator and the grid are shown in the corresponding figures.
Discussions regarding Active and Reactive Power
Let us first observe the behaviour of active and reactive power across the grid, generator and load. The load reactive power is zero as it is a purely resistive load. Hence, the reactive power generated by the synchronous generator is being cancelled by that of the grid; in other words, the synchronous generator is supplying kVARs (reactive power) and the grid is absorbing them.
Associated Graphical Results
Some Investigation of Load Angle
The load angle is the angle between the bus voltage vector V and the no-load voltage vector E. It is commonly denoted by δ. It takes a very long time to obtain a steady-state value of this load angle for the system shown in Figure 3.1. This problem is discussed in Appendix A2.
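To make the αβ-frame power calculation and the synchronous-speed formula above concrete, here is a minimal Python sketch. It is not taken from the thesis' Simulink model: the current amplitude and the amplitude-invariant Clarke scaling used below are illustrative assumptions, chosen only so that the numbers roughly match the 1 MW resistive load and the 11 kV bus discussed above.

```python
# Minimal sketch (not the Simulink model): grid P and Q from balanced three-phase
# waveforms via the amplitude-invariant Clarke (alpha-beta) transform.
import numpy as np

f = 50.0                      # system frequency [Hz]
V_LL = 11e3                   # line-to-line rms voltage [V]
Vm = V_LL * np.sqrt(2 / 3)    # peak phase voltage
Im = 74.0                     # peak phase current [A], picked so P is roughly 1 MW
phi = np.deg2rad(0.0)         # current lag angle (0 => purely resistive load)

t = np.linspace(0, 0.04, 2000)                      # two 50 Hz cycles
w = 2 * np.pi * f * t
va, vb, vc = (Vm * np.cos(w + k) for k in (0, -2 * np.pi / 3, 2 * np.pi / 3))
ia, ib, ic = (Im * np.cos(w - phi + k) for k in (0, -2 * np.pi / 3, 2 * np.pi / 3))

# Amplitude-invariant Clarke transform
v_alpha = (2 / 3) * (va - 0.5 * vb - 0.5 * vc)
v_beta = (1 / np.sqrt(3)) * (vb - vc)
i_alpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
i_beta = (1 / np.sqrt(3)) * (ib - ic)

# Instantaneous p and q; constant in time for balanced sinusoids
p = 1.5 * (v_alpha * i_alpha + v_beta * i_beta)
q = 1.5 * (v_beta * i_alpha - v_alpha * i_beta)
print(f"P = {p.mean() / 1e6:.2f} MW, Q = {q.mean() / 1e3:.1f} kVAr")

# Synchronous speed of a 4-pole machine on a 50 Hz grid: Ns = 120 f / P
Ns_rpm = 120 * f / 4
print(f"Ns = {Ns_rpm:.0f} rpm = {Ns_rpm * 2 * np.pi / 60:.0f} rad/s")  # ~1500 rpm, ~157 rad/s
```

For the purely resistive case this gives approximately 1 MW of active power and essentially zero reactive power, consistent with the simulated load; note that sign and scaling conventions for the Clarke transform vary between texts.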
Behaviour of load angle for the system involving a synchronous generator and a resistive load
Now we shall investigate the trend in load angle in a system which consists of the synchronous generator and a resistive load. The block diagram is shown in Figure 3.12; the actual Simulink diagram is shown in Appendix A3.
Results and Discussions
Values of load resistance were varied and the corresponding load currents and load angles were observed. The results are shown in tabular form in Table 3.13 (trends in load angle with variation in load resistance). As evident from Table 3.13, as the load resistance is increased, the load current decreases and the load angle also decreases. This is because the smaller the current, the smaller is the jIXs vector, and hence the smaller the angle between the E vector and the bus voltage vector V. Figure 3.14 (a phasor diagram depicting the relation between load angle δ and current I) shows this phenomenon.
Behaviour of load angle for the system involving the grid, a synchronous generator and a resistive-inductive load
Now we shall investigate the trend in load angle for a system which consists of the grid, the synchronous generator and a resistive-inductive load. The block diagram is shown in Figure 3.15; the actual Simulink diagram is shown in Appendix A4.
Results and Discussions
The load inductance was kept constant at 0.01 H. Values of load resistance were varied and the corresponding load currents and load angles were observed. The results are shown in tabular form in Table 3.16 (trends in load angle with variation in load resistance). As evident from Table 3.16, as the load resistance is increased, the load current decreases and the load angle also decreases, again because the smaller the current, the smaller is the jIXs vector and hence the smaller the angle between the E vector and the bus voltage vector V.
Investigation of a system containing two synchronous generators and resistive/inductive loads
Now the system consisting of two synchronous generators and resistive/inductive loads will be studied. The block diagrams of the system are shown in Figures 3.17 and 3.18; the actual Simulink diagram is shown in Appendix A5.
Why are Synchronous Generators Operated in Parallel?
There are many reasons for operating synchronous generators in parallel. Firstly, it increases the reliability of the power network: if any generator has to be taken out of the existing system due to a fault, the load does not collapse completely and the other generator can still supply power to the load to some extent. [31] Secondly, it allows more flexibility in the system. If one generator is to be removed from the system for maintenance or repair, this can easily be done. Also, paralleling does not place too much load on any individual generator. Last but not least, several generators together can supply a much bigger load than one machine by itself. [31]
Conditions for Paralleling
There are some criteria which need to be satisfied before generators can be connected in parallel. They must have the same rms line voltages, the same phase sequence and the same phase angles. The frequency of the generator to be added (usually called the oncoming generator) must be slightly higher than the frequency of the running system. [31]
Simulations and Results
For the systems in Figures 3.17-3.18, the simulation time used is 5 seconds and the solver is ode23tb. Both generators are identical, each rated at 1.5 MVA and 11 kV (line-to-line rms).
The load is resistive and is chosen in such a way that it receives an active power of 1 MW. In this case, SG 1 is acting as the "Master" and SG 2 as the "Slave"; in other words, SG 1 sets the speed of SG 2. Firstly, the switch is off (Figure 3.17), SG 2 (Synchronous Generator 2) is run at zero power and SG 1 (Synchronous Generator 1) is made to run at a synchronous speed of 157 radians/second. Here, the load power (1 MW) equals the power produced by SG 1 (1 MW). After that, SG 2 is run at 0.5 MW and power sharing is observed. In this case, SG 1 also produces 0.5 MW to keep the active power across the load at 1 MW. We can produce any amount of power (within limits) from SG 2 to see the power sharing; for example, if SG 2 is forced to run at 0.7 MW, SG 1 will produce 0.3 MW, and so on. The reactive powers produced by SG 1 and SG 2 are -21 kVARs and 21 kVARs respectively; the net reactive power is zero as the load is purely resistive. The graphical results for the active and reactive powers of both synchronous generators are shown in the corresponding figures.
Load Transients
Load transients and their characteristics were also observed for the system of Figure 3.18, where, for example, a resistive or inductive load comes into the system after some preset time. In the network considered, when the additional inductive load is switched in after the preset time, the total reactive power is doubled, i.e. it becomes 500 kVARs.
CHAPTER 4 DROOP CONTROLLED SYNCHRONOUS GENERATORS
This chapter discusses the droop control of multiple synchronous generators and how active and reactive power is shared between them. Before that, some theoretical background is discussed.
Droop Control Background
The concept of droop control is vital when there is more than one synchronous generator connected to the system. Consider the graphs, taken from [32], shown in Figure 4.1. The well-known equations used to describe these graphs are:
f = f0 - mP    (4.1)
V = V0 - nQ    (4.2)
In these equations, f0 and V0 are the nominal frequency and output voltage magnitude, i.e. the values of frequency and voltage at no load. The symbols 'm' and 'n' are the frequency and voltage droop coefficients respectively. [33] Normally, the differences (f0 - f) and (V0 - V) are allowed to lie within 2% and 5% of the nominal values, respectively. It must be noted that equations (4.1) and (4.2) hold for mainly inductive impedance, which is usually the case. If the impedance is capacitive or resistive, active power depends on voltage and reactive power depends on frequency, i.e. the opposite of the inductive case. [33]
All synchronous generators have a source of mechanical power, called the prime mover. The most common type of prime mover is the steam turbine, although gas turbines, water turbines and diesel engines are also used. Irrespective of the type of prime mover, they follow a similar trend: as the power drawn from them increases, the speed at which they operate decreases. This phenomenon is commonly called the speed droop of the prime mover, and it is defined mathematically by:
Speed droop = (X - Y)/Y
where X is the no-load prime mover speed and Y is the full-load prime mover speed. [34]
Simulations for Droop Control
The simulations were carried out for the system shown in Figure 4.2 in order to observe active and reactive power sharing between the generators. The higher these droop gains (or slopes), the smaller is the active or reactive power contribution of that synchronous generator. The active and reactive powers always sum to the load active and reactive power in each case, implying that the system is working correctly.
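As a small numerical illustration of equations (4.1)-(4.2) and of how droop gains set the steady-state sharing, here is a minimal Python sketch. The gain values and load level are made-up assumptions, not the ones used in the Simulink model.

```python
# Minimal sketch of steady-state P-f droop sharing between two generators feeding
# a common load. At steady state both machines settle at the same frequency f, so
#   f = f0 - m1*P1 = f0 - m2*P2   and   P1 + P2 = P_load.
f0 = 50.0                    # no-load frequency [Hz]
m1, m2 = 0.5e-6, 1.0e-6      # droop gains [Hz/W]; a larger gain means a smaller share
P_load = 1.0e6               # total load active power [W]

# Solving the two equations above for P1 and P2:
P1 = P_load * m2 / (m1 + m2)
P2 = P_load * m1 / (m1 + m2)
f = f0 - m1 * P1
print(f"P1 = {P1 / 1e6:.2f} MW, P2 = {P2 / 1e6:.2f} MW, f = {f:.3f} Hz")
```

With these numbers, the machine with the smaller droop gain picks up the larger share (about 0.67 MW against 0.33 MW), and the frequency settles roughly 0.33 Hz below its no-load value, comfortably inside the 2% droop band quoted above.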
Results and Discussions
For Vo = 7000 V, the network in Figure 4.2 was simulated (for 5 seconds with ode23tb as the solver) with an appropriate reactive power-voltage droop control scheme (shown in Appendix B1). Some simulation results for investigating reactive power sharing are shown in Table 4.
Variations in Inductive Load
Regarding the reactive power-voltage droop, the inductive load was varied and the reactive powers of the loads and generators were observed. It was found that the higher the load inductance, the higher the reactive power supplied by the generators, and their sum equals the load reactive power in each case. Some simulation results are shown in Table 4.
CHAPTER 5 SIMULATIONS INVOLVING WIND POWER
In this chapter, wind power is introduced into the system containing two synchronous generators and loads, and the trends of active and reactive powers are investigated.
Modelling
The block diagram which needs to be simulated is shown in Figure 5.1; the actual Simulink diagram is shown in Appendix C1. The scheme used to model the wind power is shown in Appendix C2. It basically introduces currents Id and Iq into the power system; these currents translate into active and reactive power injections into the system, respectively. Some basic information about types of power converters is covered in the next section.
Types of Power Converters in Microgrids
There are three common types of power converters used in microgrids [38]: grid-feeding, grid-forming and grid-supporting power converters.
Grid-feeding Power Converters
These are the most common type of power converters used in microgrids. They can be modelled as a current source in parallel with a large impedance. They cannot operate in island mode unless there is a local synchronous generator which sets the voltage and frequency of the microgrid. [38]
Grid-forming Power Converters
These are modelled as ideal AC voltage sources with a known frequency and amplitude. A standby UPS is a common example of this converter. [38]
Grid-supporting Power Converters
These converters are also modelled as AC voltage sources, interfaced to the grid through an impedance. The active and reactive powers of this converter depend on the AC voltage of the grid and of the voltage source, as well as on the impedance through which the converter is linked to the grid. [38]
Relationship between Currents and Active/Reactive Powers
The active and reactive power produced by the wind depends on the currents Id and Iq: the higher the Id, the higher the active power input into the system via wind, and vice versa. Similarly, the higher the Iq, the higher the reactive power, and vice versa.
Simulation Work and Results
The Simulink model for Figure 5.1 was simulated for a purely resistive load. The load resistance is 121 ohms, the simulation time is 5 seconds and the solver is ode23tb. In this case, Id = 20 A and Iq = 0, and active power sharing was observed; the active powers produced by the generators and the wind injection were recorded. Then, with Id = 0.1 A and Iq = 0.1 A, it was observed that as Iq was increased, the reactive powers produced by the synchronous generators also increased, and their sum was balanced by the reactive power absorbed by the load. This is shown in tabular form in Table 5.
Variable Wind
In order to see the effect of variable wind on the system and how active powers are shared, Id was input as variable wind data (given in tabular form), with Iq set to zero. Referring to Figures 5.10-5.11, a considerable amount of initial transients can be seen.
One possible reason for the transients is that the generator is not doubly fed: it has a simple rotor winding which is fed by direct current, and is therefore only partially controllable, as opposed to doubly fed induction machines, which are fully controllable because their rotor windings are fed with alternating current.
Relationship between Iq and Reactive Power
In order to observe the relation between Iq and the reactive power of the generators, Iq was applied as a step input instead of a constant. The block diagram for the network is shown in Figure 5.12; the simulation time is 5 seconds and the solver is ode23tb. It was observed that when Iq was doubled (from 15 A to 30 A) at a step time of 3 seconds, the reactive power of each of the synchronous generators also doubled. This is shown graphically in Figures 5.13 and 5.14. In other words, the reactive power produced by each of the synchronous generators is directly proportional to the amount of reactive current injection.
FUTURE WORK AND CONCLUSIONS
In this chapter, the project is concluded with some guidelines regarding future work.
Future Work
Some aspects of microgrids which could be probed further in the future are:
• In-depth investigation and evaluation of various frequency and voltage control techniques under grid-connected and grid-disconnected modes.
• Close observation of the transition from grid-connected mode to grid-disconnected mode and vice versa, including under high penetration of wind power.
• Researching, designing and implementing protection schemes for microgrids.
Conclusions
The project presented the major simulations leading towards a microgrid implementation. Theoretical background and various development projects regarding distributed generation and microgrids were presented. Active and reactive power sharing, including droop control, was observed for multiple synchronous generators. The phenomenon of load transients and the concept of Master-Slave operation were also investigated. Moreover, the behaviour of the system was studied under wind penetration. Overall, the project was a success and nearly all of the objectives were met. In short, micro-grids are an effective way of providing electric power to consumers without disruption. The technology is relatively new and has so far been implemented in a few countries such as the USA, Japan and Canada, but in the long run it will benefit all kinds of consumers, and the technology is here to stay. Energy-sector experts should spread know-how about this technology and help people appreciate the importance of using it.
Evaluation of the effect of pitavastatin on motor deficit and functional recovery in sciatic nerve injury: A CatWalk study
Objectives: This study aims to investigate the electrophysiological, scintigraphic, and histopathological effects of pitavastatin and its impact on functional status in rats with sciatic nerve injury.
Materials and methods: A total of 30 Wistar albino rats were divided into three equal groups of 10 rats each: a sham group (no injury), a control group (nerve injury induced), and a pitavastatin group (nerve injury induced and 2 mg/kg of pitavastatin administered orally once a day for 21 days). Before and at the end of the intervention, quantitative gait analysis with the CatWalk system and sciatic nerve conduction studies were performed. After the intervention, the gastrocnemius muscle was evaluated scintigraphically, and the sciatic nerve was examined histopathologically.
Results: There was no significant difference in sciatic nerve conduction before the intervention and on Day 21 among the groups (p>0.05). According to the quantitative gait analysis, there were significant differences in the control group in terms of the individual, static, dynamic, and coordination parameters (p<0.05). The histopathological examination revealed a significant difference in the total myelinated axon count and mean axon diameter among the groups (p<0.001).
Conclusion: Pitavastatin is effective in nerve regeneration and motor function recovery in rats with sciatic nerve injury.
Peripheral nerve injury is a common disorder which may lead to disability. In addition to the primary injury, the affected perineural environment, the triggered inflammatory and immunological response, and oxidative stress can result in further damage. [1] An agent that is effective particularly in the secondary injury process could therefore improve recovery. Besides their cholesterol-lowering effects, statins also exhibit antioxidant, anti-inflammatory, immunomodulatory, and neuroprotective properties and pleiotropic activity. [2] There are many studies evaluating the effects of statins on the central nervous system, while only a few have investigated their effects on peripheral nerves. In studies on sciatic nerve crush injury, simvastatin, atorvastatin, and lovastatin have been shown to exert neuroprotective effects. [3][4][5] Similar to other statins, pitavastatin has anti-inflammatory, immunomodulatory, and antioxidant properties. Some pleiotropic properties of pitavastatin (e.g., suppression of vascular inflammation and oxidative stress through endothelial protection) have been found to be highly potent. [6] In addition, pitavastatin has better pharmacokinetic and pharmacodynamic efficacy and minimal drug-drug interactions compared to other statins. [6] Owing to these advantages of pitavastatin over other statins, and considering the lack of research in this area, in the current study we aimed to evaluate its effect on peripheral nerve injury and to examine the curative effect of pitavastatin using electrophysiological, scintigraphic, and histopathological methods and quantitative gait analysis in an experimental model of sciatic nerve injury.
MATERIALS AND METHODS
Animals
To the best of our knowledge, there were no previous similar studies using pitavastatin, and the CatWalk system had not been used in statin studies. Therefore, the sample size could not be calculated, since detailed data were not given in previous statin studies.
We arranged our study with 10 rats per group, as in the previous study whose duration and procedures were most similar to ours. [4] A total of 30 female Wistar albino rats weighing between 230 and 260 g were used in the study. The rats were kept on a 12-h light-dark cycle in standard rooms with controlled airflow and humidity, at a temperature maintained at 22 to 24°C. They were fed ad libitum with standard rat chow and tap water. After the surgical procedure, the animals were moved to separate cages.
Drug administration
Thirty rats were randomized one after another into three groups: sham group (n=10), control group (n=10), and pitavastatin group (n=10). In the sham group, only a skin incision was made and repaired; nerve injury was not induced and no medication was given. In the control group, nerve injury was induced, but no medication was given. In the pitavastatin group, after the induction of nerve injury, 2 mg/kg of pitavastatin (Alipza®, Recordati SaRL, Italy) was administered orally. The oral gavage, prepared by dissolving the drug in 0.5% methyl cellulose, was given to the rats using a 20-gauge blunt feeding needle.
Surgical procedures
After the induction of anesthesia with intramuscular ketamine hydrochloride (Ketalar®, Pfizer Pharmaceuticals, Istanbul, Türkiye) (87.5 mg/kg) and xylazine hydrochloride (Rompun®, Bayer Pharmaceuticals, Istanbul, Türkiye) (12.5 mg/kg), sterile conditions were achieved and the rats were placed in the prone position. Through an incision made between the knee joint and the ischial tubercle, the skin and subcutaneous muscle tissue were passed to reach the sciatic nerve. Using a microsurgical needle holder, axonal injury (axonotmesis) was induced in a 1-mm segment of the nerve, immediately proximal to the point where the left sciatic nerve trifurcates. Clamping was performed four times for 15 sec each, until the jaws of the needle holder were fully closed, with 5 sec between clampings. After the procedure, a translucent appearance was obtained in the crushed axon segment (Figure 1). The muscles were approximated with two 5/0 poly(glycolide-co-lactide) sutures (Pegelak®, Doğsan, Trabzon, Türkiye), and the skin was repaired with 4/0 polypropylene (Propilen®, Doğsan, Trabzon, Türkiye).
Electrophysiological evaluations
All the rats were evaluated electrophysiologically under anesthesia before surgery and on Day 21 after the intervention. Sciatic nerve conduction studies were performed using the Neuropack M1 device (Nihon-Kohden Corp., Tokyo, Japan) with the following technical settings: stimulation rate, 1 Hz; sampling time, 100 µs; and filter frequency, 5 kHz for high-cut and 10 kHz for low-cut. The room temperature was set at 25°C, and the extremity temperature, measured with a digital needle thermometer, was kept at 34 to 36°C. The operation site was shaved. A bipolar stimulating needle electrode was placed on the left sciatic nerve, 10 mm proximal to the crush area, with the anode tip positioned distally. The monopolar recording needle electrode was positioned so that the anode electrode was in the middle of the gastrocnemius muscle and the cathode electrode was in the tendon. The ground electrode was placed on the back of the rats (Figure 2). The stimulation intensity was gradually increased until a supramaximal response was obtained from the sciatic nerve. The compound muscle action potential amplitude, distal latency, and nerve conduction velocity were recorded for the sciatic motor nerve.
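The conduction velocity itself was read out by the device in this study. Purely as a generic illustration of how a motor conduction velocity is derived from latencies, the following minimal Python sketch uses the textbook two-stimulation-site calculation with made-up distances and latencies; it is not based on this study's data or its single-site montage.

```python
# Minimal sketch (illustrative numbers, not data from this study): when two
# stimulation sites are available, motor conduction velocity is the distance
# between them divided by the difference of the two onset latencies, which
# removes the constant neuromuscular-transmission delay.
proximal_latency_ms = 1.45      # onset latency from the proximal site [ms]
distal_latency_ms = 1.05        # onset latency from the distal site [ms]
inter_site_distance_mm = 18.0   # distance between the two stimulation sites [mm]

ncv_m_per_s = (inter_site_distance_mm / 1000.0) / (
    (proximal_latency_ms - distal_latency_ms) / 1000.0
)
print(f"Motor conduction velocity ≈ {ncv_m_per_s:.1f} m/s")  # ≈ 45 m/s here
```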
Functional evaluations
Functional evaluation was undertaken with the CatWalk XT (Noldus Information Technology, Wageningen, the Netherlands), a quantitative gait analysis system used to automatically record the paw prints of mice and rats while walking. To teach the procedure to the rats, they were allowed to walk on the CatWalk daily for two weeks before the operation. The animals were placed at the head of a track made of a standard 6-mm thick glass surface with black plastic walls and were motivated to walk along the track by placing rewards at the end. After two weeks, all animals were able to complete the track without interruption, and their functional parameters were recorded before surgery. [7] On Day 21 of the intervention, the walking and recording procedures were repeated. A high number of parameters are measured by the CatWalk system, and only those frequently recommended for sciatic nerve injury in the literature were evaluated, owing to the limited data on the reliability and validity of the others. The print length (cm), print width (cm), and print area (cm²) were used as individual paw parameters. Using these parameters, the length, width, and area values were calculated throughout the entire stance phase, as if the paws were inked. The following dynamic paw parameters were evaluated: stance duration (sec), swing duration (sec), swing speed (cm/sec), and duty cycle (%) [stance duration/(stance duration + swing duration)]. From the static paw parameters, the maximum contact area (cm²), stride length (cm), and base of support (BOS) (cm) were used. The maximum contact area refers to the total floor area contacted by the paw during the stance phase. The BOS is obtained by taking the average width of the track made by the front paws and the hind paws. Among the coordination parameters, average run speed (cm/sec) and regularity index (%) were selected. The regularity index refers to the exclusive use of regular step patterns during uninterrupted locomotion. [7,8]
Scintigraphic evaluations
At three weeks after the intervention, muscle perfusion scintigraphy with Tc99m-methoxy isobutyl isonitrile (MIBI) was applied to all the rats under anesthesia to evaluate the perfusion of the gastrocnemius muscle. During the evaluation, a high inter-extremity index and a high calf retention index were considered good perfusion findings. [9] Scintigraphy was performed with the rats in the supine position using the MG dual-head SPECT gamma camera (General Electric Healthcare, WI, USA) for anterior-posterior imaging. A low-energy general-purpose collimator was utilized in all scintigraphic imaging procedures. Dynamic and static imaging was performed on a 256×256 matrix. Static images were acquired in the first 10 min during the early blood pool phase, starting simultaneously with the injection. Static blood pool images were acquired between 0 and 5 min in the early phase, and a 5-min late-phase static follow-up image was also obtained between 30 and 35 min. Qualitative and quantitative assessments were applied to the animals. For simultaneous imaging, scintigraphic imaging was initiated by an intravenous bolus injection of 5 mCi of MIBI into the tail veins of the rats. In the quantitative evaluation, regions of interest (ROI) were drawn and the results were obtained numerically. Images were taken bilaterally, and the healthy side was used as a control. For the evaluation, the count value in each calf region was calculated with symmetrical angular ROIs drawn on the acquired images.
The count values of the ROIs obtained from the static images for the right/left inter-extremity index calculation were determined separately for the right and left sides, and the inter-extremity and calf retention indices were then calculated from these count values.
Histopathological evaluations
On Day 21 of the study, after all the evaluations were completed, the animals were euthanized under deep anesthesia. A sample of the left sciatic nerve was taken from distal to the crush area. Nerve samples were fixed with formaldehyde and washed in running water overnight to remove the formaldehyde. Then, the samples were subjected to routine pathological tissue procedures, passed through a graded alcohol (50%, 75%, 96%, and 100%) and xylol series, and embedded in paraffin blocks. From the prepared blocks, 5 µm-thick sections (the first three sections and then every 10th section) were taken using the Leica RM 2125 RT (Leica Microsystems, Wetzlar, Germany) and placed on slides. The preparations were passed through alcohol and xylol series and stained with hematoxylin-eosin. All samples were analyzed under a high-resolution light microscope (Olympus DP-73 camera, Olympus BX53-DIC microscope; Tokyo, Japan) and processed with the CellSens Entry Imaging Software version 4.1 (Olympus, Tokyo, Japan). Photographs were taken from five random areas of each sample. The total myelinated axon count, mean axon diameter, and axon-to-fiber diameter ratio (G-ratio) were calculated using the XV Image Processing Software version 1.12 (Olympus, Tokyo, Japan).
Statistical analysis
Statistical analysis was performed using the IBM SPSS for Windows version 21.0 software (IBM Corp., Armonk, NY, USA). Since the data set did not fit a normal distribution pattern, non-parametric approaches were used to describe the data and test statistical hypotheses. Descriptive data were expressed as median (min-max) values. To test differences between two repeated measures, the Wilcoxon signed-rank test was used. The Kruskal-Wallis test was performed to analyze differences between the three groups for each measure. For the pairwise comparison of groups, the Dunn-Bonferroni test was used. A p value of <0.05 was considered statistically significant.
RESULTS
There was no significant difference among the groups with respect to the body weights of the rats before the intervention and on Day 21 (Table 1).
Electrophysiological data
There was a significant difference in the sciatic nerve latency, amplitude, and velocity values measured before the intervention and on Day 21 among the groups (p<0.001). The detailed electrophysiological data of the subjects are given in Table 1.
CatWalk data
Concerning the individual paw parameters evaluated with the CatWalk XT gait analysis, a significant difference was found among the three groups before the intervention and on Day 21. The individual paw parameters showed significant differences among the groups on Day 21 (p<0.05) (Table 2). There was also a statistically significant difference for all variables among the groups on Day 21 in the dynamic paw parameters (p<0.05). The maximum contact area and BOS, which are among the static paw parameters, were found to be statistically significantly different among the groups on Day 21 (p<0.05). Of the coordination parameters, only run speed showed a statistically significant difference among the groups on Day 21 (p=0.001). The results of the CatWalk gait analysis are shown in Table 2.
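For readers who want to reproduce this kind of three-group comparison, the sketch below illustrates the non-parametric workflow described in the Statistical analysis section with made-up numbers. The study itself used SPSS with the Dunn-Bonferroni procedure; here Mann-Whitney U tests with a Bonferroni correction are used only as a simple stand-in for that post-hoc step, and the data values are invented for illustration.

```python
# Minimal sketch (hypothetical data, not the study's): Kruskal-Wallis across
# three groups, then Bonferroni-corrected pairwise Mann-Whitney U tests.
from itertools import combinations
from scipy import stats

groups = {
    "sham":         [1480, 1510, 1495, 1502, 1470, 1515, 1490, 1505, 1485, 1500],
    "control":      [620, 580, 640, 600, 590, 610, 630, 595, 605, 615],
    "pitavastatin": [1190, 1220, 1175, 1205, 1185, 1210, 1195, 1180, 1215, 1200],
}  # e.g. myelinated axon counts per animal (made-up values)

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    u, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    p_adj = min(1.0, p_pair * len(pairs))  # Bonferroni correction over all pairs
    print(f"{a} vs {b}: U = {u:.1f}, adjusted p = {p_adj:.4f}")
```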
Scintigraphic data
A significant difference was observed in the early phase, calf retention index, and inter-extremity index values of the scintigraphic measurements among the groups (p<0.05) (Table 3, Figure 3).
Histopathological data
Significant differences were observed among the three groups in terms of all three parameters of the histopathological evaluation (p<0.001). The total myelinated axon count and the mean axon diameter had the highest values in the sham group and the lowest values in the control group. Concerning the G-ratio, the value of the sham group was significantly higher than those of the remaining two groups (Table 3, Figure 3).
DISCUSSION
In this study, after sciatic nerve crush, pitavastatin resulted in electrophysiological, functional, scintigraphic, and histopathological improvements similar to the values observed in the sham group. This study is valuable, since it is the first to evaluate the effect of pitavastatin on sciatic nerve crush in a rat model. Although there are no studies investigating the use of pitavastatin in peripheral nerve injury, research has been conducted with other statins. In a previous study, sciatic nerve injury was induced in rats, which were then administered either atorvastatin or saline, and electrophysiological and histopathological values were found to be significantly higher in the atorvastatin group at four weeks. [7] In another study evaluating the effects of lovastatin on sciatic nerve injury in rats, Ghayour et al. [3] reported lower latency and higher amplitude values, as well as a higher mean axon count and myelin thickness, in the lovastatin group. In this study, the post-intervention amplitude and latency values obtained by electrophysiological evaluation in the pitavastatin group were similar to the baseline values, with a significantly higher amplitude and a lower latency than in the control group. Going beyond previous studies, sciatic nerve conduction velocity was also calculated in our study, and the greatest decrease on Day 21 was observed in the control group. An increase in nerve conduction velocity is associated with an increase in the thickness of the myelin sheath, as well as with axonal healing. [10] Moreover, the myelinated axon count and mean axon diameter values were found to be higher in the pitavastatin group than in the control group. The improvement in these values indicates both axonal healing and increased thickness of the myelin sheath. These results support the idea that pitavastatin contributes to neuroregeneration in sciatic nerve injury. In the literature, there are multiple sclerosis studies showing that statins have a positive effect on myelination. [11,12] Statins may exert their myelination effect by reducing edema through their anti-inflammatory properties. [13] Chang et al. [14] reported that pitavastatin reduced cytokines such as interleukin (IL)-1β, IL-6, and tumor necrosis factor-alpha (TNF-α), and Chen et al. [15] similarly concluded that this statin showed anti-inflammatory properties by reducing IL-2, IL-6, interferon-gamma (IFN-γ), and TNF-α. In the current study, the anti-inflammatory properties of pitavastatin may have played a role in the increased myelination. Future studies evaluating the effect of pitavastatin on nerve injury can clarify this finding by investigating the anti-inflammatory properties of this statin. In addition to electrophysiological studies, the sciatic function index (SFI) and the CatWalk gait analysis system can be used to evaluate regeneration.
[7,16,17] To the best of our knowledge, there is no previous study evaluating the effects of statins on functional recovery in sciatic nerve injury with the CatWalk, a quantitative gait analysis method. In a study on atorvastatin, the improvement in SFI was found to be higher compared to the control group. [7] In another study, the group which was administered lovastatin had superior functional improvements at three, five, and seven weeks. [3] In a study evaluating the effects of simvastatin in rats with sciatic nerve crush injury, the simvastatin-administered group had superior SFI values to the controls on Days 7, 14, and 21. [4] Dynamic paw parameters and coordination parameters cannot be evaluated with the SFI. In this study, no significant differences were found in the print length, maximum contact area, and BOS values of the hind paw in the pitavastatin group. Therefore, we can speculate that pitavastatin is effective in regeneration and functionality in rats subjected to sciatic nerve crush, consistent with previous statin studies. In our study, the post-crush change in the dynamic and coordination parameters in the control group was similar to that reported in the literature. [7,18] Moreover, we observed that the dynamic and coordination parameters of the pitavastatin group were similar to the preoperative values, significantly superior to those of the control group, and similar to those of the sham group. Based on these findings, we suggest that pitavastatin is effective in regeneration and in the improvement of motor function in the presence of sciatic nerve injury. In this study, the scintigraphy measurements, in which the perfusion of the gastrocnemius muscle was evaluated, revealed that the early phase, calf retention index, and inter-extremity index values were higher in the pitavastatin group. High inter-extremity index and calf retention index values indicate better perfusion. [9] The literature contains positron emission tomography studies evaluating denervated nerves after nerve injury; [19,20] however, we were unable to find any research involving such an evaluation based on scintigraphy of the gastrocnemius muscle. Since it is expected that the innervated vascular structure would be affected as a result of the interruption or reduction of innervation, it is reasonable to observe a decrease in these values at three weeks. However, the fact that the scintigraphy data in the pitavastatin group showed values closer to those of the sham group can be interpreted as reflecting the effectiveness of pitavastatin in accelerating nerve healing. This study has certain strengths, such as being the first to evaluate the effects of pitavastatin on sciatic nerve injury, and also being the first statin study to perform the functional assessment of sciatic nerve injury using a quantitative method, the CatWalk. On the other hand, the fact that biochemical markers could not be included in the evaluation and that different doses of pitavastatin could not be tested is the main limitation of the study. Future studies may shed light on these issues more accurately. In conclusion, in rats with sciatic nerve crush injury, the oral administration of 2 mg/kg of pitavastatin once daily for 21 days improved the electrophysiological, scintigraphic, and histopathological markers and the individual, static, dynamic, and coordination parameters of gait analysis. In the light of these findings, disability that may develop due to nerve injury can be reduced by pitavastatin through its effects on nerve regeneration and motor function.
However, further experimental and clinical studies are needed to draw more reliable conclusions.
Ethics Committee Approval: The study protocol was approved by the University of Health Sciences, Ankara Training and Research Hospital, Animal Experiments Ethics Committee (date: 08.10.2018, no: 180049). All the experimental protocols applied in this study were carried out in accordance with international standards and declarations on animal experiments.
Data sharing statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
Schemes in Lean
We tell the story of how schemes were formalised in three different ways in the Lean theorem prover.
Introduction and overview
1.1. Varieties and schemes in algebraic geometry. Before 1960, algebraic geometry was done via the theory of algebraic varieties, finite-dimensional objects defined over a fixed algebraically closed field, or "universal domain". The standard reference text was Weil's 1946 book "Foundations of algebraic geometry" [Wei46], and in the final chapter "Comments and discussions", Weil remarks that "it would be very convenient to have . . . a principle of reduction modulo p", a phenomenon which Weil would have known well should exist but which was extremely inconvenient to do in this setting. Schemes were introduced by Grothendieck in the 1960s (following earlier ideas of Chevalley, and building on ideas of Zariski) as the building blocks for a new algebraic geometry. Grothendieck did not need to work over a fixed base field; his foundations worked with general commutative rings rather than algebraically closed fields, enabling reduction modulo p to become possible. Given a general "geometric object" (for example, a topological space), one can consider the set of continuous real-valued functions on this object, and pointwise addition and multiplication turn this space of functions into a commutative ring. Grothendieck's observation was that one could make a construction in the opposite direction: starting with an arbitrary commutative ring R, he constructed a geometric object Spec(R) called an affine scheme; this was a topological space equipped with a sheaf of functions O_X on this space, such that O_X(Spec(R)), the "allowable" functions on Spec(R), was R again. Grothendieck defined a general scheme by gluing affine schemes together (following Weil, who in [Wei46] had defined an abstract variety by gluing affine varieties together). This new viewpoint proved incisive -- ten years later the Weil conjectures, questions about the number of points which algebraic varieties have over finite fields, were being proved using machinery such as étale cohomology, which had been developed using schemes.
1.2. Formalising schemes -- the history. As mentioned in the abstract, schemes were formalised three times in Lean [dKA+15]; each formalisation was better-behaved than the one before. The first formalisation was by the first three authors (KB, CH, KL), and it evolved in late 2017 and early 2018 from the Thursday evening Xena Project meetings at Imperial College London, where undergraduates learn how to formalise mathematics in Lean (KB is the staff member running the meetings; CH and KL were, at the time, first year mathematics undergraduates). It started with KL formalising the theory of localisation of rings as a project, and with KB suggesting that schemes would be a natural way to take the theory further in Lean. It was their first attempt to formalise anything non-trivial in a theorem prover (and its ultimate success inspired KB to see exactly how far Lean could be pushed, eventually resulting in [BCM20]). However, some poor (in retrospect) design decisions were made when it came to the theory of localisation, and in this first iteration these decisions resulted in messy infrastructure which would not scale. It quickly became clear that an extensive rewrite was needed.
It also became apparent that such a formalisation had never been embarked upon before in any other theorem prover (parts of the theory of affine schemes had been formalised, but nothing approaching the definition of a scheme) -- this perhaps says something about the interests of the mathematical formalisation community, at least pre-2017. The (abandoned) project is still currently online at [BHL18]. AL and RFM then joined the project, with AL developing a very robust theory of localisation in the form which was needed, and RFM rewriting the definition of a scheme from scratch as part of his 2018-2019 MSc project supervised by KB. RFM ultimately produced a definition which was usable -- indeed several other basic results from the Stacks project website [Sta21] and Hartshorne's algebraic geometry textbook [Har77] were also proved in this iteration, mostly by KL. This version is currently online at [FM19]. However, sheaves were defined "by hand" in this iteration; there was one definition for a sheaf of types (or sheaf of sets, depending on your foundations), one for a sheaf of abelian groups and another one for a sheaf of rings. The contribution of SM was to build enough abstract category theory in Lean to enable a third definition using this category-theoretic language. At this point a design change was also made; the definition of the sheaf on Spec(R) was changed from the definition in [Sta21] to the equivalent definition in [Har77]. This was the version which finally made it into Lean's mathematics library, in commit b79fc0379ae786153fc22ce5ee6751505e36a3d9 of mathlib [mat20, src/algebraic_geometry/Scheme.lean]. The current mathlib documentation for schemes (perhaps more readable for people who do not want to look directly at the code) is available at the Lean community website.
1.3. Organisation of the paper. Most of the formalisation was plain sailing, but occasionally we ran into unexpected problems. The layout of this paper is as follows. In the next section we go over the details of some of the mathematics involved. In the three sections after that we explain the three approaches to formalising the material, emphasising the parts which did not go smoothly and explaining how this affected future design decisions.
2. Mathematical details. Convention: all rings are commutative and have a 1; all ring homomorphisms send 1 to 1. Let X be a topological space. A presheaf of rings F on X is a way to associate a ring F(U) to each open subset U ⊆ X, and a ring homomorphism ρ_{UV} : F(U) → F(V) to each inclusion V ⊆ U of open subsets of X, such that ρ_{UU} is the identity for all opens U, and ρ_{VW} ∘ ρ_{UV} = ρ_{UW} whenever W ⊆ V ⊆ U. In other words, F is a contravariant functor from the category of open subsets of X to the category of rings. If f ∈ F(U) and V ⊆ U, we write f|_V as shorthand for ρ_{UV}(f). The model example to keep in mind is where F(U) is defined to be the ring of continuous functions U → ℝ, and ρ_{UV} sends a continuous function on U to its restriction to V. A presheaf of rings F is said to be a sheaf of rings if elements of F(U) can "be defined locally". More precisely, a presheaf of rings F is a sheaf of rings if for every open set U and every open cover U = ⋃_{i∈I} U_i of U by open sets U_i, the sequence
(‡)   0 → F(U) → ∏_{i∈I} F(U_i) → ∏_{(i,j)∈I×I} F(U_i ∩ U_j)
is exact, where the first map sends f to (f|_{U_i})_i and the second sends (f_i)_i to the element whose (i, j) component is f_i|_{U_i∩U_j} − f_j|_{U_i∩U_j}. In words, the exactness of (‡) is the assertion that if you have a collection of elements f_i ∈ F(U_i) which agree on all overlaps U_i ∩ U_j, then there is a unique f ∈ F(U) whose restriction to U_i is f_i for all i. A topological space X equipped with a sheaf of rings O_X is called a ringed space.
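To make the definition just given concrete, here is a minimal Lean sketch of a presheaf of types on the open sets of a space. It is written in current Lean 4 / mathlib syntax (the formalisations described in this paper were in Lean 3), the structure and field names are ours, and it is only an illustration; it is not the bundled categorical presheaf that mathlib actually uses.

```lean
import Mathlib  -- a single catch-all import is enough for a quick sketch

/-- A presheaf of types on the open sets of `X`, written out by hand.
Illustrative only; not the definition used in mathlib. -/
structure PresheafOfTypes (X : Type) [TopologicalSpace X] where
  -- the sections over each open set
  obj : ∀ (U : Set X), IsOpen U → Type
  -- restriction maps along inclusions V ⊆ U
  res : ∀ {U V : Set X} (hU : IsOpen U) (hV : IsOpen V),
      V ⊆ U → obj U hU → obj V hV
  -- restriction from U to U is the identity
  res_id : ∀ {U : Set X} (hU : IsOpen U) (s : obj U hU),
      res hU hU (Set.Subset.refl U) s = s
  -- restricting in two steps agrees with restricting in one step
  res_comp : ∀ {U V W : Set X} (hU : IsOpen U) (hV : IsOpen V) (hW : IsOpen W)
      (hVU : V ⊆ U) (hWV : W ⊆ V) (s : obj U hU),
      res hV hW hWV (res hU hV hVU s) = res hU hW (Set.Subset.trans hWV hVU) s
```

A presheaf of rings is the same structure with each `obj U hU` carrying a ring structure and each restriction map a ring homomorphism; the sheaf condition (‡) is then an extra property imposed on top.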
If (X, O_X) is a ringed space, then to each point x ∈ X one can associate the stalk O_{X,x} := colim_{x ∈ U} O_X(U) of "functions defined near x", a filtered colimit of rings and hence also a ring. If this ring is local (that is, has a unique maximal ideal) for every x ∈ X we say that (X, O_X) is a locally ringed space. A fundamental construction in this area is the following. Given a ring R, let Spec(R) denote the set of its prime ideals. The Zariski topology is a natural topology on Spec(R): for I an ideal we define V(I) to be the prime ideals of R containing I, and the V(I) as I varies are the closed sets of Spec(R). We shall explain later how to use the theory of localisations of rings to construct a presheaf of rings O_R on Spec(R) and to prove that it is a sheaf. The ringed space (Spec(R), O_R) is locally ringed, and is called the affine scheme associated to the ring R. A scheme is a locally ringed space (X, O_X) with the property that X can be covered by opens U_i such that each induced locally ringed space (U_i, O_X|_{U_i}) is isomorphic to (Spec(R_i), O_{R_i}) for some ring R_i. If R is a ring then Spec(R) is a scheme, because it can be covered by the open affine subset U = Spec(R). To KB's naive eyes in 2017, everything here looked straightforward. But in fact there were several gotchas involved in turning this into a formal definition. The rest of this paper describes them, and how they were dealt with in Lean.
3. The first definition. We begin with a discussion of localisation, and the construction of the sheaf of rings on the topological space associated to a ring. Let R be a ring and let S be a submonoid of (R, ×) (for some reason mathematicians often call S a "multiplicative subset"; it contains 1 and is closed under multiplication so it is precisely a submonoid). The localisation R[1/S] of R at S is a ring equipped with a canonical map i : R → R[1/S] and having the following universal property: for any ring A and any ring homomorphism R → A sending every element of S to a unit of A, there is a unique ring homomorphism g : R[1/S] → A whose composite with i is the given map. The special case where S is the submonoid {1, f, f^2, f^3, . . .} generated by f ∈ R has its own notation R[1/f]. Standard arguments involving universal objects show that R[1/S] is uniquely defined up to unique isomorphism, if it exists. In fact, these latter arguments do not even assume that S is a submonoid; in general the localisation at a general subset S of R is the localisation at the submonoid of R generated by S. Existence is shown by an explicit construction: one puts an appropriate equivalence relation on R × S (an element (r, s) is thought of as representing the fraction r/s) and puts a ring structure on the quotient. All of this was formalised in Lean with little fuss. The construction of the topological space Spec(R) was equally uneventful. Next, a decision had to be made about how to put a sheaf of rings on Spec(R); there are several constructions in the literature. The prevailing philosophy of KB at the time was "always follow the Stacks project". The approach in the Stacks project is as follows. Let X be a topological space. In [Sta21, Tag 009H] it is explained how to set up the theory of presheaves and sheaves on a basis for the topology on X, and in [Sta21, Tag 009N] it is shown how to extend a sheaf on a basis for X to a sheaf on X. Formalisation of these results, and their extension to sheaves of rings, was straightforward.
Say R is a ring, and f, g ∈ R have the property that D(f) = D(g), that is, a prime ideal contains f if and only if it contains g. This can certainly happen in non-obvious ways (it happens precisely when f and g generate the same radical ideal). In EGA such identifications of the corresponding localisations are simply described as canonical (with no definition of "canoniquement" given), and in 1.3.3 we see "un homomorphisme canonique fonctoriel M_f → M_g", and the first usage of "M_f = M_g" to denote a "canonical" isomorphism rather than a set-theoretic identity. This becomes an issue in Lean, because in our formalisation we have rings R[1/f] and R[1/g] which are most definitely not equal; certainly the universal property gives isomorphisms between them, but one of these rings is a quotient of R × {1, f, f^2, f^3, . . .} and the other is a quotient of R × {1, g, g^2, g^3, . . .}. If you like to think set-theoretically, you can think of an element of R[1/f] as being a subset (an equivalence class) of R × {1, f, f^2, . . .} and an element of R[1/g] as being a subset of R × {1, g, g^2, . . .}. In particular, R[1/f] and R[1/g] are visibly not equal.
One solution to this problem would be to throw the axiom of choice at it. For each open set U ⊆ Spec(R) which is known to be of the form D(f) for some f (possibly infinitely many), we choose some "special" f_U such that U = D(f_U) and define O_X(U) := R[1/f_U]. This of course works, but we envisaged that this definition would be frustrating to work with down the line. Reid Barton on the Lean chat suggested the following approach instead, which was what we ultimately chose. If the open set U is known to be of the form D(f), then define S_U to be the submonoid {g ∈ R | U ⊆ D(g)} of elements of R which are non-vanishing on U, and define O_X(U) := R[1/S_U]. This has the advantage that no choices are involved; it bears the hallmark of a construction in constructive mathematics, where a rule of thumb is that if you can't figure out a natural way to choose one object from a set, then choose all of them. This defines a presheaf of rings O_X on the basic open subsets D(f) of Spec(R), and by taking limits one can extend this definition to give a presheaf of rings on all of Spec(R). Amusingly, we then realised that to define schemes, this presheaf construction was all we needed.
3.2. Defining schemes. We defined a scheme to be a topological space X equipped with a sheaf of rings O_X, for which X had a cover X = ⋃_i U_i such that each U_i was isomorphic to Spec(R_i) for some ring R_i. This isomorphism is in the books usually stated as an isomorphism of locally ringed spaces. However a locally ringed space is just a topological space equipped with a presheaf of rings and satisfying some extra axioms, so in particular an isomorphism of locally ringed spaces is the same as an isomorphism of spaces equipped with presheaves of rings. We have already defined a presheaf of rings on Spec(R), so we are almost done. It remains to define the pullback presheaf of rings on U_i, and this is easy: if ι : U_i → X denotes the inclusion, we define (ι^*O_X)(V) := O_X(ι(V)) for each open V ⊆ U_i, with the evident restriction maps. It was straightforward to check that ι^*O_X is a presheaf of rings on U_i, and our definition of a scheme was complete. Note that we did not need to demand that our original ringed space was locally ringed -- this follows from the fact that it is locally affine. In particular, our definition is mathematically equivalent to the usual definition, although the proof of this involves theorems which were not at this time formalised. Our original definition was met with some scepticism by the computer scientists in the Lean community, however, and not for this reason above.
A definition with no unit tests might contain an error. It was suggested that we prove a theorem about schemes, to provide evidence that our definition was correct. We decided to prove the theorem that an affine scheme was a scheme. Although the result sounds trivial, some work remains: a scheme is a space equipped with a sheaf of rings, and we have thus far only equipped Spec(R) with a presheaf of rings. To prove that affine schemes are schemes should be essentially equivalent to proving that the presheaf O_X of rings on Spec(R) is a sheaf.
3.3. Proving O_X is a sheaf on Spec(R). Again, we followed the Stacks project. Here we ran into a serious technical issue, and one might even argue that this issue is typically overlooked in the literature -- it seems to be a very good example of a situation where mathematicians pay no attention to the details of what is happening, knowing that things are going to work out. When formalising, one has to check these details. Let U = D(f) be a basic open subset of Spec(R), covered by basic opens D(g_i). Step 1: One reduces the sheaf condition to the case of such coverings of basic open sets by basic open sets. Step 2: One can now translate the statement into a purely ring-theoretic lemma [Sta21, Tag 00EJ]. We proceeded in reverse order, with CH proving the ring-theoretic lemma first. Let us state it here:
Lemma 3.3.1 (Tag 00EJ). Let R be a ring, and say f_1, f_2, . . . , f_n ∈ R generate the unit ideal. Then the following sequence is exact:
0 → R →α ⊕_i R[1/f_i] →β ⊕_{i,j} R[1/f_i f_j].
Here the map α is the obvious one, and the map β sends (r_i)_i to the element whose (i, j) component is the image of r_i minus the image of r_j in R[1/f_i f_j].
It was only after we applied this to make progress with Step 1 that we understood the subtleties in what was left. The "canonical" identification of basic opens in Spec(R[1/f]) with basic opens in Spec(R) involved, when identifying global sections, an identification of R[1/f][1/g] with R[1/fg]. Of course these rings are canonically isomorphic, but they are not equal. In short, we had a proof of exactness of
0 → R[1/f] → ⊕_i R[1/f][1/g_i] → ⊕_{i,j} R[1/f][1/g_i g_j]
and we needed a proof of exactness of
0 → R[1/f] → ⊕_i R[1/f g_i] → ⊕_{i,j} R[1/f g_i g_j].
To a mathematician, essentially nothing needs to be done here. A mathematician might say "the diagrams are the same; one is exact, so the other is" and this would be an acceptable proof. Pressed for more details, a mathematician might offer the following explanation:
0 → R[1/f] → ⊕_i R[1/f][1/g_i] → ⊕_{i,j} R[1/f][1/g_i g_j]
        ↓≅           ↓≅                   ↓≅
0 → R[1/f] → ⊕_i R[1/f g_i]    → ⊕_{i,j} R[1/f g_i g_j]
Here all the vertical maps are canonical and all the horizontal maps are defined in a natural way and hence the squares will all obviously commute. However in Lean this needs to be checked! Lean has no concept of what it means for an isomorphism to be canonical (and looking at the Wikipedia page on canonical maps one indeed discovers that there seems to be no formal definition of the word), and we needed to explicitly check that the squares in the above diagram commute. So it came to pass that one line in the Stacks project tag 01HR ("Thus we may apply Lemma 10.22.2 . . . We conclude that the sequence is exact") became several hundred lines of Lean code. In retrospect it would have been much easier to do a diagram chase. However we had a belief that "everything would follow immediately from the universal property" and instead went down this route, which was more troublesome than one might expect, because for example the homomorphism β above is not a ring homomorphism and hence the universal property cannot be used! We ultimately resorted to proving a lemma showing that there was at most one R-algebra map from R[1/S] to R[1/T] and used this to finish. We proved that the squares commuted, and deduced that the presheaf of rings on Spec(R) was a sheaf. Our goal of proving that affine schemes were schemes was now in sight.
3.4. A remark on univalence. One might wonder whether these difficulties would disappear in a univalent foundation. In particular, in a univalent system, the ring isomorphism R[1/fg] ≅ R[1/f][1/g] can be promoted to an equality (for this richer concept of equality) and now it looks on the face of it that a rewrite would be able to make progress. However, univalence does not solve the problem which we encountered here. After the substitution in a univalent system, we would have a proof of exactness of one diagram, and we want to prove exactness of another diagram, and the diagrams now have the same objects, however we need to check that they have the same morphisms! Checking this of course boils down to checking that the squares commute, so the lion's share of the work still needs to be done. In our original definition of a scheme, we did the diagram chase "manually". However in our second iteration of the definition, we will introduce ideas which enable us to avoid this rewriting problem completely.
3.5. Affine schemes are schemes. With the proof that Spec(R) is a ringed space, we can now attempt to prove that it is a scheme. This provided one final surprise in our formalisation, and again Reid Barton explained the way around it. Our cover of Spec(R) by affines is just the identity map ι : Spec(R) → Spec(R), and all that remains is to show that the presheaves O_X and ι^*O_X on Spec(R) are isomorphic. This boils down to the following: if U is an open subset of Spec(R) then we need to produce an isomorphism between O_X(U) and O_X(ι(U)) which commutes with restriction maps. Our first attempt to do this was the following. Note that ι is the identity map. Hence ι(U) = U, and thus O_X(ι(U)) = O_X(U). Let's define the isomorphism to be the identity map. Checking that the diagrams commute should then be straightforward. But it was not straightforward; one now has to check that ρ_{ι(U)ι(V)} = ρ_{UV}, and replacing ι(U) with U in a naive manner caused Lean to give motive is not type correct errors. Ultimately we were able to get this working, but what is going on here? Mathematically there seems to be no issue. We learnt from Mario Carneiro what the problem was. The fact that ι(U) = U is a trivial theorem, but, perhaps surprisingly, it is not true by definition. The definition of ι(U) is that it is the set of x ∈ X such that there exists u ∈ U with u = x. Hence in particular x ∈ ι(U) ⟺ x ∈ U (this is true by definition), and hence ι(U) = U, because two sets are equal if and only if they have the same elements (this is the axiom of set extensionality). We have proved that ι(U) = U, but along the way we invoked an axiom of mathematics and in particular the equality is not definitional. This means that rewriting the equality ι(U) = U can cause technical problems with data-carrying types such as O_X(ι(U)) which depends on ι(U) (although in our case they were surmountable). This is a technical issue with dependent type theory and can sometimes indicate that one is working with the wrong definitions. Fortunately, in our case, Reid Barton pointed out the following extraordinary trick to us: we can define the map O_X(ι(U)) → O_X(U) using restriction rather than trying to force it to be the identity! Using restriction means that we need to supply a proof that U ⊆ ι(U), but this is trivial (and who cares that it uses an axiom, we are not trying to rewrite anything). The fact that the diagram commutes now just boils down to the fact that restriction from ι(U) to V via U equals restriction from ι(U) to V via ι(V), which follows from the presheaf axiom of transitivity of restrictions.
This is certainly not the way that one would usually think about this, but it works fine. 3.6. Conclusions. The main problem with this first approach was the issue with localisations. The ring R[1/S] was defined as an explicit ring, and theorems were proved about it; later on when it came to apply these theorems, it turned out that in our application we only had a ring isomorphic to R[1/S] rather than our explicit definition on the nose. These issues were solved in our second approach. The second definition. Apparently there's a saying in computer science: "Build one to throw away". Having done this, we now knew what we should be doing; we had to develop a better theory of localisation. We now describe how we did this. 4.1. Localisation. The error we had initially made was to only define "the" localisation R[1/S] of a ring R at a submonoid S. The localisation is defined up to unique isomorphism, but pinning it down as an explicit set (or more precisely an explicit type, as Lean uses type theory rather than set theory) turned out to be a bad idea. What we needed instead is a predicate is localisation by S on ring homomorphisms R → T , saying that R → T is isomorphic to R → R[1/S] in the category of R-algebras, or in other words that T is isomorphic to R[1/S] in a manner compatible with the R-algebra structure. We will refer to this predicate by saying that the R-algebra T is a localisation of R at S, as opposed to "the" localisation of R at S. AL noted that in fact localising rings at submonoids was not the primitive notion: in fact, as Bourbaki taught us, one should be localising monoids at submonoids, and attaching ring structures later on. AL developed an entire formalised theory of localisation of monoids, with both the "explicit" constructions M [1/S] and the "predicate" approach, showing that the explicit constructions satisfied the predicate, and proving universal properties both for the explicit construction and the predicate construction. These files now form the foundation of the theory of localisation in mathlib. 4.2. The sheaf on an affine scheme, again. RFM rewrote schemes from scratch, tidying up the code along the way and also moving away from the disastrous design decision of calling Lean files by their Stacks project tags (tags should be mentioned in docstrings, not filenames; filenames serve a useful organisational purpose). The refactoring to using predicates instead of explicit constructions meant that there was no longer any need to make the substitution of R[1/f ][1/g] into a lemma explicitly naming R[1/f g]; all one has to do is to prove that R[1/f ][1/g] satisfies the predicate for being a localisation of R at the submonoid generated by f g, which is straightforward. This shortened the definition by hundreds of lines of code. The hard work, or so we thought, would be in reproving the ring-theoretic lemma [Sta21, Tag 00EJ] (Lemma 3.3.1) in this more general form. The issue here is that CH's original formalisation was an explicit computation involving the rings R[1/f i ] and R[1/f i f j ]; what we now needed was to reprove this lemma for all rings satisfying the localisation predicate instead. The beefed-up lemma would then apply in situations where the original lemma did not, meaning that the diagram chase described in section 3.3 would no longer be necessary. These definitions are mathematically equivalent, so to a mathematician they may as well both be the definition. However Neil Strickland proposed a third definition: Definition 3 (Strickland). 
An R-algebra f : R → T is a localisation of R at S if it satisfies the following three conditions: • f (s) is invertible in T for every s ∈ S; • Every t ∈ T can be written as f (r)/f (s) for some r ∈ R and s ∈ S; • The kernel of f consists exactly of the elements of R annihilated by some element of S. It is not difficult to prove that Strickland's definition of the predicate is equivalent to the other two. Indeed an explicit computation using the explicit construction of R[1/S] as a quotient of R × S shows that it satisfies Strickland's predicate, and conversely any R-algebra T satisfying Strickland's predicate admits a map from R[1/S] (by the universal property and the first condition), which can be explicitly checked to be surjective (from the second condition) and injective (from the third condition). So which definition should one use? This is not a mathematical question; it is what is known as an implementation issue, or a design decision. Ultimately of course the goal is to prove that all of the definitions are equivalent. But where does one start? One would like to fix one of them and then develop an interface, or API, for it. This is a collection of basic theorems about our predicate (some deducing it, some using it) which will ultimately lead to a formal proof that it is equivalent to the other two definitions. In particular, one will sometimes have to verify the predicate, and one will sometimes have to use it. Strickland's predicate had advantages over the other two definitions when it comes to using it. In particular, it does not involve quantification over all rings. Using a property involving quantification over all rings often involves having to construct a ring with nice properties and then applying the universal property to that ring. For example, we invite the reader to prove that any R-algebra T satisfying the universal property of being a localisation (Definition 2) also has the property that the kernel of R → T is the set of elements annihilated by some element of S. The shortest way we know how to do this from first principles is to first show that the explicitly constructed ring R[1/S] satisfies the universal property, secondly to prove that two rings satisfying the universal property are isomorphic as R-algebras, and finally to do an explicit calculation of the kernel of the map R → R[1/S] to prove the result. Of course there is no problem formalising this proof, but it gives an idea as to how much work it is to build an API for Definition 2. Ultimately one has to fix one definition of the predicate, and then prove that it is equivalent to the other two; there will always be some work involved. But "API building" turned out to be easiest with Strickland's definition, which is the one RFM used. 4.4. Reproving 00EJ. What remains to be done is to reprove Lemma 3.3.1 not for our rings R[1/f ] but more generally for rings satisfying Strickland's predicate. To our surprise, this turned out to be very easy. Indeed, the only facts about the R-algebras R[1/S] used in the Stacks project proof in 2017 were precisely the ones isolated by Strickland! The refactoring was hence far easier than expected. In some sense, what happened here was that we made the statement of the lemma more general, and noted that the same proof still worked. 4.5. Definition and usage. RFM also set up the theory of locally ringed spaces, enabling us to define a scheme as a locally ringed space which was locally affine. Proving that an affine scheme is a scheme now also requires a new proof, namely that the stalks of O X are local rings.
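Concretely, the new statement is the standard one (recorded here for orientation, in the usual notation rather than Lean syntax): for a prime $\mathfrak p \in \operatorname{Spec}(R)$, the stalk of the structure sheaf is the filtered colimit of sections over the open sets containing $\mathfrak p$,

$$\mathcal O_{X,\mathfrak p} \;=\; \operatorname*{colim}_{\mathfrak p\in U} \mathcal O_X(U) \;\cong\; R_{\mathfrak p},$$

and $R_{\mathfrak p}$ is a local ring with maximal ideal $\mathfrak p R_{\mathfrak p}$.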
This needed some of the theory of filtered colimits, but this did not present any particular problems. KL used the definitions of the project in this form to do the "gluing sheaves" Exercise II.1.22 in Hartshorne's algebraic geometry textbook [Har77] and also to prove that Spec was adjoint to the global sections functor on locally ringed spaces [Sta21, Tag 01I1]. However when it came to start thinking about porting our definition into Lean's mathematics library, we ran into a problem. Part of our code was still sub-optimal: we had a definition of a sheaf of types, and a definition of a sheaf of rings. These notions should be unified under some more general notion of a sheaf of objects in a category. Thus the third definition was born. 5. The third definition: sheaves done correctly. 5.1. Category theory in mathlib. SM has been the main driving force behind a gigantic category theory library in Lean. In early 2018, a combination of his work not yet being mature enough, and there not being much infrastructure to support easily working on branches of mathlib, meant that we had developed our own definitions of sheaves rather than using category theory. By 2020 this was no longer the case, and SM's definition of a presheaf taking values in a category was the natural thing to use for the "official" definition of a scheme. We had seen with our own eyes the problems of having one definition of presheaves of types and another definition of presheaves of rings -we were constantly having to prove results for presheaves of types and then prove them again for presheaves of rings. Ultimately we had to completely refactor the sheaf part of the story, but we took this opportunity to introduce category theory more generally. Another advantage of the third definition is that rather than working in a new project with mathlib as a dependency, we worked directly on a branch of mathlib, and ultimately the code ended up as part of mathlib meaning that it will not quickly rot and die, as is very very common with Lean code which is not in mathlib (mathlib is not backwards compatible -it is still in some sense an experimental library and occasionally big design decisions are changed in an attempt to make it better). 5.2. Changes made to the definition. We go through the final definition, pointing out how it differs from the previous version. In this definition, a scheme is an object in the category LocallyRingedSpace which is locally isomorphic to an affine scheme. The abstract nonsense presented no surprises; the work left, when defining a scheme and proving that affine schemes are schemes, is to define the sheaf of rings on Spec (R) in this language. This time it was decided to not follow the Stacks project approach via sheaves on a basis, but to instead define the sheaf directly following [Har77], where an element of O X (U ) is a dependent function taking u ∈ U to an element of R u , the localisation of R at the prime ideal u, subject to the condition that locally the function can be written as r/s with r, s ∈ R and s not vanishing near u. The advantage of this approach is that it is clear that O X is a sheaf; the ring-theoretic Lemma 00EJ is not used at all in this approach. Indeed, in this set-up, 00EJ is used to prove O X (Spec (R)) = R. 5.3. Sheaves and categories. SM defined the notion of a sheaf on a topological space taking values in any category which has products. The sheaf condition on a presheaf is that the usual "sheaf condition diagram" is an equalizer. 
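In other words (the standard formulation, stated for a presheaf $F$ taking values in a category with products): $F$ is a sheaf precisely when, for every open $U$ and every open cover $U=\bigcup_i U_i$, the diagram

$$F(U) \;\longrightarrow\; \prod_i F(U_i) \;\rightrightarrows\; \prod_{i,j} F(U_i\cap U_j)$$

is an equalizer, the first map being induced by the restrictions $F(U)\to F(U_i)$ and the two parallel maps by the restrictions along $U_i\cap U_j\subseteq U_i$ and $U_i\cap U_j\subseteq U_j$.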
It is then a theorem that a sheaf of rings is a presheaf of rings whose underlying presheaf of types is a sheaf of types. Since then, Bhavik Mehta has defined the concept of a sheaf of types on an arbitrary site, and more generally the notion of a sheaf of objects of an arbitrary category on a site. All of this work is now in mathlib. These definitions open the door to defining and proving theorems about theétale cohomology of schemes in Lean. 5.4. A word on mathlib. We mentioned above that this definition of a scheme made it into Lean's mathematics library mathlib, which means that all the code in it was subject to scrutiny by the library maintainers. A reasonable analogy would be that mathlib is like a journal, and the maintainers are like the editors. After three iterations of the definition of a scheme, it was in good enough shape for a "pull request" to be made to mathlib. (A pull request is the open source project equivalent of submitting an article to an editor!) Note that the multiple iteration procedure discussed in this paper is not usual -but we were beginners in 2017 with very little information to guide us on how to make schemes on a computer, and KB's poor initial design decisions were because of this. Nowadays we have a much better understanding of how to put modern mathematics into Lean's dependent type theory. The advantage of getting the definition into mathlib is that now it is guaranteed to compile for the duration of Lean 3's lifespan, because if someone makes changes to the library which break it, it will be down to that same someone to fix it. This is in stark contrast to the first two definitions, which compile with very old versions of Lean and mathlib, and would almost certainly not compile with modern versions.
Functional Analysis of the α-Defensin Disulfide Array in Mouse Cryptdin-4* The α-defensin antimicrobial peptide family is defined by a unique tridisulfide array. To test whether this invariant structural feature determines α-defensin bactericidal activity, mouse cryptdin-4 (Crp4) tertiary structure was disrupted by pairs of site-directed Ala for Cys substitutions. In a series of Crp4 disulfide variants whose cysteine connectivities were confirmed using NMR spectroscopy and mass spectrometry, mutagenesis did not induce loss of function. To the contrary, the in vitro bactericidal activities of several Crp4 disulfide variants were equivalent to or greater than those of native Crp4. Mouse Paneth cell α-defensins require the proteolytic activation of precursors by matrix metalloproteinase-7 (MMP-7), prompting an analysis of the relative sensitivities of native and mutant Crp4 and pro-Crp4 molecules to degradation by MMP-7. Although native Crp4 and the α-defensin moiety of proCrp4 resisted proteolysis completely, all disulfide variants were degraded extensively by MMP-7. Crp4 bactericidal activity was eliminated by MMP-7 cleavage. Thus, rather than determining α-defensin bactericidal activity, the Crp4 disulfide arrangement confers essential protection from degradation by this critical activating proteinase. The mammalian defensins comprise the ␣-, ␤-, and -defensin families of cationic, Cys-rich antimicrobial peptides, and each subfamily is characterized by a distinctive tridisulfide array (1). ␣-Defensins are cationic, amphipathic, 3-4-kDa peptides with a ␤-sheet polypeptide backbone and broad spectrum antimicrobial activities (1). The consensus ␣-defensin tertiary structure is established by six cysteines that are spaced in a pattern that facilitates the formation of invariant disulfide bonds between Cys I -Cys VI , Cys II -Cys IV , and Cys III -Cys V (2) (Fig. 1). These conserved ␣-defensin disulfide pairings have been inferred to have a role in determining, perhaps critically, the bactericidal activity of these peptides. Paneth cell ␣-defensins confer enteric immunity (3) and, thus, knowledge of determinants of peptide activity and biosynthetic regulation will improve the understanding of the role of these ␣-defensins in mucosal immunity. For example, mouse Paneth cell ␣-defensins, termed cryptdins (Crps), 1 are secreted into the lumen of small intestinal crypts at concentrations of 25-100 mg/ml, four orders of magnitude greater than their minimum bactericidal concentrations (4). In mice, Paneth cell ␣-defensin precursors (proCrps) are processed to their biologically active forms by specific proteolytic cleavage events catalyzed by matrix metalloproteinase-7 (MMP-7, matrilysin). Disruption of the MMP-7 gene abrogates proCrp activation, eliminating the accumulation of functional mature Crp peptides from the small intestine (5). Consequently, MMP-7-null mice have impaired enteric innate immunity in response to oral bacterial infection (5). Also, in mice transgenic for the human Paneth cell ␣-defensin HD5, the minitransgene is expressed specifically in Paneth cells, and the mice are immune to oral infection by virulent strains of Salmonella enterica serovar Typhimurium (serovar Typhimurium) (3). Here, we report on the role of the disulfide array in the mouse Paneth cell ␣-defensin cryptdin-4 (Crp4) (6,7). Paired Ala for Cys amino acid substitutions in Crp4 were tested for effects on bactericidal activity and resistance to the activating proteinase MMP-7. 
Mutations that disrupted disulfide bonds did not inactivate peptide bactericidal activity regardless of position. However, Crp4 and proCrp4 molecules with disrupted disulfides were proteolyzed extensively by MMP-7, disclosing a critical protective role for the disulfide array in peptide biosynthesis. Preparation of Recombinant Crp4 Peptide Variants-Recombinant Crp4 peptides were expressed in Escherichia coli as N-terminal His 6tagged fusion proteins from the EcoRI and SalI sites of the pET28a expression vector (Novagen, Inc., Madison, WI) as described (8,9). The Crp4-coding cDNA sequences were amplified using the forward primer ER1-Met-C4-F (5Ј-GCGCGAATTCATCGAGGGAAGGATGGGTTTGT-TATGCTATTGT-3Ј) paired with the reverse primer pMALCrp4-R (5Ј-ATATATGTCGACTCAGCGACAGCAGAGCGTGTACAATAAATG-3Ј) as reported previously (9). For proCrp4, the forward primer pETPCr4-F (5Ј-GCGCGAATTCATGGATCCTATCCAA AACACA-3Ј) was paired with the reverse primer SLpMALCrp4R (5Ј-ATATATGTCGACTGT-TCAGCGGCGGGGGCAGCAGTACAA-3Ј), corresponding to nucleotides 104 -119 and 301-327 in preproCrp4 cDNA (8). The underlined codons in the forward primers denote Met codons introduced upstream of each peptide N terminus to provide a CNBr cleavage site (8,9). In all instances, reactions were performed using the GeneAmp PCR Core Reagents (Applied Biosystems, Foster City, CA) by incubating the reaction mixture at 94°C for 5 min followed by successive cycles at 94°C for 30 s, 60°C for 30 s, and 72°C for 30 s for 30 cycles and then a final extension reaction at 72°C for 7 min. Mutagenesis at Cys Residue Positions-Mutations were introduced into Crp4 by PCR as described previously (8) in the order described below. In the first round of mutagenesis the Crp4 construct in pET-28a (9) was used as template. In PCR reaction number 1, a mutant forward primer, e.g. Crp4-C11A-F, containing the mutation for peptide residue position 11 flanked by three natural codons was paired with the reverse primer T7 terminator (Invitrogen), a downstream sequencing primer in the pET-28a vector. In PCR reaction number 2, the mutant reverse primer Crp4-C11A-R, the reverse complement of the mutant forward primer, was paired with the T7 promoter forward primer, again from the pET-28a. After amplification at 94°C for 5 min followed by successive cycles at 94°C for 30 s, 60°C for 30 s, and 72°C for 30 s for 30 cycles and then a final extension reaction at 72°C for 7 min, samples of purified products from reactions number 1 and number 2 were com- bined as templates in PCR reaction number 3 using the T7 promoter and terminator primers as amplimers. All mutated Crp4 templates were cloned in pCR-2.1 TOPO, verified by DNA sequencing, excised with SalI and EcoRI, subcloned into pET28a plasmid DNA (Novagen, Inc.), and transformed into E. coli BL21(DE3)-CodonPlus-RIL cells (Stratagene) for recombinant expression. The underlined codons in the forward primers denote Met codons introduced upstream of each peptide N terminus to provide a CNBr cleavage site (8,9). Purification of Recombinant Crp4 Proteins-Recombinant proteins were expressed and purified as His-tagged Crp4 fusion peptides as described (8). Briefly, recombinant proteins were expressed at 37°C in Terrific Broth medium by induction with 0.1 mM isopropyl-␤-D-1-thiogalactopyranoside for 6 h at 37°C, cells were lysed by sonication in 6 M guanidine-HCl in 100 mM Tris-Cl (pH 8.1), and the soluble protein fraction was clarified by centrifugation (8 -10). 
His-tagged Crp4 fusion peptides were purified using nickel-nitrilotriacetic acid (Qiagen) resin affinity chromatography (8). After CNBr cleavage, Crp4 peptides were purified by C18 reverse-phase high performance liquid chromatography (RP-HPLC) and quantitated by bicinchoninic acid (Pierce), and the molecular masses of the purified peptides were determined using matrix-assisted laser desorption ionization mode mass spectrometry (Voyager-DE MALDI-TOF, PE-Biosystems, Foster City, CA) in the Mass Spectroscopy Facility, Department of Chemistry, University of California, Irvine, CA. NMR Spectroscopy-Samples of Crp4 and the mutants used for NMR analysis contained 2 mg of Crp4, 0.6 mg of (C6A/C21A)-Crp4, and Ͻ0.3 mg of the other mutants dissolved in 0.5 ml of 95% H 2 O/5% D 2 O at pH 4. One-dimensional and two-dimensional total correlation spectroscopy with a MLEV17 mixing time of 80 ms and two-dimensional nuclear Overhauser effect spectroscopy spectra with a mixing time of 200 ms were recorded for all analogues on a Bruker DMX 750 MHz spectrometer at 298 K. In all experiments, the carrier frequency was set at the center of the spectrum on the solvent signal, and all spectra were recorded in phase-sensitive mode using the time-proportional phase increment method. Solvent suppression was achieved by a modified WATERGATE sequence. Two-dimensional spectra collected with Ͼ4000 data points in the f2 dimension and 512 increments on the f1 dimension over a spectral width corresponding to 12 ppm. Resonance assignments were achieved by standard sequential assignment strategies (11). Cleavage of Crp4 and proCrp4 Disulfide Variants with MMP-7 in Vitro-Recombinant Crp4, proCrp4, and variants with site-directed mutations in the disulfide array were digested with MMP-7 and analyzed for proteolysis by AU-PAGE, and samples of the proteolytic digests were tested in bactericidal peptide assays and analyzed by Nterminal sequencing by Edman degradation as described previously (8). Samples (11 g) of proCrp4 and all proCrp4 variants, as well as 5-g samples of Crp4 and variants, were incubated with an activated recombinant human MMP-7 (0.3ϳ1.0 g) catalytic domain (Calbiochem, La Jolla, CA) in buffer containing 10 mM HEPES (pH 7.4), 150 mM NaCl, and 5 mM CaCl 2 for 18 -24 h at 37°C (8). Equimolar samples of all digests were analyzed by AU-PAGE, and 3-g quantities of complete digests were subjected to five or more cycles of Edman degradation in the University of California, Irvine Biomedical Protein and Mass Spectrometry Resource Facility. The biological effects of MMP-7-mediated proteolysis of Crp4 molecules with mutations in the disulfide array was assayed by conducting bactericidal peptide assays as above. Bacterial target cells consisting of exponentially growing bacteria (ϳ1 ϫ 10 6 CFU/ml) were incubated with equimolar quantities (0 to 20 g/ml) of Crp4 or pro-Crp4 peptide variants that had been incubated overnight at 37°C with or without MMP-7. Mutagenesis of the Crp4 Disulfide Array-Recombinant Crp4 variants with site-directed mutations in the tridisulfide array ( Fig. 1A) were prepared by expression in E. coli using the pET-28 vector system (8). As shown in Fig. 1, Crp4 variants included molecules null for the following: (a) individual Cys I -Cys VI , Cys II -Cys IV , or Cys III -Cys V disulfides; (b) both the Cys I -Cys VI and Cys III -Cys V bonds; and (c) a Crp4 peptide with all Cys residues converted to Ala and, thus, disulfide-null. 
All variant Crp4 peptides were purified to homogeneity by RP-HPLC as verified by analytical RP-HPLC (not shown) and AU-PAGE analyses in which the peptides migrated as expected relative to native Crp4 (Fig. 1C) (8). Alkylation of ␣-defensins disrupts ␤-sheet structure, linearizing the molecule and reducing its mobility in AU-PAGE (13). Similarly, the mobility of these variant Crp4 molecules was diminished with increased numbers of disrupted disulfides (Fig. 1C). A series of one-dimensional and two-dimensional total correlation spectroscopy and nuclear Overhauser effect spectroscopy NMR spectra was recorded to assess the structural integrity of recombinant Crp4 and the disulfide-deficient variants. The two-dimensional spectra were sequentially assigned and used to derive chemical shifts for the backbone protons, which are a sensitive monitor of structure as summarized in Fig. 2. Native Crp4 and the C6A/C21A variant have widely dispersed amide signals characteristic of well folded peptides ( Fig. 2A). On the other hand, the amide signals for the C4A/C29A, C11A/ C28A, and C4A/C11A/C28A/C29A variants have a narrow amide dispersion typical of random coil conformations. Confirmation that the native peptide and the C6A/C21A variant are well folded and that the other variants are not is evident from the ␣H secondary shifts (Fig. 2B), i.e. the differences between the observed chemical shifts of a given amino acid and those for the corresponding residue in a random coil peptide. The presence of several consecutive residues with positive ␣H secondary shifts of magnitude Ͼ0.1 ppm provides a strong indication of ␤-strand structure. Such regions are seen in Crp4 and in the C6A/C21A mutant and correspond to the regions comprising a triplestranded ␤-sheet typically seen in other ␣-defensins (Fig. 2B, arrows). By contrast, the C4A/C29A, C11A/C28A, and C4A/ C11A/C28A/C29A Crp4 mutants have small secondary shifts characteristic of random coil peptides. These trends correspond well with the gel migration data in Fig. 1C, where the C6A/ C21A variant migrates similarly to the native peptide, but the others have diminished mobility as would be expected for disordered peptides. The ␣-Defensin Disulfide Array Does Not Determine Crp4 Bactericidal Activity in Vitro-To investigate the role of the Crp4 disulfide array, we assayed the in vitro bactericidal activities of Crp4 disulfide mutants against several bacterial species in relation to native Crp4 (Fig. 3). The overall bactericidal activities of Crp4 and Cys 3 Ala Crp4 variants were similar, although not identical, with all of the peptides reducing bacterial cell survival by at least 1000-fold at concentrations at or below 25 g/ml (Figs. 3 and 4 and data not shown). Because differences in peptide bactericidal activities become more apparent in assays against species with inherently lower antimicrobial peptide sensitivities, the peptides were tested against strains of wild-type serovar Typhimurium, which has low ␣-defensin susceptibility relative to other species of bacteria (14 -17). Three disulfide mutants, the C6A/C21A, C4A/C11A/C28A/C29A, and C4A/C6A/C11A/ C21A/C28A/C29A variants of Crp4, were consistently more active than native Crp4 against wild-type serovar Typhimurium (Fig. 4). Therefore, Crp4 bactericidal activity is independent of disulfide mutagenesis with molecules lacking more than one disulfide bond showing enhanced microbicidal activities, although the dose-response curves of certain peptides varied modestly (Figs. 3 and 4). 
Because mutagenesis at Crp4 disulfides did not induce loss of function, we considered alternative roles for the disulfide array, including protection of the peptide from degradation by the activating proteinase. Disulfide Bonds Protect Crp4 from Proteolysis by MMP-7-Production of functional mouse Paneth cell ␣-defensins requires that MMP-7 mediate proteolytic cleavage of inactive proCrps (5). To test whether the disulfide array protects the Crp4 moiety from MMP-7 proteolysis during activation, we first assayed for cleavage products of native and mutant Crp4 molecules exposed to MMP-7 (Fig. 5). As reported previously (8), native Crp4 was completely resistant to MMP-7 in vitro, but all Crp4 peptides with disrupted cystine pairings were degraded extensively (Fig. 5A). As expected, MMP-7 activated native proCrp4 as shown by AU-PAGE analyses (Fig. 5A) and in functional assays (Fig. 6A, and see below). Consistent with peptide structures, the major degradation products detected on the gels (Fig. 5A) all have increased mobilities relative to the uncleaved disulfide-deficient peptide, except for (C6A/C21A)-Crp4. Its high intrinsic mobility is due to the loss of native-like globular structure on proteolysis, whereas the degradation products of the other, random coil peptide variants increased in mobility. DISCUSSION The tridisulfide array is a universal and defining feature of the ␣-defensins (1,2,18,19), but Crp4 bactericidal activity does not require that the array be intact (Figs. 3 and 4). Similarly unanticipated were results showing that Crp4 variants lacking two or three disulfide bonds were more bactericidal against serovar Typhimurium than the parent molecule (Fig. 4). All Crp4 and proCrp4 disulfide mutants were degraded by MMP-7 at several internal positions as determined by N-terminal peptide sequence analysis (Figs. 5B and 7B), from which we conclude that the disulfide array protects the Crp4 ␣-defensin moiety during activating proteolysis. Although these studies have focused on the mouse Paneth cell pro-␣-defensin processing enzyme (5,20), similar findings have been observed for corresponding mutations in RMAD-4 and RED-4, myeloid and Paneth cell ␣-defensins, respectively (21,22), from rhesus macaque (not shown). We speculate that the disulfide array also may protect ␣-defensins from degradation in phagolysosomes, after release into the small intestinal lumen or in the extracellular environment at sites of inflammation. Of the one-, two-, and three-disulfide Crp4 mutants, C6A/ C21A adopts the most native-like structure and is the variant most resistant to MMP-7 induced degradation. If C6A/C21A so resembles native structure, why is it susceptible to proteolysis at all when Crp4 is completely resistant? The answer appears to be due to the enhanced molecular flexibility of this mutant relative to Crp4. Evidence for this possibility may be seen in the significantly broadened NMR signals for all of the amide protons in the C6A/C21A mutant relative the native peptide (compare the upper two traces in Fig. 2A) and in the reduced size of ␣H secondary shifts (compare the upper two traces in Fig. 2B). Signal broadening is particularly acute at residues 6 -8 and 25-26, and the ␣H signals for these residues are broadened beyond detection in (C6A/C21A)-Crp4. 
The enhanced mobility near Cys 6 reflects the removal of a crosslinking disulfide bond and potential disruption of the first strand of the triple-stranded ␤-sheet, whereas the broadening at Phe 25 -Leu 26 is associated with an extended hairpin turn between the second and third ␤-strands. This turn appears to be one of the major sites for proteolytic degradation of the mutant peptide with three cleavages occurring nearby, including one directly at the Phe 25 -Leu 26 peptide bond. In the structure of the rabbit kidney ␣-defensin RK-1 (23), this hairpin turn is relatively solvent-exposed and, by homology, is predicted to be exposed similarly in native Crp4 and more so in (C6A/C21A)-Crp4, where a disulfide bond that tethers this region to the molecular core is absent. Overall, RK-1 and Crp4 have similarly folded structures (not shown), even though their primary structures are quite different. Relative to Crp4, RK-1 contains two additional residues between Cys IV and Cys V , i.e. between strands 2 and 3. Possibly, the structure of the turn between these two strands would be less extended in Crp4 than in RK-1, but residues in the turn still would be solvent-accessible and a major site of proteolytic degradation. The enhanced flexibility of the (C6A/C21A)-Crp4 mutant presumably facilitates access to the enzyme active site, thus increasing the degree of proteolysis. The disulfide connectivities of Crp4 variants with just a single disrupted disulfide, C4A/29A, C6A/21A, and C11A/28A, were analyzed by MALDI-TOF MS after digestion with MMP-7 (see Fig. 5). For (C11A/C28A)-Crp4, the only disulfide connectivities consistent with the detected peptide masses of 2250.6, 878.0, 3110.4, 1868.2, 3310.9, and 2837.3 atomic mass units are the predicted Cys I -Cys VI and Cys II -Cys IV bonds, confirming the correct pairings for this peptide. On the basis of similar findings, we could exclude the possibility of Cys I -Cys III and Cys V -Cys VI disulfide pairings in (C6A/C21A)-Crp4 as well as Cys II -Cys III and Cys IV -Cys V disulfide bonds in (C4A/C29A)-Crp4, because no peptide masses consistent with those respective bonding patterns were detected. However, MALDI-TOF MS analysis of (C6A/C21A)-Crp4 MMP-7 digests was unable to distinguish correct Cys I -Cys VI and Cys III -Cys V bond pairings from a possible Cys I -C V and Cys III -Cys VI folded variant. Similarly, we could not differentiate between correct Cys II -Cys IV and Cys III -Cys V connectivities in (C4A/C29A)-Crp4 from a possible Cys II -Cys V and Cys III -Cys IV misfolded variant. Thus, in the case of these two mutants, the relation of peptide tertiary structure to activity is uncertain. Nevertheless, the disulfide pairings of (C11A/C28A)-Crp4, (C4A/C11A/C28A/C29A)-Crp4 with a solitary Cys II -Cys IV bond, and disulfide-null (C4A/C6A/ C11A/C21A/C28A/C29A)-Crp4 are unambiguous. Alterations in the tridisulfide array of ␤-defensin hBD-3 also have little effect on its microbicidal activity (24). Although ␣and ␤-defensins both have six Cys residues that form specific and invariant disulfide bond pairings (2,25), the spacing of ␣and ␤-defensin cysteines and their Cys-Cys pairings differ, and they have markedly different precursor structures. The ␣-defensin cystine connectivities are Cys I -Cys VI , Cys II -Cys IV , and Cys III -Cys V , and the pairings of ␤-defensins are Cys I -Cys V , Cys II -Cys IV , and Cys III -Cys VI , yet the peptides have similar folded conformations (26 -31). 
Of the six hBD-3 variants with mispaired Cys connectivities analyzed (32), the microbicidal activities of native hBD-3, mispaired variants, and disulfide-null hBD-3 were the same (24,32). Similarly, the bactericidal activity of bovine ␤-defensin BNBD-12 against E. coli was also independent of the disulfide array (33). The possible role of ␤-defensin disulfide connectivities in conferring resistance to proteolysis is unknown to our knowledge, perhaps because the mechanisms of ␤-defensin posttranslational processing remain obscure. Studies with model membranes support the view that Paneth cell and myeloid ␣-defensins kill their targets by permeabilizing the cell envelope, thus leading to dissipation of electrochemical gradients, although the mechanisms of individual peptides often differ (19,34). Mouse Crp4 induces graded leakage from quenched fluorophore-loaded large unilamellar vesicles (9,10,35,36), and preliminary results show that the Crp4 disulfide variants described here induce large unilamellar vesicle leakage by the same mechanism and at levels corresponding to their relative bactericidal activities. 2 Although the disulfide array has been thought to facilitate peptide-membrane interactions by maintaining a constrained amphipathic ␤-sheet structure, those interactions clearly are independent of disulfide bonding. Perhaps the disordered, random coil structures of the disulfide variants in aqueous solution (Fig. 2) assume the ␤-sheet structure of disulfide-stabilized Crp4 when in hydrophobic environments that mimic the lipid-water interface at the membrane surface. Alternatively, in the absence of constraints imposed by the disulfide array, the Crp4 molecule may adopt an unrelated configuration that retains amphipathicity and membrane-disruptive behavior. Transcripts coding for ␣-defensins with mutations at disulfide bonds accumulate in mouse small bowel. For example, C57BL/6 mouse small intestine expresses at least 12 ␣-defensin genes with mutations at varied Cys residue positions. For example, certain mutations are predicted to disrupt the Cys I -Cys VI disulfide bond, including (C6Y)-Crp (GenBank TM accession number AV070313) and two different (C6F)-Crps (AV-064537 and AV061023). An additional (C6F)-Crp mutant also has an Arg 35 to Cys substitution that could enable the formation of an alternative Cys I -Cys VI bond (AV070855). Double mutants of the Cys I -Cys VI linkage also exist as exemplified by (C1W/C6F)-Crp (AV067626) and (C1S/C6F)-Crp (AV067447). Three different (C5W)-Crp mutant peptides would disrupt the Cys III -Cys V bond (AV064900, AV070633, and AV066474), and a (C4F)-Crp peptide would lack the Cys II -Cys IV disulfide (AV-065642). In addition to these predicted single disulfide bond disruptions, (C1S/C2S/C3F/C6F)-Crp (AV066139) and (C3F/C-4F/C5S/C6F)-Crp (AV070606) mutants would lack all disulfides typical of ␣-defensins. Although the actual disulfide connectivities in these deduced Crp mutants are unknown, especially in vivo, our findings predict that the ␣-defensin component of these expressed proforms would be degraded during MMP-7-mediated activation. Given the demonstrated impact of Paneth cell ␣-defensins on enteric immunity (3), a loss of function caused by MMP-7-or trypsin-mediated proteolysis of naturally occurring ␣-defensin disulfide mutants could have adverse consequences for innate immunity in the small intestine.
Optomechanical Entanglement under Pulse Drive We report a study of optomechanical entanglement under the drive of one or a series of laser pulses with arbitrary detuning and different pulse shapes. Because of the non-existence of system steady state under pulsed driving field, we adopt a different approach from the standard treatment to optomechanical entanglement. The situation of the entanglement evolution in high temperature is also discussed. Introduction Due to their various interesting properties, optomechanical systems (OMS) are under extensive researches over recent years [1][2][3][4][5]. They are regarded as good platforms for realizing high precision detection [6,7], e.g. detection of gravitational wave [8,9], ultra-low temperature cooling, e.g. cooling the nano-mechanical oscillator to its ground-state [10][11][12][13], and macroscopic quantum states, e.g. "cat state" or macroscopic quantum entanglement [14][15][16][17][18][19][20][21][22], and others. An OMS is nonlinear by nature, so it is difficult to find its exact dynamical evolution in the regime of strong optomechanical coupling. In the weak coupling regime, the fluctuation expansion around the classical steady states is a standard method for studying any feature of an OMS, including optomechanical entanglement [14,15]. With the replacementsâ →ā + δâ andb →b + δb, the original cavity (mechanical) mode is expanded into the sum of the average valueā (b), as the steady state found from classical equations of motion, and its fluctuation δâ (δb). Then the initial Hamiltonian will be linearized so that the quantum Langevin equations about the fluctuations can be solved. The practical use of the fluctuation expansion approach, however, relies on the solutions of classical nonlinear dynamical equations, which often take the approximate forms. In the cases of continuous wave (CW) drives, such solutions are the classical steady states determined by Routh-Hurwitz criterion [23]. The explicit situations when there is no steady state include the pulse and blue detuned CW driving fields. A possible way to deal with these situations is the evolution decomposition method [24,25], which works in any regime without the restriction of the steady state condition. A number of new phenomena were discovered based on this approach. For instance, the entanglement under blue detuned drive is predicted to be higher and more robust against high temperature than that under red detuned drive. Moreover, it is found that quantum noise effect can be enhanced by drive intensity, and would destroy the entanglement accordingly, resulting in sudden death and sudden revival of OMS entanglement [24,25]. Here we will apply this method to investigate the OMS entanglement created by pulsed drives. As it is known to all, a blue detuned drive would "heat" an OMS. A blue detuned CW drive, with its longer interacting time with the system, will increase the system temperature continually. The decoherence accompanying the rising temperature would diminish and even destroy the entanglement from optomechanical coupling. If a pulsed drive is used instead, the heating of the system would not be accumulated to significant temperature, and thus have the corresponding decoherence safely neglected for the blue detuned drive. This is one motivation to consider optomechanical entanglement under pulsed drive. OMSs and also predicted that robust EPR steering (without requiring a low temperature reservoir) could be achieved by a square pulse [35]. 
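To fix ideas, the fluctuation-expansion step mentioned earlier can be written out schematically for a generic optomechanical coupling $g\,\hat a^\dagger\hat a\,(\hat b+\hat b^\dagger)$ (signs and conventions vary between papers, so this is intended only as orientation): substituting $\hat a=\bar a+\delta\hat a$ gives

$$\hat a^\dagger\hat a \;=\; |\bar a|^2+\bar a^*\delta\hat a+\bar a\,\delta\hat a^\dagger+\delta\hat a^\dagger\delta\hat a \;\approx\; |\bar a|^2+\bar a^*\delta\hat a+\bar a\,\delta\hat a^\dagger,$$

so that the cubic interaction is replaced by the classical radiation-pressure term $g|\bar a|^2(\hat b+\hat b^\dagger)$ plus the linearized coupling $g\big(\bar a^*\delta\hat a+\bar a\,\delta\hat a^\dagger\big)(\hat b+\hat b^\dagger)$, the term $g\,\delta\hat a^\dagger\delta\hat a\,(\hat b+\hat b^\dagger)$ being dropped as second order in the small fluctuations; the quantum Langevin equations are then written for the fluctuations under this linearized Hamiltonian.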
In this paper, we will present a detailed study of entanglement evolution under pulsed drives of different detunings and different shapes. With these evolutions, we will show that the created entanglement can even last for longer time than pulses themselves, and such entanglement is also robust against relatively high temperature when driven by blue detuned laser pulses. In addition, if a series of pulses continuously drive an OMS, nonzero entanglement could be preserved even when the gap time between the laser pulses is considerable. System Hamiltonian Here we consider an OMS driven by a single laser pulse or a series of laser pulses, which are given in the form ∑ j E j (t − j · t 0 )e iω 0 (t− j·t 0 ) with j being the number of pulses, t 0 being the gap time between the pulses, and ω 0 being the central frequency of the pulsed drive. The function E(t) represents the profile of the pulse. Under the pulsed drive, the system Hamiltonian in the interaction picture with respect to H 0 = ω câ †â + ω mb †b (h ≡ 1) is where g is the optomechanical coupling factor, ω c (ω m ) is the cavity (mechanical) oscillation frequency, ∆ 0 = ω c − ω 0 is the detuning. In the above equation, the first two terms describe the interaction between mechanical mode and cavity mode, while the last terms represents the driving on the cavity. The system we consider is an open system, i.e. the cavity and mechanical mode will damp with the rate κ (γ m ). The coupling to the reservoir can be described as the stochastic Hamiltonian [36]: whereξ c (ξ m ) is the stochastic Langevin noise operator. Under these factors, the joint evolution of the system and reservoir manifests in term of the evolution operator U S (t, 0) = Since the coupling and dissipation processes in the evolution are noncommutative, it is impossible to get a closed form of this time order exponential. Factorization of joint system-reservoir evolution operator Here we adopt a different approach from the standard fluctuation expansion method to study the system evolution. This approach is based on the factorization of evolution operator and is valid for arbitrary driving field including the pulsed ones [24,25]. The factorizations for the evolution operator is given as follows [24]: With these factorizations, the evolution operator can be simplified, so that it is possible to applied the factorized operators in succession to find the system observables that should be obtained by directly acting the original evolution operator U S (t, 0) on system operators or system's initial quantum state. In addition to OMS, this method has been applied to other physical systems recently [37-39]. The factorization procedure for our concerned system is as follows. At first, we implement a right factorization to separate the noise part U D (t, 0) = T exp{−i t 0 dτH D (τ)} out of the joint unitary operator: In the main Hamiltonian U D (t, τ)H S (τ)U † D (t, τ), the cavity, mechanical mode operatorsâ,b are transformed to are the induced quantum noise operators satisfying the following commutation relations: The transformed system operators therefore satisfy the equal-time commutation relation to the left side, and the cavity mode in the rest operator is determined by the transformation Finally, we factorize the term } out of the optomechanical coupling operator from the right side. 
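The right-factorization step used in these manipulations can be summarised by a standard operator identity (stated schematically; the further factorizations described in the text are not reproduced here). If $U_D(t,\tau)=\mathcal T\exp\{-i\int_\tau^t ds\,H_D(s)\}$ propagates the noise part alone, then

$$U_S(t,0)\;=\;\Big[\mathcal T\exp\Big\{-i\int_0^t d\tau\;U_D(t,\tau)\,H_S(\tau)\,U_D^\dagger(t,\tau)\Big\}\Big]\,U_D(t,0),$$

so that the noise propagator is peeled off to the right and the remaining time-ordered exponential is generated by the transformed Hamiltonian $U_D(t,\tau)H_S(\tau)U_D^\dagger(t,\tau)$ appearing in the text.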
To the first order of the optomechanical coupling constant g, we obtain the following effective Hamiltonian, which is a good approximation of the effective optomechanical coupling Hamiltonian for the weak coupling regime of an OMS. The joint unitary operator is therefore factorized as Since the OMS we consider is an open system, the joint initial state is the following product state where R(0) is the reservoir state. The operation U D (t, 0) and U K (t, 0) keep the joint initial state χ(0) invariant. So the expectation value of a system operatorÔ will be reduced to Entanglement calculation We work in the regime of g ≪ 1, so an initial Gaussian state will still be Gaussian after OMS evolution. For bipartite Gaussian states, the logarithmic negativity of the correlation matrix (CM) can well measure their entanglement. The CM is defined aŝ with the elementsV i j (t) = 0.5 û iû j +û jûi − û i û j , whereˆ u = (x c (t),p c (t),x m (t),p m (t)) T . The expectation values in the expression are calculated with Eq. (11). Then the corresponding logarithmic negativity is given as [40-42] where and To find the logarithmic negativity experimentally, one should measure the correlations û iû j for the cavity and mechanical modes. One proposal for doing so is discussed in [14]. More efforts should be taken to find the feasible ways to realize the measurement of logarithmic negativity. Since the drive operation U E (t, 0) only displaces the system operators, it will not contribute to CM elements and can be neglected in the calculation of logarithmic negativity. Considering the expectation value in Eq. (11), what we should take into account is only the effective Hamiltonian in Eq. (8). Under this Hamiltonian the evolution of system operators are determined by the following differential equations: The final term in the second equation can be neglected, since it only brings displacement which will not contribute to logarithmic negativity. The terms containingâ andb on the right side of the equations indicate the beamsplitter (BS) effect between the cavity and mechanical modes, while those containingâ † andb † indicate the squeezing effect which is the main cause of entanglement. Both two effects can be enhanced by the higher drive intensity, because each terms including the component D(τ). Moreover, one sees from the second term on the right side of each equation that a higher drive intensity will also enhance the noise effect. We rewrite Eq. (16) in the form wherê where R = ℜe[P(t, τ)] is the real part and I = ℑe[P(t, τ)] is the imaginary part with P(t, τ) = 2ge −(κ+γ m )(t−τ)/2 ∑ j D j (τ). Then the solution of above equation is found aŝ whereK(t, 0) = t 0 dτM(t, τ), and the function m(t, 0) is from the relationK 2 (t, 0) = m(t, 0)Î. Entanglement evolution under a single laser pulse With the solution in Eq. (18), one can obtain the entanglement evolution under arbitrary pulsed drive. The first one we discuss is the Gaussian pulse, whose profile is The pulses are displayed in the top of Fig. 1, with the different width ∆ω = 4ω m , 4/3ω m , 0.8ω m , 0.4ω m . One has the increased entanglement following the increase of pulse intensity. The entanglement keeps increasing though the pulse intensity has started to decrease. Only until the pulse almost disappears, does the entanglement begin to decrease and oscillate for a period. The entanglement will exist longer than the pulse duration. The entanglement under blue detuned central frequency [ Fig. 
1(a) and Fig.1(b)] is obviously larger than that under red detuned central frequency [ Fig. 1(c) and Fig. 1(d)]. However, a main difference of the pulse driven entanglement from the CW counterpart is that a spectrum of frequencies with disparate detunings give rise to the actual entanglement. As seen in Fig. 1, the entanglement values for the pulses of narrower bandwidth behave monotonously in both blue and red detuned regime for the central frequency, while those of larger bandwidth show the entanglement patterns of both blue and red detuned frequency components. The entanglement from red detuned frequency components oscillates with time [25], so this pattern always exists for the narrowest pulse (corresponding to the widest frequency bandwidth) in Fig. 1. As the result, the entanglement for the pulses of larger bandwidth show insignificant difference in Figs. 1(a) and 1(b), except that more oscillation pattern is introduced in 1(b) as the central frequency moves toward the red detuned regime. When the pulses' central frequency becomes red detuned, the oscillation for one of the pulses will disappear due to the suppression of entanglement for its whole frequency spectrum in the regime. On the other hand, given the same driving intensity, the entanglement will become higher with a narrower width. For comparison, we consider other pulses of different profiles, such as square pulse, triangle pulse, sawtooth pulse and trapezium pulse; see the top of Fig. 2. Their evolutions for the central frequency detuning ∆ 0 /ω m = −1 [ Fig. 2 (a)] are similar, so the different pulse shapes do not affect the entanglement much in this case. Meanwhile, difference appears in the evolution with red detuning ∆ 0 /ω m = 1 [ Fig. 2 (b)]. Compared with Gaussian pulse driven OMS, the entanglement values under the pulse drives in Fig. 2 are higher around the SQ resonant point (∆ 0 /ω m = −1). It can be explained by the fact that a Gaussian pulse contains more significant off-peak frequency modes, and the frequency components whose detunings are far away from SQ resonant point contribute less to the entanglement. This is also the reason for why a narrower bandwidth of Gaussian pulse is better for entanglement creation. Moreover, due to their more centered frequency spectra, the differences of the entanglement evolutions in the blue and red detuned regime become more obvious for the pulses in Fig. 2. Entanglement evolution under a series of laser pulses As show in Fig. 1, the entanglement could last for longer time than the pulse duration. The entanglement may survive if one more pulse drive the OMS before it disappears. Since the entanglement will be kept and raised in this case, the minimal value of entanglement will be nonzero and the peak value of entanglement can be slight larger than that under a single pulse. This phenomenon will occur if the gap time between the centers of the pulses is not too large. In Fig. 3, we show the entanglement evolution of the OMS driven by a series of Gaussian pulses. The gap time (∆(κt) = 8) is relatively large in Fig. 3(a), resulting in the repeated evolution pattern of the entanglement under single pulse driven OMS [see Fig. 3(b)]. When the gap time (∆(κt) = 5) is relatively small as in Fig. 3(c), the entanglement evolution will be totally different as shown in Fig. 3(d). The peak value is about 1.6 for the width ∆ω/ω m = 4 and 1.1 for the width ∆ω/ω m = 4/3, which are slightly larger than the ones (1.5 and 1, respectively) shown in Fig. 1. 
At the same time, the minimal value becomes nonzero (about 0.1) for the width ∆ω/ω m = 4. Especially, for the width ∆ω/ω m = 4/3 (the overlap of two pulses can be neglected in this case), the minimal value will be even larger (about 0.3). A rather stable entanglement (> 0.3) can therefore be achieved by pulsed drive, even the overlap of two pulses is not so obvious. Entanglement evolution under high temperature In the above discussions, the temperature is set to be T = 0. However, temperature is also a main factor that affects the existence of entanglement. In Fig. 4 we show the entanglement evolution under the condition n m = 10 4 while other conditions are the same as in Fig. 1. Obviously, the entanglement decreases with high temperature, and the phenomenon of entanglement sudden death and sudden revival becomes apparent. Through the comparison, one sees that the robustness of the entanglement against high temperature for the blue detuned frequency components is much more significant than that from red detuned ones. As it is also shown in these figures, the entanglement evolutions under higher temperature become more complicated, because they result from more intricate interplay of various physical factors (temperature, frequency bandwidth, central frequency detuning and drive intensity). The competition between the factors gives rise to the seemingly unexpected entanglement amplitudes in Figs. 4(b), 4(c) and 4(d). At the central frequency detunings in 4(b) and 4(c), the higher driving intensity of the pulse with the widest bandwidth actually suppresses the entanglement value, because the thermal noise effect killing the entanglement is enhanced by the driving strength [see Eq. (15)]. If the central frequency is exactly BS resonant as in 4(d), the suppression of entanglement for the whole spectrum becomes more significant with the temperature, leaving the remnant due to the far-away frequency components from the point, which is only possible for the pulse with the widest bandwidth. To achieve high and robust entanglement, blue detuning around ∆ 0 /ω m = −1 associated with narrow pulse bandwidth, should be the best choice. Conclusion We have depicted the optomechanical entanglement evolutions under various pulsed drives. Compared with CW drives, high and robust entanglement can also be created especially with pulses of blue detuning and narrow bandwidth. Such entanglement can last for long time with repeated series of driving pulses. The flexibility in engineering pulse frequencies and shapes enables one to achieve variety of entanglement evolution patterns for OMSs.
A study of migraine cases in a tertiary care hospital neurology outpatient department: demography, sub classification and clinical features Background: Recurrent headache disorders impose a substantial burden on headache sufferers, their families and society. In India, 15 to 20% of people suffer from migraine, with an adult female:male ratio of 2:1. This study was undertaken with the aim of documenting the different types of migraine and their clinical presentations among patients presenting to the Headache clinic, Neurology outpatient Department, Government Rajaji Hospital, Madurai during a one year period. Methods: The patients registered at the Headache clinic, Neurology outpatient Department, Government Rajaji Hospital, Madurai during the one year period between March 2009 and February 2010 with a diagnosis of migraine as per International Headache Society 2004 criteria were taken for this study. The clinical material was collected from the records and by patient interviews with a detailed pre-prepared proforma. The various parameters of the patients were compared, classified and analysed with specific reference to national and international studies. Results: Migraine was the commonest type of headache, comprising about 76% of total cases of headache. Migraine without aura (48%) was more common than migraine with aura (32%). Female preponderance was noticed in all subtypes of migraine, the age of onset was in the second and third decades for the majority of the subgroups of migraine, a positive family history was present in 45% of cases, and the headache was predominantly unilateral in presentation and temporal in location, lasting for 12 to 24 hours in the majority of cases. Conclusions: Migraine is the commonest type of headache in the patients observed in this study. Among the subtypes, migraine without aura is the commonest. The second and third decades are the commonest decades of onset. INTRODUCTION Headache is one of the chief complaints among patients attending the neurology outpatient department. More than 90% of patients with headache will have a primary headache. Primary headaches are defined as headaches that are not caused by an identifiable underlying structural, vascular or systemic illness. Hence the diagnosis of primary headache begins with the exclusion of secondary causes of headache. Migraine, along with Tension type headache and the Trigeminal autonomic cephalalgias, forms the group of primary headaches. Although Tension type headache is the most common kind of headache, patients with this type of headache rarely seek treatment unless it is daily in occurrence or severe. Hence migraine is the most common headache diagnosis for patients attending a headache clinic. The patient's history is the essential tool for diagnosing migraine. Migraine is among the most common disorders of the nervous system. They are pandemic and, in many cases, life-long conditions. Headache with neuralgia has been referred to since the times of the ancient Egyptians (1200 BC), with Hippocrates (460-377 BC) describing the visual aura in migraine and its relief through vomiting. 1 In India, 15 to 20% of people suffer from migraine with an adult female:male ratio of 2:1. 2 Migraine is a chronic, often inherited condition involving brain hypersensitivity and a lowered threshold for trigeminal vascular activation.
Recurrent migraine imposes a substantial burden on headache sufferers, their families and society, typically affecting individuals when they are in their teens or twenties, with peak prevalence around the age of 40. Migraine results in a marked decrease in patients' quality of life as measured by physical, mental and social health-related indices. Nevertheless, migraine often remains underdiagnosed and patients with migraine are undertreated. Although migraine is extraordinarily common, its epidemiology and clinical profile are only sparsely documented. The authors undertook this study with the aim of documenting the epidemiology, clinical profile and classification of migraine among patients presenting to the Headache Clinic, Neurology Outpatient Department, Government Rajaji Hospital, Madurai, during the one-year period between March 2009 and February 2010, and comparing the findings with previously published studies.

METHODS

Patients registered at the Headache Clinic, Neurology Outpatient Department, Government Rajaji Hospital, Madurai, during the one-year period between March 2009 and February 2010 were taken up for this study. Government Rajaji Hospital is a tertiary care teaching hospital attached to Madurai Medical College serving the public of Madurai district and nearby districts. Patients are referred to the headache clinic of the neurology department from other outpatient departments of the hospital and also from private clinics and primary health centres. The clinical material was collected from the patient records and by patient interviews with a detailed pre-prepared proforma.

RESULTS

Classification of headaches

Of the 382 cases of migraine, 186 (48.69%) patients had migraine without aura, 36 (9.42%) had migraine with aura, 90 (23.56%) had migraine which presented both with and without aura, 62 (16.23%) had complications of migraine and 8 (2.09%) had probable migraine, as shown in Figure 1. Of the 36 patients with migraine with aura, the predominant subtype was typical aura with headache (16 patients). The present study also included 6 patients with typical aura with non-migraine headache, 4 patients with typical aura without headache and 10 patients with basilar-type migraine. The study group did not include any patients with familial hemiplegic migraine or sporadic hemiplegic migraine, as shown in Figure 2. Of the 62 patients with complications of migraine, 48 had chronic migraine (12 with chronic migraine since onset and 36 with episodic headache converting to chronic migraine), 1 had migrainous infarction (right occipital infarction) and 13 had migraine-triggered seizures (migraine terminating as seizures: 9; migralepsy: 4). In the present study, the authors came across 52 patients with migraine and seizures: 13 had migraine-triggered seizures, 17 had post-ictal migrainous headache and 22 had migraine and coexistent seizures.

Age

Migraine was most common in the second and third decades; 249 of the 382 patients were between 11 and 30 years of age (11-20 years: 139 patients; 21-30 years: 110 patients; 31-40 years: 89 patients). Of the 13 patients with migraine-triggered seizures, 9 were in the age group of 11-20 years.
Basilar migraine was also found to be common in the age group of 11-20 years (7 of 10 patients), and chronic migraine was noticed predominantly in the fourth and fifth decades of life (35 of 48).

Gender distribution

The patients were analysed for gender distribution. Female patients dominated in the category of migraine, accounting for 262 of the 382 patients. Complications of migraine were also more common in female patients: of the 13 patients with migraine-triggered seizures, 10 were female and 3 were male, and chronic migraine was predominantly seen in females (32 of 48 patients).

Family predilection

Family history was positive in 174 patients (45.54%) with migraine.

Clinical presentations

Of the patients with migraine, 286 presented with unilateral headache; 193 of these experienced a shift of sides while 96 always had a unilateral headache. Most of the episodes of migrainous headache in our patients lasted from 12 to 24 hours (251 patients). The migrainous headaches were predominantly temporal (217 patients). Most of the patients experienced a throbbing type of headache (263 patients).

Symptoms, aggravating and relieving factors

The predominant premonitory symptom was fatigue (76 patients), followed by a sense of feeling low, irritability, yawning and overeating in a few patients. Associated symptoms, in order of occurrence, were nausea, photophobia, phonophobia, blurring of vision, vomiting and giddiness. Loss of consciousness was reported in 39 patients. Autonomic symptoms such as lacrimation, redness and transient syncope were reported in a few patients. The most common aggravating factor in our study group was mental stress, while physical stress, head bath, bright sunlight, lack of sleep, travel and consumption of chocolates were also commonly reported. Head bath as an aggravating factor was observed in 116 patients. The relieving factors were mostly rest, analgesics or topical applications, and sleep.

Aura

The preponderant type of aura reported in the studied migrainous patients was visual aura (107 patients), while sensory aura was reported in 17 patients and both types in 2 patients. The visual aura was predominantly in the form of flickering lights in 70 patients, while zig-zag lines, scintillating scotomas and fortification spectra were noted in 17, 10 and 5 patients respectively. The sensory aura was commonly in the form of paraesthesia (12 patients), as tabulated in Table 1.

Migraine-triggered seizures

Of the 13 patients who had migraine-triggered seizures, 9 had migraine terminating as seizures and 4 had migralepsy. On analysing the seizure pattern of these 13 patients, 7 had generalised tonic-clonic seizures (GTCS) and 6 had complex partial seizures (CPS).

Associated conditions

In the present study of 382 patients with migraine, the following clinical conditions were seen in association: healed granulomatous lesions on CT, seizures, hypertension and stroke.

Diagnostic tests

Electrophysiology. EEG was taken in 52 of the 382 patients with migraine, including all 13 patients with migraine-triggered seizures. Changes were seen in 8 of these 13 patients: 5 had spikes and sharp waves in the posterior head region and 3 had non-specific slowing in the posterior regions; there were no specific changes in the remaining 5 patients. Of the other 39 patients, non-specific slowing in the posterior regions was seen in 30 and the EEG was normal in 9.

CT scan. CT scan of the brain was taken in all 382 patients, of whom 40 had changes.
The most common change reported on CT scan of the brain was calcified granulomas in 36 patients, with gliosis in 3 patients and basal ganglia calcification in one patient. In all 19 patients with migraine-triggered seizures, CT scan of the brain was found to be normal.

DISCUSSION

Tension-type headaches are considered the most common form of headache in the general population, with a prevalence of nearly 80%, while the prevalence of migraine is pegged at 16% in various international studies. 3 In contrast, migraine is the more common form of headache reported in clinical practice. This variance is attributed to self-treatment of tension-type headaches by the general population. The variation reported in the present study correlates with the study of Lance et al. 4

In the present study of the 382 cases of migraine, 186 (48.69%) patients had migraine without aura, 36 (9.42%) had migraine with aura, 90 (23.56%) had migraine which presented both with and without aura, 62 (16.23%) had complications of migraine and 8 (2.09%) had probable migraine. The ratio of migraine without aura to migraine with aura in the present study is approximately 5:1, which correlates with international studies (Ropper AH). 5 However, the ratio narrows to about 1.5:1 if migraine without aura is compared against migraine with and without aura. In the present study, the 90 (23.56%) patients whose migraine presented both with and without aura were placed as a separate group. This group of patients has been identified and given particular mention in the International Classification of Headache Disorders 2004 classification: "Many patients who have frequent attacks with aura also have attacks without aura (code as 1.2 migraine with aura and 1.1 migraine without aura)." 6 Hence, in the present study, the authors have placed this group of patients as a separate sub-entity within the entity migraine.

Age distribution

Of the 382 patients with migraine, most were between 11 and 40 years of age (11-20 years: 139 patients; 21-30 years: 110 patients; 31-40 years: 89 patients). In this study, migraine with aura peaked around 12-17 years of age in males (5 of 10) and around 18-22 years in females (9 of 26). The onset and peak in both male and female patients in this study are 8-10 years later than those observed in a Western population by Stewart et al. According to that study, the incidence of migraine with aura in females peaked between ages 12 and 13 (14.1/1000 person-years), and in males, migraine with aura peaked in incidence several years earlier, around 5 years of age (6.6/1000 person-years). Before puberty, migraine prevalence is higher in boys than in girls. The peak incidence of basilar migraine in this study was around 10-15 years, which correlates with the international literature. As adolescence approaches, incidence and prevalence increase more rapidly in girls than in boys. 7 If migraine headaches persist beyond 40 years of age, there is a greater chance of transformation into chronic migraine.

Gender distribution

The patients were analysed for gender distribution. Of the 382 patients with migraine, 68.58% were female. Female predominance was noted in all groups, including migraine with aura, migraine without aura, basilar migraine, migraine-triggered seizures, migrainous infarction and chronic migraine. Menstrually related migraine was noticed in 24 patients, whereas pure menstrual migraine was present in 2 patients. (A brief arithmetic recheck of the subtype proportions quoted above is sketched below.)
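The proportions and the without-aura to with-aura ratio quoted in this discussion follow directly from the raw counts reported in the Results section; a minimal Python sketch of that arithmetic is given below. Only the counts stated above are used; nothing else is assumed.

```python
# Recompute the migraine subtype proportions reported in the Results section.
counts = {
    "migraine without aura": 186,
    "migraine with aura": 36,
    "migraine with and without aura": 90,
    "complications of migraine": 62,
    "probable migraine": 8,
}
total = sum(counts.values())  # 382 cases in all

for subtype, n in counts.items():
    print(f"{subtype}: {n}/{total} = {100 * n / total:.2f}%")

# Ratio of migraine without aura to migraine with aura (~5:1), and the narrower
# ratio when the "with and without aura" group is added to the comparison.
print("without aura : with aura =",
      round(counts["migraine without aura"] / counts["migraine with aura"], 1))
print("without aura : (with aura + with and without aura) =",
      round(counts["migraine without aura"]
            / (counts["migraine with aura"] + counts["migraine with and without aura"]), 1))
```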
The American Migraine Study-I (AMS-I) and AMS-II collected information from 15,000 households representative of the US population in 1989 and 1999. 10 Yet another study, the American Migraine Prevalence and Prevention study (AMPP), replicated the methods of AMS-I and AMS-II. In these three large studies, the prevalence of migraine was about 18% in women and 6% in men (Abu-Arefeh I et al). 8

Family predilection

Family history was positive in 174 patients (45.54%) with migraine. Of the 36 patients with migraine with aura, 20 had a first-degree relative suffering from headache. Russell MB et al have stated that first-degree relatives of patients with migraine with aura have a three- to four-fold increased risk of migraine, and the risk is two-fold in first-degree relatives of patients with migraine without aura. 9

Location and character

Of the patients with migraine, 286 presented with unilateral headache, with 193 of these experiencing a shift of sides while 96 always had a unilateral headache. The headache was predominantly temporal (217 patients). Most of the patients experienced a throbbing type of headache (263 patients), with a majority of cases (145) lasting for a duration of 12-24 hours. The predominant premonitory symptom was fatigue (76 patients), with nausea, photophobia, phonophobia and blurring of vision the commonest associated symptoms in order of occurrence. Loss of consciousness was reported in 39 patients.

Aura

Among the several types of aura, visual aura was the most common (102 of 126 cases), which is in concordance with the literature. Two patients, while on treatment for migraine headaches with aura, later had aura alone without headache. The visual aura was predominantly in the form of flickering lights in 70 patients, while zig-zag lines, scintillating scotomas and fortification spectra were noted in 17, 10 and 5 patients respectively. This is in correlation with most international studies (Christopher et al). 10 When patients with typical aura with migraine headache become older, their headache may lose migraine characteristics or disappear completely even though auras continue. Sensory aura was reported in 17 patients and both types in 7 patients. The sensory aura was commonly in the form of paraesthesia (12 patients).

Aggravating and relieving factors

In the present study, the most common aggravating factor was mental stress, while physical stress and lack of sleep were also commonly reported. Head bath as an aggravating factor was observed in 116 patients. A similar observation has been reported by Ravishankar et al. 11 That prospective study analysed this unusual trigger in 94 of 1000 Indian patients who fulfilled the International Headache Society criteria for migraine: in 11 patients, hair wash was the only trigger; in 45 patients hair wash was one of the triggers; and in 38 patients hair wash was a trigger acting concurrently and in combination with another common trigger. The effect of episodic and long-term prophylaxis in preventing this triggered headache was also analysed. The relieving factors were mostly rest, analgesic ingestion and sleep.

Associated clinical conditions

In the present study of 382 patients with migraine, the following clinical conditions were seen in association: evidence of healed granuloma, seizures, hypertension and stroke.

Chronic migraine

In the present study of the 382 cases of migraine, 62 (16.23%) patients had complications of migraine.
Of these 62 patients, 48 had chronic migraine (12 with chronic migraine since onset and 36 with episodic headache converting to chronic migraine), 1 had migrainous infarction (right occipital infarction) and 13 had migraine-triggered seizures (migraine terminating as seizures: 9; migralepsy: 4). In the present study, the most common complication observed in patients with migraine was transformation of migraine to chronic migraine or chronic daily headache. As chronicity developed, the migraine headache lost its episodic presentation, as tabulated in Table 2. Most of these patients with transformed migraine were patients with migraine without aura, which is concordant with the studies of Silberstein SD et al. 12 One patient was … 16 Headache can also be the sole or most predominant manifestation of epileptic seizures, though this is a relatively rare situation. 17

Diagnostic studies

Electrophysiology. EEG was taken in 52 of the 382 patients with migraine: 33 of the 52 showed non-specific slowing in the posterior region, 5 showed spikes and sharp waves in the occipital region, and 14 had no specific changes. The EEG features in the studied patients were sharp waves and spikes in the posterior head region, mainly the occipital region (5 patients with migraine-triggered seizures), seen more during the period of aura, with the EEG normal during the interval between attacks of migraine. This correlates with a large multicentre study by Beaumanoir A et al, which showed spikes and paroxysmal events in 12.5% of migraine patients compared with 0.7% of normal adult volunteers. 18

Radio-imaging studies. CT scan of the brain was taken in all 382 patients, of whom 40 had changes. The most common change reported on CT scan of the brain was calcified granulomas in 36 patients, with gliosis in 3 patients and basal ganglia calcification in 1 patient, all of which were incidental. Frishberg BM et al reviewed four CT scan studies, four MRI studies, and one combined MRI and CT study, totalling 897 scans of patients who had migraine. 19 These findings were combined with more recent reports of one CT study of 284 patients and six MRI studies of 444 patients, for a total of 1625 scans of patients who had various types of migraine. Other than white matter abnormalities, the studies showed no significant pathology except for four brain tumours and one arteriovenous malformation, all of which were incidental.

CONCLUSION

Migraine was the commonest type of headache, comprising about 76% of cases. Migraine without aura (48%) was more common than migraine with aura (32%). Female preponderance was noticed in all subtypes of migraine, with the age of onset in the second and third decades for the majority of the subgroups, except for basilar migraine, which was common in the first and second decades. Migraine pain was temporal in location, unilateral and throbbing in character, lasting 12 to 24 hours in the majority of cases. Chronic migraine, migraine-triggered seizures and migrainous infarction were the complications of migraine encountered in this study, in order of frequency of occurrence. Transformation to chronic migraine was more common from episodic forms and in patients with onset of migraine in their teens or twenties.
Waste residues from Opuntia ficus indica for peroxidase-mediated preparation of phenolic dimeric compounds

Graphical abstract

Introduction

Peroxidases (EC 1.11.1.7) are oxidoreductases that catalyse the oxidation of a diverse group of organic compounds using hydrogen peroxide as the ultimate electron acceptor. They are known for a variety of commercial applications and have been used in large-scale commercial processes [1,2]. Peroxidases are readily available enzymes characterised by high stability, activity and low substrate specificity that do not require additional cofactors. Therefore, they can be used in isolated form, obtained mainly from the roots of the horseradish plant as the main commercial source. The biocatalytic oxidation of guaiacol (o-methoxyphenol) and related compounds generates a variety of oligophenols (dimers to pentamers) and some other oxidation products [3]. The radical character of the intermediates formed in reactions catalysed by peroxidases constitutes an important drawback that hampers a more extensive use of these enzymes in chemical synthesis: the direct one-electron oxidation of the substrate molecule, involved in different coupling reactions, leads to the formation of a wide variety of polymeric products and decreased yields of target compounds, making their isolation a hard task [4].

In the current study we tested wastes of a locally available and highly consumed vegetable, Opuntia ficus indica, as an alternative source of natural peroxidases and studied their potential enzymatic activity. O. ficus indica is one of the cactus species most extensively cultivated in Mexico, owing both to the nutritional value and to the high digestibility of its fruits and cladodes. In fact, during 2014 total national production of this vegetable was above 8 × 10^5 tons (SIAP, Mexico). 1 Before consumption, cladodes are submitted to a process to remove the thorns produced in the meristem. This specific part of the plant is known for lignification processes in which peroxidases are highly involved. Moreover, this waste product accounts for approximately 30% of the dry weight of the cladodes and can therefore be considered a potentially important source of peroxidases. On the other hand, phenolic compounds are widely distributed in plants and are present in considerable amounts in fruits, vegetables and beverages of the normal human diet [5]. Compounds such as ortho-methoxy para-substituted phenols have attracted considerable attention due to their various biological and pharmacological activities, mainly as antioxidants [6,7]. Furthermore, some ortho-coupled dimer derivatives have been reported to present better biological properties than their monomeric counterparts [8][9][10][11][12][13][14][15]. Therefore, in the present paper we report the biotransformation of several phenolic compounds, which are well known for their antioxidant activity, with an enzymatic extract from Opuntia ficus indica waste products, yielding phenolic dimeric compounds, along with some preliminary results on the influence of H2O2 concentration and the use of a co-solvent.

Experimental Section

All chemicals, materials and commercial horseradish peroxidase C (HRP C) were purchased from Sigma-Aldrich or J.T. Baker. All chemicals were of analytical grade and were used without further purification. H2O2 was used as a 30% (v/v) solution in H2O.

Plant Material

The plant material used for preparation of the peroxidase crude extract was obtained from young O. ficus indica cladodes (or pads) measuring 6 to 8 cm in length, collected from local farmers in Milpa Alta (Mexico City).
Thorns were removed from the cladodes and then used for the preparation of the enzymatic extract.

Crude enzymatic extract

The crude enzymatic extract was obtained by homogenising 100 g of waste material with 300 mL of phosphate buffer, pH 7 (100 mM). The homogenised extract was centrifuged at 5000 rpm for 5 min at 4 °C, and the supernatant was stored at −65 °C until assays were performed.

Peroxidase Activity

Activity was determined spectrophotometrically by the change in absorbance at 590 nm due to benzidine oxidation. The reaction mixture contained 200 µL of benzidine solution (1% in ethanol), 10 µL of enzyme extract, 30% H2O2 solution (0.08 mM final concentration) and acetate buffer (100 mM, pH 5.0), in a total volume of 1.2 mL. The assay was performed at 25 °C using a Lambda 2S spectrophotometer.

Protein Determination

Protein was quantified through a spectrophotometric procedure based on the dye-binding method of Bradford (1976), 2 using bovine serum albumin (BSA) as the standard and a commercial Bradford reagent kit: 1 mL of the reagent was mixed with enzymatic extract (10 µL), and after 15 min the absorbance of the solution was measured at 595 nm.

Analytical methods

Gas chromatography-mass spectrometry analysis of the conversion degree was performed on a Shimadzu GC-MS QP 5050 instrument using a DB-5 (1% methyl phenyl silicone) capillary column purchased from Alltech Associates, USA (30 m, 0.32 mm i.d., 0.25 µm film thickness), equipped with an electron impact source (75 eV) and a quadrupole analyser. Helium was used as the carrier gas. Injection conditions in all cases were as follows: injector and detector temperature 250 °C, initial column temperature 150 °C (2 min), then raised to 250 °C at a rate of 10 °C min⁻¹. 13C-NMR (75 MHz) spectra were recorded on a Varian Gemini 300 with tetramethylsilane (TMS) as the internal reference and CDCl3 or CD3OD as solvents.

Preparative peroxidase-promoted dimerizations

A standard methodology for the reactions of all three phenolic compounds was set up using eugenol as the starting substrate: the reaction mixture contained 100 mM acetate buffer solution (pH 5.0), 4 × 10^4 enzymatic units (EU) from the O. ficus indica peroxidase enzymatic extract, 30% H2O2 solution (100 µL) and eugenol (0.3 mM), in a final volume of 40 mL. The biotransformation reaction was incubated at room temperature for 4 h at 250 rpm on an orbital shaker, after which the reaction was stopped with a suitable volume of ethyl acetate. The reaction was monitored every 30 min by taking 1 mL aliquots, extracting them with 2 mL of EtOAc, and developing them by TLC (hexane:EtOAc, 8:2 or chloroform:acetic acid, 9:1). After the reaction was completed, the crude product was purified.

Biotransformations of eugenol with different concentrations of H2O2

The reaction mixture contained, in a total volume of 40 mL, 4 × 10^4 EU of O. ficus indica peroxidase enzymatic extract, different H2O2 concentrations (0.3 mM, 0.6 mM or 0.9 mM), eugenol (0.3 mM) and the corresponding volume of acetate buffer. Reaction mixtures were incubated with continuous orbital shaking for 4 h at room temperature and 150 rpm. The enzymatic reaction mixture was extracted with ethyl acetate (1 × 200 mL), and the organic phase was dried over Na2SO4 and concentrated in a rotary evaporator. The product of the reaction mixture was purified by column chromatography (hexane:ethyl acetate, 7:3). The reaction was followed by TLC, taking 2 mL of the extract and developing with hexane/ethyl acetate (8:2). (The molar amounts of substrate and peroxide implied by these concentrations are worked out in the short sketch below.)
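As a check on the stoichiometry behind this peroxide series, the sketch below converts the stated concentrations and the 40 mL working volume into micromole amounts and H2O2:eugenol molar ratios. It is a minimal illustration using only the figures quoted above (0.3 mM eugenol; 0.3, 0.6 or 0.9 mM H2O2); it is not part of the authors' protocol.

```python
# Micromole amounts and molar ratios for the eugenol/H2O2 series described above.
volume_l = 0.040          # 40 mL reaction volume
eugenol_mm = 0.3          # mM, as stated in the protocol
h2o2_series_mm = [0.3, 0.6, 0.9]

eugenol_umol = eugenol_mm * 1e-3 * volume_l * 1e6   # mM -> mol/L, times L, to umol
print(f"eugenol: {eugenol_umol:.0f} umol")

for h2o2_mm in h2o2_series_mm:
    h2o2_umol = h2o2_mm * 1e-3 * volume_l * 1e6
    ratio = h2o2_umol / eugenol_umol
    print(f"H2O2 {h2o2_mm} mM -> {h2o2_umol:.0f} umol (H2O2:eugenol = {ratio:.0f}:1)")
```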
Biotransformations of eugenol with various solvent-water ratios

To a reaction mixture as described above (H2O2 0.3 mM), a water-miscible solvent (acetone, dioxane or THF, 10 mL) was added, following a buffer:solvent ratio of 3:1. The reaction time and the work-up of the crude reaction were exactly as stated previously.

Results and discussion

Determination of peroxidase activity and TLC

An initial goal was to determine the peroxidase activity in the thorns removed from the cladodes using a standard method. The assay showed an enzymatic activity of 12.51 U/mg of protein. Qualitative TLC results implied peroxidase-promoted bioconversions, as new products were detected for all proposed substrates and a decreased intensity of the starting material was observed (Fig. 2). Control experiments showed no formation of dimers in the absence of either the enzymatic crude extract or H2O2. All experiments were compared with biooxidation reactions performed with commercially available horseradish peroxidase (HRP) and compounds 1, 2 and 3. In all cases, products similar to those formed in the standard reaction were found (Fig. 2). In consequence, it was decided to conduct more specific assays for each of the substrates studied.

Biotransformation of ferulic acid catalysed by an Opuntia ficus indica peroxidase enzymatic extract

Compound 1, an in vivo substrate for horseradish peroxidase and other oxidoreductases, was partially consumed by the enzymatic extract, yielding a major compound (Fig. 2A and 2A', 30%) that was purified and whose structure was confirmed by 13C-NMR and MS as the symmetric dilactone 4 (Fig. 3). (A generic peak-area estimate of conversion degree of this kind is sketched at the end of this subsection.) The 13C-NMR spectrum presented only 10 well-differentiated signals, which, along with the MS data (M+ = 385), suggested the formation of a symmetric molecule such as 4, showing duplicated carbon signals. The data obtained were in accordance with previously published studies of this compound [17]. Several works report that ferulic acid dimers usually present better antioxidant activity than the monomer, due to an extended resonance that leads to more stable transient conjugated structures that quench free radical species [12]. Even though most of the dimers have better properties, compound 4 has been reported to have lower activity than ferulic acid [20]. Notwithstanding, when dilactone 4 is present in its open form it inhibits lipid peroxidation better than compound 1 [11], as the authors claimed that antioxidant capacity in cinnamic acid derivatives increases with more hydroxyl groups bound to the aromatic ring as well as with longer, uninterrupted conjugation within the molecule. These results were recently analysed through in silico studies [21], and the authors found the same correlation between extended conjugation and reactivity towards free radicals and antioxidant activity. Similar results were found by Jia et al. [12]. Furthermore, biooxidation of ferulic acid using the peroxidase from Opuntia ficus indica crude extract afforded a better yield than the traditional chemical catalysts applied in previous works. For example, Stafford and Brown [22] reported the synthesis of the ferulic acid dilactone by means of the chemical catalysts FeCl3 or (NH4)2S2O8/FeCl3 with yields of 18% and 20%, respectively. Additionally, in other enzymatic studies dilactone 4 was obtained as a product of peroxidase-promoted dimerization of ferulic acid using either fungal or vegetable sources of the enzyme, but yields did not exceed the aforementioned range [22][23][24].
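Since the conversion degree was followed by GC-MS, a minimal sketch of one common way to estimate it from chromatogram peak areas is given below. The equal-response-factor assumption and the example area values are illustrative only; they are not data from this study.

```python
# Rough conversion estimate from GC peak areas, assuming (for illustration only)
# equal detector response factors for substrate and product.
def conversion_percent(substrate_area: float, product_areas: list[float]) -> float:
    """Fraction of total integrated area attributable to products, in percent."""
    total = substrate_area + sum(product_areas)
    return 100.0 * sum(product_areas) / total

# Hypothetical peak areas (arbitrary units) for a partially consumed substrate.
example = conversion_percent(substrate_area=7.0e5, product_areas=[3.0e5])
print(f"approximate conversion: {example:.0f}%")   # -> 30%
```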
Biotransformation of eugenol 2 and isoeugenol 3 catalysed by an Opuntia ficus indica peroxidase enzymatic extract

Dimerization of phenolic compounds presents good application potential, as these products have been found to show interesting properties when compared with their respective monomers. For example, it has been reported that dimers of eugenol, such as compound 5 (Fig. 3), showed less cytotoxicity and greater anti-inflammatory activity than the monomer 2 [25][26][27]. Other authors have claimed that eugenol and isoeugenol dimers possess higher antitumoral activity [8], as well as promising use in antidiabetic drugs [15] and neuroprotective activity [13]. Therefore, the potential application of these molecules has triggered the study of their preparation by either chemical or enzymatic methods. For example, dimerization of eugenol had already been achieved using hydrogen peroxide and Fe(II) with yields of 20%, and with Cu(I) and amines with yields between 26 and 79% depending on the amine used [28]. Among several other studies, Llevot et al. (2016) studied the dimerization of phenolic compounds, including eugenol, using laccase-catalysed coupling reactions with excellent yields (87-96%).

The structure of compound 5 was determined by comparing its spectroscopic properties with its expected chemical arrangement, revealing symmetry in the molecule, as had already been observed for the dilactonic diferulate. The 13C-NMR spectrum showed only half the number of signals expected from the total number of carbons present in the dimer [29], while the MS fragmentation pattern was identical to that suggested previously for this compound by Krawczyk et al. [30]. The biocatalysed and synthetic products were compared for their antioxidant activity on a TLC plate stained with DPPH reagent (Fig. 4), with both substances showing the same Rf value and the same qualitative antioxidant activity. The preliminary yield for the dimeric oxidation of eugenol obtained in this study was 11%, which is lower than reported earlier [30]. Those authors described the coupling of eugenol and isoeugenol by means of horseradish peroxidase and hydrogen peroxide, affording two main dimeric compounds: 5, the most common product of ortho-oxidative coupling of eugenol, at 22% yield, and 6, obtained from isoeugenol 3, at 19% yield. On the other hand, compound 6 has been synthesised using conventional chemical catalysis in yields ranging from 35 to 95% [19,28,31]. Both the antioxidant activity of this compound and its ability to quench hydroxyl radicals have been tested by different research groups [32,33], proving better anti-inflammatory properties than eugenol [31,28,19]. In the context of this work, product 6 showed the best isolated yield of the three dimerized compounds, reaching 55%. Similarly, the structure of compound 6 was assigned by comparing NMR and MS data of the chemically synthesised molecule with already published information [18,29].

Fig. 3. Structure of products from dimerization reactions.

To our knowledge, compound 6 has been prepared previously through enzymatic oxidative coupling by Krawczyk et al. [30] and Sarkanen and Wallis [34] with yields ranging from 20% to 65%. Furthermore, it appeared that the presence of the guaiacol moiety was necessary to achieve such selectivity, as other cinnamic acid derivatives that were tried with this extract (e.g. p-coumaric and sinapic acids) did not present such behaviour (data not shown). In fact, some authors have found specific guaiacol peroxidases in different parts of the O. ficus indica species [42].
In this regard, it could not be proved whether one or more enzymes were involved in the transformations. On the one hand, a prior study reported the isolation of only one peroxidase from this type of plant [43]. On the other, there are reports of the extraction of several protein fractions from O. ficus indica that present peroxidase activity, thus suggesting the presence of more than one enzyme [44].

Biotransformations of eugenol (2) with different H2O2 concentrations

To evaluate the effect of experimental conditions on the outcome of the reaction, it was decided to work with the substrate that presented the lowest yield, compound 2. Therefore, the effect of H2O2 concentration and the use of different co-solvents were studied (Scheme 1). Since higher conversion yields were reported to be obtained when the concentration of peroxide was closer to that of the substrate [45,46], three different H2O2/eugenol ratios were tested: 1:1, 2:1 and 3:1 (Table 1). In all three cases the quantity of isolated product was higher. Yields for the 1:1 and 2:1 ratios increased considerably, from 11% (0.5:1 ratio) to 44% and 39%, respectively, whereas the 3:1 ratio gave a lower conversion (18%), although still better than the first reported. With the aim of understanding the observed results, it was decided to increase the H2O2/eugenol ratio further (8:1, 12:1 and 16:1). The higher the concentration of peroxide used, the less product 5 was detected (data not shown), suggesting the possible inactivation of the enzyme(s) as a consequence of the increased concentration of H2O2 [47]. Nonetheless, further experiments should be performed to determine the actual influence of H2O2 on the peroxidase extract used in this study.

Effect of water-miscible solvents on the biotransformations of eugenol (2)

Once the concentration of H2O2 in the enzymatic reaction with eugenol had been optimised, the effect of water-miscible solvents in the reaction medium was evaluated. A suitable reaction medium modifies the solubility of reactants, intermediates and products, hence altering the reaction outcome by potentially increasing the reactive contact between the substrate and the enzyme. Since eugenol solubility in water is rather low, three co-solvents that are polar, non-protic and water-miscible were examined. Acetone, dioxane and tetrahydrofuran (THF) readily solubilised the substrate at a 1:3 (solvent:buffer) ratio (Table 2). However, in terms of reaction yield, only acetone presented a beneficial effect on the production of compound 5, slightly increasing the yield (49%). Addition of dioxane to the reaction mixture decreased product formation (32%), while THF prevented the production of the dimer. Overall, the results showed no significant improvement in the bioconversion when a co-solvent was added. Although the use of these solvents is limited at various levels by pharmaceutical and food-control government agencies, the authors wanted to present a novel source of peroxidase enzyme activity from an otherwise waste material, preparing already known, interesting compounds with important biological activity.

Conclusion

In conclusion, the crude peroxidase extract obtained from Opuntia ficus indica cladode wastes showed remarkable activity toward a number of common phenolic compounds. Furthermore, biotransformations of ferulic acid, eugenol and isoeugenol to specific dimeric compounds were achieved.
Additionally, it appeared that the presence of the guaiacol moiety, common to all three compounds tested, was necessary, as other cinnamic acid derivatives did not present such behaviour (data not shown). Reaction yields were increased by adjusting the molar ratio between H2O2 and the substrate. The use of co-solvents did not improve reaction efficiency. Overall, a new source of peroxidase enzymatic activity from an abundant food residue was presented, and a mild methodology for the preparation of biologically active molecules was proposed.

Conflict of interest

None.
What next after GDP-based cost-effectiveness thresholds?

Public payers around the world are increasingly using cost-effectiveness thresholds (CETs) to assess the value for money of an intervention and make coverage decisions. However, there is still much confusion about the meaning and uses of the CET, how it should be calculated, and what constitutes an adequate evidence base for its formulation. One widely referenced and used threshold in the last decade has been 1-3 times GDP per capita, which is often attributed to the Commission on Macroeconomics and Health and the WHO guidelines on Choosing Interventions that are Cost Effective (WHO-CHOICE). For many reasons, however, this threshold has been widely criticised, which has led experts across the world, including the WHO, to discourage its use. This has left a vacuum for policy-makers and technical staff at a time when countries want to move towards Universal Health Coverage (UHC). This article seeks to address this gap by offering five practical options for decision-makers in low- and middle-income countries that can be used instead of the 1-3 GDP rule, to combine existing evidence with fair decision rules or to develop locally relevant CETs. It builds on the existing literature as well as an engagement with a group of experts and decision-makers working in low-, middle- and high-income countries.

Introduction

Public payers around the world are increasingly using Health Technology Assessment (HTA) to inform resource allocation decisions (Leech et al., 2018; MacQuilkan et al., 2018; Tantivess et al., 2017; World Health Organization, 2015). These decisions are often based on evidence of the expected additional intervention costs and health benefits, summarised as the incremental cost-effectiveness ratio (ICER). This is a measure of the value of the resources that are actually needed in a specific location and at a specific time to produce one unit of health (most commonly a quality-adjusted life year, QALY, gained or a disability-adjusted life year, DALY, averted). ICERs can be used to compare competing interventions or can be evaluated against a pre-defined decision rule referred to as a cost-effectiveness threshold (CET). The CET sets, on average, the maximum financial investment a public payer will commit to generate a unit of health (Cameron et al., 2018) and is typically used alongside other information to inform decisions around resource allocation in health, particularly around the introduction of new treatments and benefits. The use of CETs (and ICERs) is typically associated with a goal of health maximisation. They can also, however, be used in conjunction with other criteria. For instance, in Norway a CET was used to pursue health maximisation together with a fair distribution of health (Ottersen et al., 2016). While health system objectives might vary between jurisdictions, maximising health is one aspiration that we assume is widely shared within and across jurisdictions (Culyer, 2016).

There is still much confusion about the meaning and uses of the CET, how it should be calculated, and what constitutes an adequate evidence base for its formulation (Ochalek et al., 2015). There are three broad bases on which the CETs used by public payers are set: willingness to pay (WTP), precedence and opportunity cost (for further discussion, see Santos et al., 2018 and Vallejo-Torres et al., 2016). However, a commonly used approach following none of the three bases is to set the CET at 1-3 times GDP per capita, including in low- and middle-income countries (LMICs).
It is often attributed to the Commission on Macroeconomics and Health, and was later adopted in the WHO guidelines on Choosing Interventions that are Cost Effective (WHO-CHOICE) (World Health Organization Commission on Macroeconomics and Health, 2001). Using this approach, an intervention that averts one DALY for a cost less than GDP per capita is considered "very" cost-effective, but an intervention could still count as cost-effective if the ICER did not exceed three times GDP per capita. One rationale behind the rule is that GDP per capita is a proxy for earnings (Robinson et al., 2017). In other words, if an intervention averts a DALY at less than one GDP per capita, then its return on investment in the wider economy (through increased labour productivity) would offset its implementation costs. Because of its simplicity of use and interpretation, the 1-3 GDP threshold has gained much popularity in recent years. A recent review of cost-effectiveness analyses (CEA) found that 66% of published research studies between 2000 and 2015 in LMICs used a GDP-based CET (Leech et al., 2018).

The 1-3 GDP per capita criterion in the Commission on Macroeconomics and Health report was, however, never intended to be used to determine CETs. It was to be used to value health in benefit-cost analyses (e.g. to make the case for resource allocation to health as opposed to other sectors) (Robinson et al., 2017). Moreover, GDP-based CETs have no direct relation to a country's healthcare budget, technical capacity, population preferences or social values (Marseille et al., 2015). Several experts have also shown that GDP-based CETs can lead to the adoption of interventions that are not in practice locally affordable (Marseille et al., 2015). One may argue that there is no harm in casting the net wide by setting a high CET. This is false, however, because spending health resources inevitably creates opportunity costs. Health opportunity costs arise because resources committed to one intervention are no longer available to fund alternative, perhaps more cost-effective, interventions. As a result, allocating resources to an intervention that would not be included by reference to a more realistic CET can paradoxically result in a loss of health and an increase in avoidable deaths, by displacing more health than it creates (Revill et al., 2018). Health opportunity costs are higher in LMICs because spending on the wrong interventions might deplete a country's resources to pay for affordable and effective interventions. In stylised terms, in a country where a life could be saved by spending $1000, the misallocation of that same $1000 will cause a death (a stylised calculation of this kind is sketched at the end of this section). These considerations have contributed to a growing unease with the use of GDP-based CETs, especially in LMICs, given how widely they are used in research (Leech et al., 2018). More recently, several experts, including some at the WHO, where the practice has been referenced for decades, have advised against the use of GDP-based CETs as a sole decision rule (Bertram et al., 2016). The retraction of the 1-3 GDP rule leaves a vacuum for policymakers and technical staff at a time when LMICs are aspiring to Universal Health Coverage (UHC) and making decisions that will set the path of health policy and spending for decades to come.
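To make the opportunity-cost argument concrete, the sketch below computes the net population health effect of funding an intervention, given an assumed supply-side CET that summarises what one unit of spending displaces elsewhere in the system. All numbers are hypothetical and purely illustrative; they are not estimates from any particular country.

```python
# Stylised net health effect of funding an intervention under a fixed budget.
# Health forgone elsewhere is approximated as (extra spending) / (supply-side CET),
# i.e. the DALYs the displaced spending would otherwise have averted.

def net_dalys_averted(extra_cost: float, dalys_averted: float, cet: float) -> float:
    """DALYs averted by the new intervention minus DALYs lost through displacement."""
    health_displaced = extra_cost / cet
    return dalys_averted - health_displaced

# Hypothetical example: an intervention costing $5 million that averts 3,000 DALYs,
# in a system where (by assumption) $1,000 of displaced spending averts one DALY.
print(net_dalys_averted(extra_cost=5_000_000, dalys_averted=3_000, cet=1_000))
# -> -2000.0: despite averting health, funding it would cause a net loss of 2,000 DALYs.
```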
As others have observed (Global Burden of Disease Health Financing Collaborator Network, 2019), mobilising additional resources for health is a lengthy and challenging process, making decision rules such as CETs, which help ensure that available spending is used well, all the more important. An appropriately set CET could support the realisation of the most generous possible form of UHC by ensuring that existing resources are primarily directed toward cost-effective interventions. In addition, countries have been encouraged to develop in-country capacity and institutions to support the use of HTA in decision-making. This is fully supported by the WHO and is reflected in World Health Assembly resolution WHA67/23 (World Health Organisation, 2014). In several countries (e.g. Indonesia, Kenya, Ghana, South Africa, India and the Philippines), plans to institutionalise HTA or set up HTA committees were …

The theory of CETs is discussed elsewhere, and reviews document how countries with existing CETs have defined them (Cameron et al., 2018; Santos et al., 2018; Thokala et al., 2018; Vallejo-Torres et al., 2016). Shortcomings of existing approaches to CETs, not exclusively focusing on GDP-based CETs, have also been discussed (Bertram et al., 2016; Marseille et al., 2015; Newall et al., 2014; Vallejo-Torres et al., 2016). What is missing is the creation of practical alternatives to the use of GDP-based thresholds. This article seeks to address this gap by offering five practical options for developing locally relevant CETs (i.e. ones that are informed by local data) or utilising existing evidence, complementary to CEA, to support decision-makers facing urgent resource allocation decisions. The paper builds on a consultation between academics, country technical staff and global donors from high-, middle- and low-income countries (although not representative), organised by the International Decision Support Initiative (Secretariat based in the UK), the Centre for Health Economics (University of York) and the Health Intervention and Technology Assessment Program (Thailand), at the Rockefeller Foundation Bellagio Centre between December 3 and 7, 2018. The group spent three and a half days discussing the role of CETs in achieving UHC in LMICs (for more information about the meeting and its participants, see https://www.idsihealth.org/blog/developing-cost-effectiveness-thresholds-to-support-universal-health-coverage/).

Our conversation focused on supply-side approaches, for one major reason. The willingness-to-pay approach raises the question of whose willingness matters, a political matter on which we do not feel able to advise, and can also lead to the use of aspirational CETs that are not connected to budget constraints or health opportunity costs. The supply-side approach, by contrast, is focused on the resources available, how they are currently used, and what is most likely to be sacrificed when they are used in one way rather than another. These are concrete matters that decision-makers face on a daily basis. During this meeting, our group collectively defined five options that can replace the GDP-based threshold (in the absence of a formal CET) when decision-makers are faced with a new intervention that they need to consider. We have ordered the options from the least to the most resource- and data-intensive, with option 5 being a within-country empirical estimation of a supply-side CET that reflects health opportunity costs. The options are not mutually exclusive: it is possible to combine several approaches according to context and need.

Option 1. Use existing estimates from cross-country studies

Two studies (Woods et al., 2016; Ochalek et al., 2015) apply country-specific data on health expenditure, epidemiology and demography to calculate a range of cost-per-DALY-averted threshold estimates in LMICs.
Country-by-country estimates are available in their supplemental materials. To date, those two studies are the best attempts to estimate health opportunity costs using cross-country data sources, despite methodological caveats (data limitations and the use of strong assumptions are acknowledged in both papers). Although they adopt different approaches, both Woods et al. (2016) and Ochalek et al. (2018) estimate CETs averaging (on an unweighted basis) roughly half of GDP per capita, albeit with a substantial range. On this basis, if a GDP-based rule is applied, it would probably be a secure rule to deem as cost-ineffective all interventions averting 1 DALY at more than 1 GDP per capita. Half of GDP per capita is more in accord with countries' realities than 1-3 times GDP per capita, and could be used as an interim rule of thumb in place of the 1-3 GDP rule. A handful of recent papers have already started using half of GDP per capita (in preparing this commentary, we searched the cost-effectiveness analyses conducted in LMICs in the last five years using the TUFTS database; conclusions from this search will be the subject of a separate piece focusing on the use of CETs in the past five years, forthcoming). Francke et al. (2016) estimated the cost-effectiveness of diagnosis of HIV infection in early infancy in South Africa and, based on 'emerging literature', used a CET of half of GDP per capita. Other studies also used such a CET, although none provided a justification for doing so (Bilcke et al., 2019; Campos et al., 2018; Mezei et al., 2018). Finally, half of GDP per capita was also referenced in Disease Control Priorities 3 (DCP3) as an example CET in highly resource-constrained countries (Watkins et al., 2017). Half of GDP per capita will, however, underestimate or overestimate health opportunity costs in roughly half of LMICs (Ochalek et al., 2020b).

Option 2. Use existing evidence from other settings

Shortcuts can sometimes offer an informed way forward when decision-makers are uncertain about appropriate methodologies or lack the necessary skills and evidence base for more sophisticated procedures. One shortcut could be to look at evidence from elsewhere by asking the following questions:

• Have regulatory authorities such as the Food and Drug Administration licensed this product, and for which indication? This question can first identify 'wasted buys' (i.e. interventions that have a harmful or non-beneficial effect for a patient) if the intervention under consideration was not licensed (for the right indication), without identifying a CET or conducting a CEA. For instance, a review of Romanian procurement decisions found that bevacizumab was being used for the treatment of metastatic breast cancer, despite having been withdrawn from FDA approval for this indication (Lopert et al., 2013; US Food & Drug Administration, 2011). Other agencies in charge of ensuring the safety, efficacy and security of drugs and products can also be considered.

• Was this rejected for funding elsewhere? The National Institute for Health and Care Excellence (NICE) in the UK publishes its technology appraisal guidance, which contains information on the technology under consideration and the accompanying recommendations.
NICE's negative recommendations can be a useful starting point, since interventions not recommended in a high-income country are very unlikely to be appropriate choices in LMICs, unless the United Kingdom's health system and financial capacity are thought to be very different from those of the country under consideration. On the other hand, a NICE positive recommendation is not to be followed slavishly in LMICs, because of differential health opportunity costs. CEA estimates are not always transferable across settings, and there is little guidance on how to make decisions about the suitability of estimates in a local context (Drummond et al., 2009). Nonetheless, if an intervention was found not to be cost-effective in a high-income setting, then it is unlikely to be cost-effective in an LMIC; important differences in disease epidemiology, intervention costs or health state preferences would be needed to warrant its adoption in an LMIC.

Option 3. ICERs and budget impact to inform cost-effectiveness and affordability

Budget impact analyses (BIA) can also support decision-makers in determining whether an intervention is affordable to the country (Bilinski et al., 2017). BIA is a method of assessing the predicted short-term changes in expenditure if a new intervention were introduced. It reflects not only the total cost of its introduction, but also coverage and uptake rates, as well as potential new health costs (or savings) (Sullivan et al., 2014). There is often a disconnect between the cost-effectiveness of an intervention and its affordability to a country (Bilinski et al., 2017; Howdon et al., 2019; Lomas, 2019; Wiseman et al., 2016). Presenting BIA alongside CEA can ensure that decision-makers are able to anticipate the resource implications of a new intervention for the allocation of their budget (Mohara et al., 2012). For instance, treatment of hepatitis C was found to be cost-effective in many settings, but providing universal access to all eligible patients would have significant resource implications, even in middle-income countries (Urrutia et al., 2016). In the United Kingdom, treatment of hepatitis C was found to be cost-effective, but this decision proved controversial due to its budget impact (Lomas et al., 2018). Even in countries where a CET is used to inform policy, there is growing consideration of BIA. In Thailand, the ICER is presented alongside the budget impact; for instance, the inclusion of imiglucerase for Gaucher disease type 1 was approved owing to the low budget impact, equity concerns and disease severity (terminal condition), even though the treatment was well above the Thai CET (Leelahavarong, 2019). Since 2017, the budget impact for the first three years of use has been assessed in the United Kingdom; if it exceeds a certain threshold for the entire National Health Service (currently £20 million), a phased implementation or price negotiation with manufacturers is initiated (National Institute for Health and Care Excellence, 2018). However, Bilinski et al. (2017) report that fewer than 3% of the 384 published CEAs included in their review contained a full report of the BIA.

Option 4. A league table for Health Benefits Package design

A league table is a list of health interventions in order of their ICER. It can be a useful approach to allow decision-makers to appraise a wide range of interventions in one summary table.
When considering a new inclusion to an existing Health Benefits Package (HBP), a league table can be used to identify the least cost-effective intervention that has been funded under the HBP, which can serve as a benchmark for inferring the maximum investment the country was willing to commit to producing an additional unit of health when developing the HBP, or for initiating a more detailed evaluation of the new intervention's likely cost-effectiveness. In other words, the league table helps identify a proxy for the shadow CET. This approach can be appropriate if an existing package of services is available in the country and ICERs can be derived for a reasonable number of the interventions included in this package. Using this method, decision-makers could gain confidence that a new entrant would not be included unless it produced more health benefit than the least cost-effective intervention already covered. On the other hand, when developing an HBP de novo, a league table can also be used to set a CET when combined with data on coverage and utilisation. For this, the budget envelope will need to be defined from the outset. This may not be easy, especially in countries where contributions from external partners are significant. For instance, in a study in Malawi, Ochalek et al. (2018) highlight that donor funds (often off budget, disbursed through conditionalities) make up 70% of total health expenditure. This creates uncertainty about how the budget line is set (the authors calculate the budget line considering all funding, regardless of source). In this option, the budget line determines the CET: a league table is constructed in descending order of cost-effectiveness, and estimates of utilisation are used to calculate the budget impact of each intervention. Culyer (2016) uses the metaphor of a bookshelf of healthcare interventions, in which each book is ranked according to its height (i.e. its effectiveness-to-cost ratio) and the thickness of the book represents the cost of providing the intervention (i.e. the budget impact). The threshold corresponds to the least cost-effective intervention affordable to the country before the (fixed) budget is exhausted (a minimal sketch of this bookshelf calculation is given below). This approach was reported to have been implemented in Oregon's Medicaid scheme in the 1990s to define health benefits (although a review (Tengs et al., 1996) later showed that there was no correlation between the final benefits list and the economic literature or Oregon's own cost-effectiveness data). While the data requirements for this option may appear high to some, they can be much reduced by using expert opinion and international evidence to identify a narrower range of likely candidates for more detailed evaluation. This enables decision-makers to focus their attention on a manageable number of possible interventions together with their relevant uncertainties. One challenge in LMICs is the low availability of evidence on cost-effectiveness, as well as differences in the methodological specifications employed in studies (Drummond et al., 1993; Mauskopf et al., 2003); comparing them usefully requires local epidemiological and economic skills. However, there are several global sources which can be compiled to inform country-level estimates: DCP3, the TUFTS GHCEA Registry and WHO-CHOICE. These sources were used by the Ochalek et al. (2018) study, although important data gaps remained and were highlighted as a limitation of that study.
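A minimal sketch of the 'bookshelf' calculation described above is given below: interventions are sorted by their cost per DALY averted, the cumulative budget impact is tracked, and the ICER of the last affordable intervention serves as an implied threshold. The interventions, costs and budget figure are hypothetical, chosen only to illustrate the mechanics.

```python
# League-table ("bookshelf") sketch: rank interventions by cost-effectiveness,
# fund them in order until the fixed budget is exhausted, and read off the
# ICER of the last intervention funded as an implied threshold.

interventions = [
    # (name, cost per DALY averted, total budget impact) -- hypothetical figures
    ("Intervention A", 50, 2_000_000),
    ("Intervention B", 120, 5_000_000),
    ("Intervention C", 400, 3_000_000),
    ("Intervention D", 900, 6_000_000),
    ("Intervention E", 2_500, 4_000_000),
]
budget = 12_000_000  # fixed budget envelope (hypothetical)

spent = 0.0
implied_threshold = None
for name, icer, budget_impact in sorted(interventions, key=lambda x: x[1]):
    if spent + budget_impact > budget:
        break                      # the shelf is full; this book does not fit
    spent += budget_impact
    implied_threshold = icer       # ICER of the last intervention that fits
    print(f"fund {name}: ICER {icer} per DALY averted, cumulative spend {spent:,.0f}")

print("implied threshold (cost per DALY averted):", implied_threshold)
```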
In Ethiopia, a similar approach was used to develop the Essential Health Services Package (Ministry of Health Ethiopia, 2019). It is worth noting that WHO-CHOICE presents average cost-effectiveness ratios (ACERs) instead of incremental ones (Arnold et al., 2019). This is sometimes raised as a concern because average cost-effectiveness ratios compare interventions with a do-nothing scenario, which is only very rarely an appropriate comparator (O'Day & Campbell, 2016).

Option 5. Estimating a health opportunity cost CET using within-country data

Notwithstanding the different approaches to defining a CET, this group found that defining CETs based on health opportunity costs using within-country data was particularly suitable in LMICs, given the high opportunity costs created by severe budget constraints. However, each country may want to develop its own national CET, relevant to its own situation. Relying on health opportunity cost estimates makes it possible to estimate whether the health gains produced by an intervention are greater than the health lost from displacing other interventions in other parts of the health system (Claxton et al., 2015). This approach derives from the goal of health maximisation. Where health must be sacrificed to improve distributional outcomes (or to meet goals other than health maximisation), a health opportunity cost CET can help quantify the trade-off. Unlike the WTP method, which bears no link to public budgets, the health opportunity cost method calibrates the CET against the reality of local budget constraints (Brouwer et al., 2019; Leech et al., 2018). The seminal work of Claxton et al. (2015) in the UK pioneered the estimation of health opportunity cost CETs. They used very detailed programme budgeting data from the English National Health Service (NHS) to estimate a health opportunity cost CET, which was much lower (£12,936) than the one then applied (£20,000-30,000, and up to £100,000 in certain cases). Attempts to apply a similar estimation framework have been made in China, Indonesia, India and the Republic of South Africa (Edoka & Stacey, 2020; Ochalek et al., 2020), although with different methods and data (given the paucity of the latter in LMICs). These estimates have two parts: estimating the elasticities of health outcomes with respect to health expenditure using an econometric analysis, and translating the elasticity estimates into health opportunity cost thresholds. In order to estimate health spending elasticities, data on health expenditure and health outcomes (e.g. mortality rates, DALYs or QALYs, ideally age- and gender-specific) at a low level of aggregation (e.g. local health authorities, or districts and provinces) are required. Ideally, these data need to be collected across several time periods or years. Furthermore, the estimation of health spending elasticities can be strengthened by controlling for potential confounders (e.g. poverty, literacy rate) and by applying robust estimation strategies to account for unobserved heterogeneity and reverse causality (Edoka, 2019). One such approach consists of using an instrumental variable (IV), i.e. a variable that affects the health outcome only indirectly, through its impact on health expenditure. IVs should be selected by researchers to fit the local context. (A stylised end-to-end calculation, from an elasticity estimate to a cost-per-DALY threshold, is sketched below.)
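The sketch below works through that two-step logic in its simplest possible form: given an already estimated elasticity of DALY burden with respect to health spending, a marginal increase in spending is converted into DALYs averted and hence into a cost per DALY averted. This is a stylised simplification for illustration only; the methods of Claxton et al. (2015) and Ochalek et al. (2018) involve considerably more steps (e.g. converting mortality effects into DALYs by age and sex), and all input numbers here are hypothetical.

```python
# Stylised marginal cost-per-DALY-averted ("supply-side CET") calculation.
# Assumes the elasticity of the DALY burden with respect to health expenditure
# has already been estimated econometrically (e.g. with an instrumental variable).

def implied_threshold(total_spend: float, daly_burden: float, elasticity: float,
                      spend_increase_pct: float = 1.0) -> float:
    """Cost per DALY averted implied by a small proportional increase in spending."""
    extra_spend = total_spend * spend_increase_pct / 100.0
    # elasticity: % reduction in DALY burden per 1% increase in spending
    dalys_averted = daly_burden * (elasticity * spend_increase_pct / 100.0)
    return extra_spend / dalys_averted

# Hypothetical country: $2 billion public health spend, 10 million DALYs of burden,
# and an assumed spending elasticity of 0.1 (a 1% spend rise averts 0.1% of DALYs).
print(f"{implied_threshold(2e9, 10e6, 0.1):,.0f} per DALY averted")  # -> 2,000
```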
Elasticities then need to be translated into population estimates of cost per DALY averted or QALY gained. If researchers estimated elasticities using mortality data, those estimates must be converted into QALYs or DALYs using additional assumptions; this requires data on the age and gender structure of the population as well as the morbidity burden of disease. It is worth noting that if the elasticities are estimated from a subset of the population (e.g. children), they will need to be extrapolated to the whole population. There are several sources of uncertainty attached to this approach, as there are to other approaches. Given the shortage of vital population statistics, health outcomes are often drawn from survey data, which come with their own shortcomings. Moreover, data on health expenditure are often incomplete (e.g. missing budget items), poorly collected (e.g. inconsistent recording practice) and unavailable at a low level of aggregation or across several years. There may also be uncertainty stemming from the methods employed (e.g. whether an appropriate instrumental variable has been used).
Discussion
Setting priorities is seen, more than ever before, as a prerequisite for achieving global development goals and UHC (Wiseman et al., 2016). This prerequisite was recognised at the United Nations High-Level Meeting on UHC in 2019 (United Nations General Assembly, 2019). There is now a push for using HTA across the world to drive more efficient resource allocation in health. Addressing the vacuum left by the abandonment of the 1-3 times GDP per capita CET has therefore become central to determining what services or interventions will be included or excluded within the UHC agenda. In addition, a CET or clear decision rule can signal maturity in resource allocation practices for the health budget to Treasuries or Ministries of Finance. This may support further investment in the health sector, especially in LMICs where health receives low priority within the broader government budgeting process.
This article lays out five alternatives to GDP-based CETs for decision-makers faced with urgent resource allocation decisions, building on our meeting, which brought together a selected group of practitioners and researchers. Estimating a health opportunity cost CET using local data is the long-run solution for LMICs since it explicitly links CETs to budget constraints and helps articulate the trade-offs that inevitably arise in coverage decisions. Stakeholders should understand how a CET can be estimated, what the possible alternatives are, and be able to assess the adequacy of the arrangements in, or proposed for, their country. This will require engagement and communication from the onset. While an empirically estimated supply-side CET using local data should be the long-term aim for all countries, the other four suggestions provide LMICs with tools to structure what is often a difficult ad hoc conversation on value for money and affordability. This intermediate step would represent a significant improvement on current practice, and the first four options require no additional or only modest additional resources.
In combination, these suggestions can help form deliberative processes that are as evidence-based as possible and that face up explicitly to the complexity of the choices confronting decision-makers (Baltussen et al., 2016; Chalkidou et al., 2016). The dangers of applying thresholds that are set too high have been discussed widely elsewhere (Bertram et al., 2016; Leech et al., 2018; Marseille et al., 2015). It is likely that our proposed approach would lead to more conservative CETs than the 1-3 times GDP per capita rule. For this reason, it is worth considering the implications and risks of under-estimating the CET. The first obvious consequence would be that cost-effective interventions would be mistakenly ruled out, causing a loss of health at the population level relative to what would have been possible if resources were fully allocated to cost-effective interventions. Some have also argued that a low CET reduces innovation by discouraging manufacturers from seeking to develop new products. Finally, it may be thought that a more conservative CET is incompatible with other social objectives of the healthcare system that do not align, or may even conflict, with the goal of health maximisation (e.g. priority to the poor or, more broadly, equity). These concerns need to be addressed seriously.
Further research should help make estimates of a local CET based on health opportunity costs more reliable. Further work on the LMIC estimates of Woods et al. (2016) and Ochalek et al. (2018) would improve the methods and reduce some of the uncertainty and data challenges. Moreover, it is worth noting that CETs are not only used as a simple inclusion/exclusion rule, but also as a basis for price negotiation with manufacturers. In Thailand, economic evaluation has successfully been used to bring down drug prices: for instance, the price of Tenofovir was cut by more than two-thirds from the original to the negotiated price using the CET. On innovation, there is growing evidence that the vast majority of new products approved for use are only marginal improvements on existing ones, and new market introductions are often priced well above existing thresholds, especially in LMICs. For example, a discussion of cancer drugs highlights that new introductions are often 'prohibitively expensive' and therefore unaffordable for publicly funded systems in LMICs (Gyawali & Sullivan, 2017). Recent studies have also pointed out that high CETs create perverse price incentives, as manufacturers can use the CET to calculate the maximum ceiling price at which their products will be accepted (Gronde et al., 2017). There is no evidence that high aspirational CETs have encouraged innovation or access to novel treatments (Claxton et al., 2009), so the disincentive, if that is what it is, of lower CETs may be similarly unimportant. More important is for LMICs, individually or collectively, to identify the kinds of innovation they would most like to see and then to engage in a discussion with manufacturers and other stakeholders about suitable incentives (or the removal of disincentives). The inclusion of social objectives other than health maximisation, such as equity, positive discrimination, and managerial capacity at various healthcare delivery levels, is independent of the level at which a threshold is set, and should be discussed alongside cost-effectiveness in any HTA framework (Cookson, 2016).
More realistic CETs might well be intrinsically more equitable than high ones, because including wasteful buys crowds out resources that would otherwise be spent on cost-effective services, which usually benefit the poor. This piece has provided a menu of options that can be used as alternatives to GDP-based thresholds. The primary audience for this piece has been national decision-makers, but our recommendations have ramifications for the global health community. Development partners (DPs) should consider supporting countries in the challenging estimation of locally relevant CETs, working with research institutions with this expertise. The use of CETs to inform resource decisions by DPs has not been widely researched (Drake, 2014; Morton et al., 2018). Should DPs use a single global threshold (as most countries do) or rely on country-specific thresholds, whether estimated by them or by the countries in question? This is a contentious issue. Locally estimated CETs reflect local health opportunity costs, which will be important for countries in transition with increasing co-financing from domestic resources (Silverman, 2018). This will raise consistency issues if different payers adopt different CETs, or ones that conflict with those preferred by recipient countries. For example, the Global Fund affords priority to the fight against AIDS, tuberculosis and malaria, but recipient countries may apply different threshold values to the same programmes. On the other hand, using country thresholds would mean that DPs apply different decision rules to different recipient countries. Opportunity cost-based CETs vary systematically with GDP per capita (health opportunity costs are higher where incomes are lower), so using national thresholds may mean that an intervention covered in a middle-income country is not covered in a low-income one, which may be politically challenging. Conversely, it may also signal to DPs that spending in low-income countries is more impactful, as a DALY can be averted or a QALY gained at lower cost. The matter plainly needs further thought and investigation.
Reviewer report (Tharani Loganathan, Centre for Epidemiology and Evidence-based Practice, Department of Social and Preventive Medicine, University of Malaya, Kuala Lumpur, Malaysia):
Setting a cost-effectiveness threshold is crucial for making the right decisions on allocating limited national resources. Many nations, especially LMICs, depend on the 1-3 times GDP per capita threshold for its ease of use and standardisation. This paper reviews the current arguments and details five options to replace GDP-based thresholds. At the end of the day, priority setting is a national decision, and decision rules must be made at the national level, using national data and capacity, while being transparent enough for policy-makers and non-economists to apply with trust. Thus the tools should be simple, relevant and prioritise the health system goals of health maximisation and equity while staying within budget. As stated, this article is written for an audience of national decision-makers but also considers the global health community.
Does the article adequately reference differing views and opinions? Yes
Are all factual statements correct, and are statements and arguments made adequately supported by citations? Yes
Is the Open Letter written in accessible language? Yes
Where applicable, are recommendations and next steps explained clearly for others to follow?
2020-12-03T09:01:41.186Z
2020-11-30T00:00:00.000
{ "year": 2020, "sha1": "cd45896da4870520ad5beaca7351ce45c78c10c1", "oa_license": "CCBY", "oa_url": "https://gatesopenresearch.org/articles/4-176/v1/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c56d8dee48869c470c37db814345695a68c1096f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Medicine", "Economics" ] }
59564425
pes2o/s2orc
v3-fos-license
Design of Learning Resource Management Module in Personalized Intelligent Learning System: This paper first presents the domestic and foreign research status of personalized learning systems and then stresses that the most important personality traits of learners are learning interest and learning style. On the basis of summarizing the research status of personalized learning systems, this paper offers a new design scheme, in which vector models and style models are designed for learning resources, treating textual and non-textual resources separately. Finally, the paper elaborates the modeling methods for these models.
Introduction
Personalized learning systems, which develop from the pattern of the Intelligent Tutoring System (ITS), have become one of the significant research hotspots in intelligent education. Early studies of personalized learning systems were mostly undertaken in universities and a few research institutions. In China, Professor Yu Shengquan of Beijing Normal University works on adaptive learning systems; a team led by Professor Wang Lu of Capital Normal University studies the development of a Personalized Courseware Generation System; and Professor Zhao Wei and Doctor Jiang Qiang of Northeast Normal University study adaptive learning systems. Studies abroad include InterBook (by Brusilovsky in the United States), ELM-ART (a joint study by Weber in Germany and Brusilovsky in the United States), and INSPIRE (a hypermedia system for personalized education designed and developed by Papanikolaou and colleagues at the University of Athens).
As for learners' personalities, learning interest and learning style are the two dominant features that affect learning efficiency. Most existing studies are based on a single learner characteristic, while studies in which both learning interest and learning style are involved in user modeling are fewer. Regarding learning resources, some systems only manage textual learning materials, that is, they offer resources without personalized recommendation; some achieve personalized recommendation to a degree, but resource matching is burdensome or the matching result is imperfect; and some studies only build text models of learning resources, without considering multimedia resources or matching models to learners' learning styles.
Therefore, this paper offers a new scheme. It focuses on building separate models for the two key personality traits, namely learners' learning interest and learning style; then discovering effective information from learners' learning behavior so as to update their interest models and learning style models in a timely way; next, determining the learning style models of learning resources on the basis of learners' learning styles and their evaluations of resources recommended by the system; and finally, recommending learning resources to learners through appropriate recommendation technology so as to realize personalized learning more efficiently.
Models are built for learning resources at the textual and non-textual level. 1) For textual resources, this paper first carries out a series of pre-processing operations such as Chinese word segmentation, word frequency statistics and synonym combination. After these pre-processing operations, an N-dimensional eigenvector (feature vector) model is created for every text.
The textual vector space model is adopted to represent the eigenvector model so as to precisely express the original information of the text and effectively reduce the complexity of subsequent calculation. 2) For non-textual resources, on the basis of the key words and the description of the resource given by teachers, a corresponding N-dimensional vector space model is built for the learning resource. 3) According to learners' learning styles and the evaluations they give in the process of learning, a learning style model is built for each learning resource. Therefore, this paper centers on building two models for learning resources, namely eigenvector models and learning style models. Eigenvector models are divided into textual and non-textual ones.
Construction of Textual Learning Resource Models
In the personalized learning system, a textual vector space model (VSM) is used to build the eigenvector model of textual learning resources. The method and procedures are as follows:
1) Word segmentation is performed on all textual learning resources and the frequency of every key word is counted. The vector space model can be written as Resource(document) = {(k_1, θ_1), (k_2, θ_2), ..., (k_n, θ_n)}, 1 ≤ i ≤ n, where "document" denotes the text document, k_i represents the ith key word in the document, and θ_i stands for the frequency with which the ith key word occurs in the document.
2) Since the title of a textual resource directly reflects crucial information about the document, the system examines all key words in the title. If the title contains m key words, then each k_i (0 ≤ i ≤ m) is examined. If k_i appears in the eigenvector model Resource(document), proceed to step 3; otherwise, proceed to step 4.
3) If k_i appears in the eigenvector model Resource(document) and occupies the ith position of the vector, then set θ_i = θ_i + 3×c, where c is the ratio of the number of times the key word occurs in the title of the document to the number of words in the document.
4) If k_i does not occur in the eigenvector model Resource(document), the dimension of the vector is increased by one (n = n + 1). The updated model is Resource(document) = {(k_1, θ_1), (k_2, θ_2), ..., (k_{n+1}, θ_{n+1})}, where k_{n+1} = k_i and θ_{n+1} = 3×c, with c defined as the ratio of the number of times the key word occurs in the title of the document to the number of words in the document.
5) Normalization is then applied to all key words, where M refers to the total number of documents and N_i represents the number of documents in which the ith key word appears.
To avoid overly large eigenvector models and increased complexity of subsequent calculation, the key words of each textual learning resource are sorted in descending order of importance, and the first 20 key words are taken as the final eigenvector model of the learning resource. The eigenvector model of each learning resource is generated automatically by the system when the resource is uploaded by teachers, and it is stored in the knowledge base together with the learning resource.
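The construction above can be summarised in a short sketch. Since the extracted text does not give the Chinese word segmentation routine, the synonym-combination step or the exact normalisation formula, the sketch below substitutes a whitespace tokeniser and an assumed TF-IDF-style weight in step 5; it is illustrative only, not the authors' implementation.

```python
# Illustrative sketch of the textual resource model described above. The
# tokeniser and the IDF-style normalisation are stand-ins (assumptions), since
# the original segmentation and normalisation formula are not reproduced here.
from collections import Counter
from math import log

def build_resource_vector(title, body, corpus_doc_freq, corpus_size, top_k=20):
    """Build the eigenvector (feature vector) model of one textual resource."""
    words = body.split()                        # stand-in for Chinese word segmentation
    theta = Counter(words)                      # step 1: raw key-word frequencies
    n_words = max(len(words), 1)
    title_words = title.split()
    for kw in set(title_words):                 # steps 2-4: title boost, theta_i += 3*c
        c = title_words.count(kw) / n_words     # c: title occurrences / document length
        theta[kw] = theta.get(kw, 0) + 3 * c    # adds a new dimension if kw was absent
    weighted = {                                # step 5: assumed TF-IDF-style weighting
        kw: freq * log(corpus_size / (1 + corpus_doc_freq.get(kw, 0)))
        for kw, freq in theta.items()
    }
    # keep the 20 most important key words as the final eigenvector model
    return dict(sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)[:top_k])
```

The non-textual variant described next differs only in that the tokenised text comes from the teacher-supplied description rather than the resource itself.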
Construction of Non-textual Learning Resource Models
For non-textual learning resources, the vector space model (VSM) is also used to build an eigenvector model. The method and procedures are similar to those for textual learning resources; the difference is that the model of a non-textual resource is built from the resource description, which teachers upload together with the resource. The procedures are as follows:
1) Word segmentation is performed on the descriptions of all non-textual learning resources and the frequency of every key word is counted. The vector space model is: Resource(media) = {(k_1, θ_1), (k_2, θ_2), ..., (k_n, θ_n)}, 1 ≤ i ≤ n (4), where "media" refers to a multimedia document, k_i represents the ith key word of the document, and θ_i stands for the frequency with which the ith key word occurs in the multimedia document.
2) Since the titles and key words of non-textual learning resources directly reflect critical information about them, the system examines both the key words in the title and the key words supplied with the resource. If the title of the multimedia document "media" contains m key words, then each k_i (0 ≤ i ≤ m) is examined. If k_i appears in the eigenvector model Resource(media), proceed to step 3; otherwise, proceed to step 4.
3) If k_i appears in the eigenvector model Resource(media) and occupies the ith position of the vector, then set θ_i = θ_i + 3×c, where c is the ratio of the number of times the key word occurs in the title of the resource description to the number of words in the description.
4) If k_i does not occur in the eigenvector model Resource(media), the dimension of the vector is increased by one (n = n + 1). The updated model is Resource(media) = {(k_1, θ_1), (k_2, θ_2), ..., (k_{n+1}, θ_{n+1})} (5), where k_{n+1} = k_i and θ_{n+1} = 3×c, with c defined as the ratio of the number of times the key word occurs in the title of the multimedia document to the number of words of the document.
5) Normalization is then applied to all key words, where M refers to the total number of documents described by non-textual resources and N_i represents the number of documents in which the ith key word appears.
To avoid overly large eigenvector models and increased complexity of subsequent calculation, the key words of each non-textual learning resource are sorted in descending order of importance, and the first 20 key words are taken as the final eigenvector model of the learning resource.
Construction of Learning Style Models of Learning Resources
The concrete measures are as follows. When a learning resource is uploaded, its style model is empty. During initial operation, the system recommends learning resources on the basis of learners' learning interests. After learning with these resources, learners give a rating to the resources recommended by the system. The system then selects the learning style models of learners with high satisfaction to build the corresponding style models of the learning resources. To meet the implementation needs of the system, a two-dimensional vector space model is adopted for the style model in this paper. The style model consists of four dimensions: information input (visual type/verbal type), information processing (active type/passive type), content understanding (abstract type/specific type), and perception (reasonable type/intuitional type). Every dimension contains three values. For instance, the information input dimension includes the two probability values of the visual and verbal types, as well as a flag bit. The flag bit is used to improve the match between learning style and learning resource style when personalized recommendation is carried out.
When the flag bit is "1", it indicates a bias toward the former style type of the dimension (for example, the visual type); when the flag bit is "-1", it indicates a bias toward the latter style type of the dimension (for example, the verbal type); and when the flag bit is "0", it indicates the middle, balanced type. The specific procedures for building a learning resource style model are:
1) In the beginning, the style model of a learning resource is empty.
2) In the process of learning, the system provides five evaluation buttons to learners: quite satisfactory, satisfactory, general, dissatisfactory and quite dissatisfactory. For each document, the learning style models of learners whose evaluations are "quite satisfactory" or "satisfactory" are selected to help update the style model of the learning resource. The value of every dimension is calculated from these models, where m refers to the number of learners whose evaluation is quite satisfactory, n stands for the number of learners whose evaluation is satisfactory, P(media) represents the probability value of a given type in a dimension, P_i indicates the probability value of the corresponding type in the learning style of a learner whose evaluation is quite satisfactory, and P_j signifies the probability value of the corresponding type in the learning style of a learner whose evaluation is satisfactory.
3) With the dimension values given by step 2, the resource style model should become stable after a period of testing.
4) Normalization is applied to P_i1 and P_i2 (1 ≤ i ≤ 4) of every dimension; the value Log_i of the flag bit is determined from P_i1 and P_i2, and the resource style model is finally obtained (a worked sketch of this update step is given below).
Conclusion
Personalized intelligent learning systems have become one of the research hotspots in intelligent education, and findings in this area play a vital role in personalized teaching. On the basis of learners' personalities, such a system voluntarily recommends learning resources that fit learners' practical requirements, helping to improve learning efficiency and quality. Since learning interest and learning style are the two principal features that affect learning efficiency, this paper focuses on building resource models and style models for learning resources; then updating the learning style models of learning resources on the basis of learners' learning styles and their evaluations of resources recommended by the system; and finally, recommending learning resources to learners through appropriate recommendation technology, thereby achieving personalized teaching more efficiently and realizing the teaching philosophy of "teaching students in accordance with their aptitude".
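As a supplement to the style-model update procedure described above (the aggregation formula itself did not survive extraction), the sketch below assumes a weighted average of the style probabilities of "quite satisfactory" and "satisfactory" learners, followed by the normalisation and flag-bit assignment of step 4. The weights (2 and 1) and the flag margin are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of updating a learning resource style model from learner
# evaluations. The weighting scheme and flag margin are assumptions.
DIMENSIONS = ["input", "processing", "understanding", "perception"]

def update_style_model(quite_satisfied, satisfied, flag_margin=0.1):
    """quite_satisfied / satisfied: lists of learner style models, each mapping a
    dimension name to a (p_first_type, p_second_type) pair of probabilities."""
    if not (quite_satisfied or satisfied):
        return {}                                        # step 1: model stays empty
    w_sum = 2 * len(quite_satisfied) + len(satisfied)    # assumed weights: 2 and 1
    model = {}
    for dim in DIMENSIONS:
        p1 = (2 * sum(l[dim][0] for l in quite_satisfied)
              + sum(l[dim][0] for l in satisfied)) / w_sum
        p2 = (2 * sum(l[dim][1] for l in quite_satisfied)
              + sum(l[dim][1] for l in satisfied)) / w_sum
        total = (p1 + p2) or 1.0                         # step 4: normalisation
        p1, p2 = p1 / total, p2 / total
        # flag bit: 1 = bias to the first type, -1 = bias to the second, 0 = balanced
        flag = 1 if p1 - p2 > flag_margin else (-1 if p2 - p1 > flag_margin else 0)
        model[dim] = (round(p1, 3), round(p2, 3), flag)
    return model
```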
2019-02-05T19:34:28.750Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "806123f1e14d3882dfb62fa244e55f3269f09d7e", "oa_license": "CCBYNC", "oa_url": "https://download.atlantis-press.com/article/25906382.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "61f7040030c562f871cc13f91919d04172fe3b63", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
269984535
pes2o/s2orc
v3-fos-license
Exploring the mechanism of 6-Methoxydihydrosanguinarine in the treatment of lung adenocarcinoma based on network pharmacology, molecular docking and experimental investigation
Background: 6-Methoxydihydrosanguinarine (6-MDS) has shown promising potential in fighting against a variety of malignancies. Yet, its anti-lung adenocarcinoma (LUAD) effect and the underlying mechanism remain largely unexplored. This study sought to explore the targets and the probable mechanism of 6-MDS in LUAD through network pharmacology and experimental validation.
Methods: The proliferative activity of the human LUAD cell line A549 was evaluated by Cell Counting Kit-8 (CCK8) assay. LUAD-related targets and potential targets of 6-MDS were obtained from databases. Venn analysis was performed on the 6-MDS target genes and the LUAD-related genes to obtain potential target genes of 6-MDS in the treatment of LUAD. The Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database was utilized to perform a protein-protein interaction (PPI) analysis, which was then visualized by Cytoscape. The hub genes in the network were singled out by CytoHubba. Metascape was employed for GO and KEGG enrichment analyses. Molecular docking was carried out using AutoDock Vina 4.2 software. Gene expression levels and overall survival of hub genes were validated using the GEPIA database. Protein expression levels and promoter methylation levels of hub genes were confirmed using the UALCAN database. The TIMER database was used to evaluate the association between the expression of hub genes and the abundance of infiltrating immune cells. Furthermore, correlation analysis of hub gene expression with immune subtypes of LUAD was performed using the TISIDB database. Finally, the results of the network pharmacology analysis were validated by qPCR.
Results: In vitro experiments revealed that 6-MDS significantly inhibited the growth of LUAD cells. A total of 33 potential targets of 6-MDS in LUAD were obtained by intersecting the LUAD-related targets with the 6-MDS targets. Utilizing CytoHubba, a network analysis tool, the top 10 genes with the highest centrality measures were pinpointed: MMP9, CDK1, TYMS, CCNA2, ERBB2, CHEK1, KIF11, AURKB, PLK1 and TTK. KEGG enrichment analysis suggested that these 10 hub genes are involved in the cell cycle signaling pathway, indicating that 6-MDS may mainly inhibit the occurrence of LUAD by affecting the cell cycle. Molecular docking analysis revealed that, with the exception of AURKB, the binding energies between 6-MDS and the hub proteins were all below −6 kcal/mol, indicating that the remaining 9 targets bind strongly to 6-MDS. These results were corroborated through assessments of mRNA expression levels, protein expression levels, overall survival, promoter methylation levels, immune subtypes and immune infiltration. Furthermore, qPCR results indicated that 6-MDS significantly decreased the mRNA levels of CDK1, CHEK1, KIF11, PLK1 and TTK.
Conclusions: According to our findings, 6-MDS could possibly serve as a promising option for the treatment of LUAD. Further investigations in live animal models are necessary to confirm its potential in fighting cancer and to delve into the mechanisms at play.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12906-024-04497-z.
Introduction
Lung cancer, composed of approximately 85% non-small-cell lung cancer (NSCLC) and 15% small-cell lung cancer (SCLC), is one of the most prevalent malignant cancers worldwide, causing over 1.4 million deaths each year [1]. According to the Global Cancer Statistics 2022 report published in 2024, with almost 2.5 million new cases and over 1.8 million deaths worldwide, lung cancer was the leading cause of cancer morbidity and mortality in 2022 [2]. Another study estimates that there will be 3.8 million incident cases and 3.2 million deaths globally due to lung cancer in 2050 [3]. Lung adenocarcinoma (LUAD) is the most common subtype of NSCLC [4,5]. Despite improvements in chemotherapy, radiotherapy and surgery, the prognosis of NSCLC remains poor, and the five-year survival rate is only approximately 18% [6]. Patients with advanced-stage disease are treated with chemotherapeutic medications such as platinum agents [7]. However, the development of resistance to chemotherapeutic medications and the occurrence of adverse responses to these treatments have become major challenges in modern oncology [8]. The mechanisms underlying resistance to chemotherapeutic medications are multifactorial. It has been reported that cisplatin resistance often arises from a cellular defense mechanism that reduces the ability to mediate apoptosis, enhances the repair of DNA damage, alters cell cycle checkpoints and disrupts cytoskeleton assembly [9]. Nevertheless, the exact mechanisms of resistance to chemotherapeutic medications remain largely unclear [10]. Meanwhile, targeted therapy and immunotherapy have been developed to overcome these problems, but they face acquired resistance, poor therapeutic response and systemic immune dysfunction [11,12]. The clinical outcomes and effects of neoadjuvant therapy (including chemotherapy/radiotherapy, targeted therapy and immunotherapy) for NSCLC are still controversial owing to their respective advantages and disadvantages [13]. Considering the existing controversy concerning the efficacy of current therapies for patients with NSCLC, the search for effective, low-toxicity anti-cancer drugs and new drug targets has become a major research direction in recent years.
Traditional Chinese medicine (TCM) has long been utilized as a complementary treatment for various types of cancer, including lung cancer [14]. TCM is characterized by the use of medicines derived from natural herbs rather than chemical synthesis [15]. These herbal drugs have low toxicity and exert anti-cancer effects through a variety of intricate mechanisms [16]. Therefore, the active ingredients extracted from Chinese herbs have become a hot spot of global research [17]. Macleaya cordata (Chinese name "Bo-luo-hui") is a perennial herb that belongs to the Papaveraceae family and is typically prescribed as a traditional antibacterial medicine; its effects and usage were well documented in Ben-Cao-Shi-Yi, a Chinese encyclopedia of botany and medicine from the early Tang dynasty [18]. According to the literature, it has significant therapeutic effects on ulcers and snake and insect bites, exhibits anti-tumor activity, and improves liver function [19]. The main components of Macleaya cordata extract are alkaloids. More than 70 alkaloids have been reported to be isolated and identified from Macleaya cordata, and 6-Methoxydihydrosanguinarine (6-MDS) is one of its isoquinoline alkaloids. It has been reported that 6-MDS has antimicrobial activity, with significant inhibitory activity against Staphylococcus aureus [20]. In addition, 6-MDS can inhibit the proliferation and induce apoptosis of HT29 and Hep G2 cells, with IC50 values of 3.8 ± 0.2 and 5.0 ± 0.2 µM, respectively [21,22]. The latest research showed that 6-MDS can induce apoptosis and autophagy of breast cancer MCF-7 cells by inhibiting the PI3K/AKT/mTOR signaling pathway through ROS accumulation, indicating great potential in cancer treatment [20]. Also, 6-MDS exhibits cytotoxicity and sensitizes hepatocellular carcinoma cells to TRAIL-induced apoptosis through ROS-mediated upregulation of DR5 [23]. However, its anti-LUAD effect and its mechanism have not yet been reported.
Traditional studies often focus on a single gene or target, ignoring the complexity and systemic nature of biological processes. Chinese medicine, however, treats disease through multi-target and multi-pathway mechanisms of action. Therefore, it is necessary to use large-scale data to mine the known targets and pathways related to 6-MDS and LUAD. In recent years, network pharmacology has developed as a discipline that integrates systems biology, pharmacology and biological network analysis; it shifts the traditional search for a single target to comprehensive network analysis, emphasizes multi-component, multi-target and multi-pathway interactions, and is especially suitable for predicting the targets and possible mechanisms of natural compounds from traditional Chinese medicines and other plants [24,25]. In the present study, our team used network pharmacology and in vitro experiments to assess the molecular pathways behind the ability of 6-MDS to block LUAD proliferation.
Cell viability assays
Cells were seeded into 96-well flat-bottomed microtiter plates at 6000 cells per well. After 24 h of cultivation, they were treated with 6-MDS for 24 h or 48 h. Then, following the manufacturer's instructions, Cell Counting Kit-8 (CCK8, GK10001, GlpBio, USA) stock solution was added to each well and the plate was incubated at 37 °C for 60 min. The absorbance at 450 nm was measured using a microplate reader (Multiskan GO, Thermo Fisher Scientific, USA) to assess cell viability [27] (an illustrative IC50 fitting sketch is given at the end of this section).
Determining the targets of 6-MDS in LUAD
To identify potential target genes for 6-MDS treatment of LUAD, a Venn analysis was conducted on the 6-MDS target genes and the LUAD-related genes. The shared targets of the two gene sets were considered potential targets of 6-MDS in LUAD.
Enrichment analysis of the targets of 6-MDS in LUAD
Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were conducted using the online tool Metascape (Version 3.5.20240101, https://metascape.org) [38].
Protein-protein interaction (PPI) network construction and core target screening
The STRING database (Version 12.0, http://string-db.org/) [39], an online database for evaluating PPIs, was employed to explore the relationships between the proteins encoded by the potential targets of 6-MDS in LUAD. Subsequently, the results from the STRING database were exported to the CytoHubba plug-in of Cytoscape (Version 3.9.0, https://cytoscape.org/) [40] to visualize the PPI network. Finally, the maximal clique centrality (MCC) algorithm was used to screen the top 10 genes, which were deemed potential hub genes for subsequent investigation.
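The paper reports IC50 values derived from the CCK8 dose-response data described above, but the extracted text does not state the fitting procedure. The sketch below uses a four-parameter logistic fit, a common (assumed) choice, with invented placeholder data rather than the study's measurements.

```python
# Illustrative IC50 estimation from CCK8-style viability data. The fitting
# model is an assumption; concentrations and viabilities are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, log_ic50, hill):
    # log_ic50 is fitted on a log10 scale to keep the IC50 strictly positive
    return bottom + (top - bottom) / (1.0 + (conc / 10 ** log_ic50) ** hill)

conc = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])          # µg/mL (placeholder)
viability = np.array([92.0, 78.0, 55.0, 34.0, 18.0, 9.0])   # % of control (placeholder)

params, _ = curve_fit(four_param_logistic, conc, viability,
                      p0=[0.0, 100.0, 1.0, 1.0], maxfev=10000)
print(f"estimated IC50 ~ {10 ** params[2]:.2f} µg/mL")
```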
Molecular docking
Molecular docking is mainly used for the structural docking of small molecules with target proteins and for evaluating their binding affinity at defined binding sites. It is generally believed that the lower the energy of the ligand-receptor binding conformation, the more likely binding is to occur [41]. Small-molecule ligand files of the chemical components were downloaded from the PubChem database, imported into Chem3D software for spatial structure transformation and energy optimization, and exported in mol2 format. After processing with AutoDockTools 1.5.6 software, the files were saved in pdbqt format. The gene ID of each core target was then retrieved from the UniProt database and the corresponding PDB file was downloaded from the PDB database (http://www.rcsb.org/). After water molecule removal and ligand separation using PyMOL software, the resulting macromolecular receptor file of each core target was imported into AutoDockTools 1.5.6 for hydrogenation and saved in pdbqt format. Finally, with the help of AutoDock Vina 1.1.2 software, each core target was docked with the corresponding chemical component, with the binding energy serving as the docking evaluation index. The predicted value of the dissociation constant (Kd) was calculated from ΔG = RT·ln(Kd) [42], where ΔG is the binding energy, R is the ideal gas constant (kcal·K⁻¹·mol⁻¹) and T is the temperature (K), with the help of an online web server developed by NovoPro Bioscience Inc. (https://www.novoprolabs.com/tools/deltag2kd); a worked example of this conversion is sketched at the end of this section.
External validation of hub genes
Gene expression analysis of hub genes: The GEPIA (http://gepia.cancer-pku.cn/) "Expression on Box Plots" and "Pathological Stage Plot" modules were employed to explore the mRNA expression levels and pathological stages of the core targets in LUAD. A threshold of |log2FC| ≥ 1 and a significance level of p ≤ 0.01 were set for the analysis.
Protein expression analysis of core targets: UALCAN (https://ualcan.path.uab.edu/) [43] provides a protein expression analysis option using data from the Clinical Proteomic Tumor Analysis Consortium (CPTAC) and the International Cancer Proteogenome Consortium (ICPC) datasets [44]. Here, this database was utilized to compare the protein expression levels of the hub genes between LUAD tissues and corresponding normal tissues.
Overall survival analysis of hub genes: Using the GEPIA database, the potential relationship between the expression of the 9 hub genes and the OS of LUAD patients was evaluated, with p < 0.05 as the criterion.
DNA methylation level of hub genes: The UALCAN database was used to compare the DNA methylation levels of the hub genes between normal and LUAD tissues.
Correlation analysis of hub gene expression with immune cell infiltration: The online database TIMER (Version 2.0, http://timer.cistrome.org/) [45] was utilized to analyze the correlation between hub gene expression and the infiltration scores of B cells, CD4+ T cells, CD8+ T cells, neutrophils, macrophages and dendritic cells (DCs).
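As a worked example of the ΔG-to-Kd conversion mentioned in the molecular docking subsection, the snippet below applies ΔG = RT·ln(Kd) to the PLK1 binding energy reported later in the paper (−11.90 kcal/mol), assuming T = 298.15 K; the temperature used by the online server may differ, so the values in the paper's own table remain the reference.

```python
# Worked example: back-calculating Kd from a docking binding energy using
# deltaG = R * T * ln(Kd). The temperature is an assumption (298.15 K).
import math

R = 1.987e-3          # ideal gas constant, kcal·K^-1·mol^-1
T = 298.15            # assumed temperature, K
delta_g = -11.90      # kcal/mol, reported binding energy of 6-MDS with PLK1

kd = math.exp(delta_g / (R * T))   # dissociation constant in mol/L
print(f"Kd ~ {kd:.2e} M (~{kd * 1e9:.1f} nM)")
```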
Correlation analysis of core target expression with immune and molecular subtypes of LUAD: The TISIDB database (http://cis.hku.hk/TISIDB/index.php) is an online integrated repository portal collecting abundant human cancer datasets sourced from the TCGA database [46]. The association between the expression levels of the core targets and the immune or molecular subtypes of LUAD was assessed through the TISIDB database. Results were deemed statistically significant if the P-value was less than 0.05.
Real-time PCR: Total RNA of the cells was extracted using a SteadyPure Universal RNA Extraction Kit (AG21017, Accurate Biology, Hunan, China). Subsequently, 1 µg of RNA was reverse transcribed in a 20 µL reaction volume using the Evo M-MLV RT Mix Kit with gDNA Clean for qPCR Ver.2 (AG11728, Accurate Biology, Hunan, China). The resulting cDNA was then used for real-time PCR with the SYBR Green Pro Taq HS qPCR Kit (AG11701, Accurate Biology, Hunan, China). The specific primer sequences used in this study were as follows:
Statistical analysis: GraphPad Prism 5.0 (GraphPad Software, San Diego, CA, USA; RRID: SCR_002798) was applied to analyze the data. Results are presented as mean ± SD. Statistical differences between two samples were analyzed with Student's t test; * means P < 0.05, ** means P < 0.01 and *** means P < 0.001, with P < 0.05 considered statistically significant.
Results
6-MDS inhibits proliferation of A549 cells: To verify the anti-proliferative effect of 6-MDS on LUAD, the CCK8 assay was used to determine cell viability after treatment with 6-MDS for 24 h and 48 h, respectively. Gradually elevating the concentration of 6-MDS from 2 to 64 µg/mL resulted in a dose-dependent decrease in the survival rate of LUAD cells (Fig. 1A-B), indicating a pronounced inhibitory effect of 6-MDS on LUAD cell proliferation. Specifically, 6-MDS inhibited the growth of A549 cells with an IC50 of 5.22 ± 0.60 µM at 24 h and 2.90 ± 0.38 µM at 48 h.
Prediction of LUAD and 6-MDS targets: Using the TCGA-LUAD cohort in the GEPIA database, a total of 4,246 DEGs were identified and displayed through box plots, including 1,112 up-regulated and 3,134 down-regulated genes (Fig. 2A, Table S1). Besides, 9,642 LUAD targets were extracted from the three above-mentioned databases: DrugBank, GeneCards and OMIM (Table S2). Finally, 522 LUAD-related targets were obtained by Venn analysis (Table S3). As for the 6-MDS targets, a total of 379 target genes were obtained from PharmMapper, SuperPred, SwissTargetPrediction and TargetNet (Table S4).
Targets of 6-MDS in LUAD and functional enrichment analysis: 33 potential targets of 6-MDS in LUAD were identified by intersecting the LUAD-related targets with the 6-MDS targets (Fig. 2B, Table S5), and these were submitted to the Metascape database to explore their potential biological functions. KEGG enrichment analysis indicated that these genes were significantly enriched in the cell cycle, pyrimidine metabolism and transcriptional misregulation in cancer (Fig. 2C). Regarding GO analysis, the results demonstrated that these 33 potential targets were notably related to the cell cycle G2/M phase transition, response to wounding and response to amyloid-beta (Fig. 2D). These outcomes imply that 6-MDS may mainly inhibit the occurrence of LUAD by affecting the cell cycle.
PPI network construction: The 33 potential targets of 6-MDS in LUAD were submitted to the STRING database to obtain PPI data, which were then visualized and analyzed using Cytoscape software (Fig. 3A). Subsequently, the CytoHubba plug-in was employed to analyze the PPI network and identify the top ten hub genes, which are displayed in Table 1 and Fig. 3B.
Nodes in a darker shade of red indicate higher importance. The top 10 hub genes comprised MMP9, CDK1, TYMS, CCNA2, ERBB2, CHEK1, KIF11, AURKB, PLK1 and TTK. Further KEGG enrichment analysis indicated that these hub genes were primarily involved in the cell cycle and pathways-in-cancer signaling pathways (Fig. 3C).
Fig. 1 Effect of 6-MDS on the viability of A549 cells at 24 h (A) and 48 h (B), respectively
Molecular docking validation of 6-MDS and the 10 hub genes: In order to confirm these 10 core genes as key targets of 6-MDS in the treatment of LUAD, AutoDockTools 1.5.6 software was used to perform virtual molecular docking between 6-MDS and the 10 hub gene products. The PDB files of the target proteins were downloaded and protein details were collected from the PDB database (Table 2). The docking box parameters are listed in Table 3. The resulting binding energies, amino acid residues, hydrogen bonds and Kd values are shown in Table 4, and the docking results are visualized in Fig. 4. Notably, the binding energies between 6-MDS and the hub proteins were all below −6 kcal/mol except for AURKB, suggesting robust affinity between 6-MDS and 9 of the targets. Remarkably, the binding energy of 6-MDS with PLK1 was −11.90 kcal/mol, the lowest docking energy among the 10 hub genes. In summary, the results indicated that 6-MDS binds strongly to the core target proteins and may exert its anti-cancer effects through this binding.
External validation of the 9 hub genes
The mRNA expression levels of the 9 hub genes: Since AURKB showed no binding energy with 6-MDS, it was excluded from further validation. The results from GEPIA showed that the expression levels of the other 9 hub genes were much higher in LUAD tissues than in normal tissues (Fig. 5). Besides, the expression levels of CDK1, CCNA2, CHEK1, KIF11, PLK1 and TTK were significantly elevated with cancer progression in LUAD (Fig. 6).
The protein expression levels of the 9 hub genes: Consistent with the gene expression pattern, the protein expression levels of CDK1, TYMS and CCNA2 were significantly up-regulated in LUAD tissues compared with normal lung tissues (Fig. 7). However, the protein expression level of MMP9 was significantly down-regulated in LUAD tissue compared with normal lung tissues, contrary to its gene expression pattern, and this needs to be verified through subsequent experiments.
Overall survival analysis of the 9 hub genes: The results indicated that high expression of eight out of ten hub genes, including CDK1, TYMS, CCNA2, CHEK1, KIF11, PLK1 and TTK, was significantly associated with the OS of LUAD patients (Fig. 8).
Analysis of promoter methylation levels of hub genes: The dysregulation of DNA methylation has been implicated in the development of cancer and is used for cancer diagnosis and therapy [47,48]. Therefore, the DNA methylation levels of the 9 hub genes were compared between normal and cancer tissues using the UALCAN database. The results revealed that the promoter methylation levels of TYMS, ERBB2, CHEK1, KIF11, PLK1 and TTK were significantly decreased in LUAD tissues compared with normal tissues (Fig. 9).
Immune cell infiltration of hub genes: The effects of the 9 hub genes on the immunological milieu of tumors were investigated by assessing the association of their expression with the degree of immune cell infiltration. As shown in Fig. 10,
the expression of CCNA2, CDK1 and TTK was positively correlated with the infiltration of CD8+ T cells and neutrophils, while it was negatively
Correlation of hub gene expression with immune subtypes: The TISIDB online tool was utilized to analyze the relationship between the expression of the 9 hub genes and the LUAD immune subtypes. The results obtained from TISIDB indicated that the expression levels of the 9 hub genes were significantly associated with different immune subtypes in LUAD (Fig. 11).
6-MDS down-regulated the mRNA expression levels of target genes: Since CDK1, CHEK1, KIF11, PLK1 and TTK were significantly elevated with cancer progression in LUAD and exhibited excellent binding performance with 6-MDS, they were chosen for further validation. The qPCR results indicated that after intervention with 5 µM 6-MDS for 24 h, the mRNA expression of CDK1, CHEK1, KIF11, PLK1 and TTK decreased significantly compared with the untreated group (P < 0.05) (Fig. 12A-E).
Discussion
Lung cancer is the leading cause of cancer mortality worldwide. Lung adenocarcinoma (LUAD) is the type of non-small cell lung cancer (NSCLC) with the highest prevalence. Despite advancements in targeted therapy and immunotherapy, the overall survival of LUAD patients remains discouraging, largely because of metastasis [49]. Natural products have recently garnered significant interest owing to their potential anti-cancer effects, which could pave the way for the development of innovative medications. 6-MDS is a natural benzophenanthridine alkaloid [50] that has shown promising anti-cancer effects. However, whether 6-MDS exhibits anti-cancer activity against LUAD, and the underlying pharmacological mechanism, require further study.
In recent decades, the integration of network-based pharmacology and computer-assisted drug design technology has gained traction in uncovering the intricate workings of drugs, emerging as a potent approach in pharmaceutical investigations [51-55]. Network pharmacology enables the anticipation of disease targets affected by drugs, while molecular docking facilitates the examination of drug-target interactions. In this study, a combination of bioinformatic analysis, network pharmacology and molecular docking was used to explore the possible mechanisms of 6-MDS treatment for LUAD. Through network pharmacology, we identified 10 core genes that may be potential targets of 6-MDS in the treatment of LUAD. KEGG enrichment analysis revealed that these genes were mainly enriched in the cell cycle pathway. Cell cycle regulation is orchestrated by a complex network of interactions between proteins, enzymes, cytokines and cell cycle signaling pathways, and is vital for cell proliferation, growth and repair. It is well established that the occurrence, development and metastasis of tumors are intricately linked with cell cycle regulation [56]. These findings suggest that 6-MDS may mainly inhibit the occurrence of LUAD by affecting the cell cycle.
The molecular docking analysis revealed that nine targets had good binding performance with 6-MDS. Notably, the binding energy of 6-MDS with PLK1 was −11.90 kcal/mol, the lowest docking energy among the targets examined. Besides, the binding energies of CDK1, CHEK1, KIF11 and TTK were all lower than −8.7 kcal/mol, indicating strong binding capability. In addition, the expression levels of CDK1, CHEK1, KIF11, PLK1 and TTK were significantly elevated with cancer progression in LUAD, and they also showed a significant correlation with the OS of LUAD patients.
Polo-like kinase 1 (PLK1) is crucial for the normal progression of mitosis. Significant upregulation of PLK1 has been found in various human cancers and is associated with poor prognosis. Many studies have shown that inhibition of PLK1 can kill cancer cells by interfering with multiple stages of mitosis. In the case of LUAD, PLK1 is highly expressed and positively associated with advanced disease stage and poor survival outcomes. PLK1 also plays a critical role in LUAD progression by regulating necroptosis and immune infiltration, and may serve as a potential therapeutic target for immunotherapy [57,58].
Cyclin-dependent kinases (CDKs) are serine/threonine kinases proposed as promising candidate targets for cancer treatment [59]. Deregulation of CDK1 has been shown to be closely associated with tumorigenesis. CDK1 activation plays a critical role in a wide range of cancer types, and CDK1 phosphorylation of its many substrates greatly influences their function in tumorigenesis. CDKs complexed with cyclins play a critical role in cell cycle progression. CDK1 is a potential prognostic biomarker and target for lung cancer; its activity is critical for JAK/STAT3 signaling activation, and its inhibition can suppress lung cancer. In addition, an in vitro study of LUAD cells showed that reduced CDK1 activity led to cell cycle arrest and promoted apoptosis [60]. Hence, CDK1 may serve as a potential biomarker and therapeutic target for LUAD [61].
Checkpoint kinase 1 (CHEK1, also known as CHK1) is a conserved serine/threonine kinase that plays an important role in replication fork stability and the DNA damage response [62]. In a study of TP53-mutant NSCLC tumor cells, inhibiting the expression of CHEK1 was found to significantly enhance the sensitivity of tumor cells to chemotherapy [63,64]. In addition, promoter methylation, amplification and miRNA regulation in patients with lung adenocarcinoma may lead to upregulation of the CHEK1 gene, which may serve as a marker for predicting the survival of patients with lung adenocarcinoma [65].
The kinesin motor protein superfamily consists of 45 family members, among which KIF11 functions as a motor protein in mitosis. KIF11 is essential for LUAD cell proliferation and metastasis, and it may serve as an independent prognostic factor as well as a promising therapeutic target for LUAD patients [66].
TTK, also known as Monopolar spindle 1 (Mps1), is a crucial modulator of the spindle assembly checkpoint, which is responsible for ensuring accurate chromosome segregation. Several studies have found that TTK may be related to the occurrence and development of lung cancer. Zheng et al.
showed that the expression of TTK was higher in lung adenocarcinoma and squamous cell carcinoma than in normal lung tissue and was related to poor prognosis in patients with lung adenocarcinoma [67]. When TTK was knocked out in A549 cells, cell proliferation, migration and tumorigenesis were inhibited [68]. Therefore, TTK may be a promising prognostic biomarker for LUAD and is worthy of further investigation [69]. Furthermore, qPCR results indicated that after intervention with 5 µM 6-MDS for 24 h, the mRNA expression of CDK1, CHEK1, KIF11, PLK1 and TTK decreased significantly compared with the untreated group. In conclusion, these 5 key targets, which bind well to 6-MDS, may play an important role in cancer progression, preliminarily supporting the activity of 6-MDS against LUAD at the molecular level.
Our present study has certain limitations. Firstly, this study reveals possible targets and pathways of 6-MDS, but our understanding of the exact mechanism by which 6-MDS exerts its anti-tumor properties is still quite limited. Deeper exploration of the downstream signaling pathways and molecular mechanisms that drive the activity of 6-MDS could provide valuable insights. Secondly, more direct binding experiments, such as surface plasmon resonance (SPR) or cellular thermal shift assay (CETSA), should be carried out to confirm that 6-MDS does indeed act on lung adenocarcinoma through the above-mentioned targets. Furthermore, the incorporation of animal models or clinical trials is imperative to establish stronger evidence regarding the efficacy and safety of 6-MDS for LUAD treatment. Nevertheless, current limitations in resources and time confined us to performing only fundamental experiments. Thirdly, it should be noted that the limited water solubility and fast metabolism of 6-MDS may hinder its medical applications, despite its encouraging anti-cancer properties. Therefore, innovative approaches such as nanocarriers need to be developed and explored to enhance the bioavailability of 6-MDS. The next phase of our research will focus on this particular area.
Conclusions
To sum up, a comprehensive evaluation of 6-MDS was performed to reveal its potential mechanism in the treatment of LUAD through network pharmacology, molecular docking and experimental validation. The results of this study suggest that 6-MDS might be a candidate for treating LUAD. More studies are needed.
Fig. 2 Identification of the DEGs of LUAD in the TCGA cohort and functional enrichment analysis. A Box plots visualizing the DEGs; B Venn diagram identifying 6-MDS-related targets for LUAD; C KEGG enrichment analysis of the 6-MDS-related targets for LUAD; D GO enrichment analysis of the 6-MDS-related targets for LUAD
Fig. 3 PPI network and KEGG analyses of the ten core genes. A PPI network visualized by Cytoscape; B Network of interactions of the top ten hub genes; C KEGG enrichment analysis of the top ten hub genes
Fig. 6 Stage diagram of hub gene mRNA expression levels and pathological stages in the GEPIA database
Fig. 7 Box plot of hub protein expression levels in the UALCAN database. Red represents tumor tissues and blue represents normal tissues. A CCNA2; B CDK1; C CHEK1; D ERBB2; E KIF11; F MMP9; G PLK1; H TTK; I TYMS
Fig. 11 Correlation of hub gene expression with immune subtypes in the TISIDB database. A CCNA2; B CDK1; C CHEK1; D ERBB2; E KIF11; F MMP9; G PLK1; H TTK; I TYMS
Table 1 The 10 hub genes identified using 4 different algorithms in the CytoHubba plug-in
Table 2 Details of the protein targets in the PDB database
Table 3 Grid docking parameters used in molecular docking
Table 4 Basic information on the molecular docking of 6-MDS and the target proteins
An Unusual Case of Subacute Appendicitis and Intestinal Spirochetosis Intestinal spirochetosis is a gastrointestinal infection with vague and inconsistent symptoms. Its presentation resembles that of several gastrointestinal diseases, such as inflammatory bowel disease and appendicitis. We present a case of a 27-year-old female with intestinal spirochetosis who was later found to have subacute appendicitis. Further understanding of the disease is needed, and a set of criteria may have to be created for its management.

Introduction Intestinal spirochetosis (IS) is a gastrointestinal infection with an increasing global prevalence. However, the underlying pathophysiology of spirochetosis is not fully understood. Spirochetes are found in the digestive tracts of many species, including humans [1]. Additionally, the clinical presentation of the infection is usually vague and can be similar to other gastrointestinal infections and conditions, such as appendicitis, irritable bowel syndrome and inflammatory bowel disease [2]. The vague nature and presentation of IS highlight the clinical importance of recognizing the disease.

Case Presentation A 27-year-old female presented with gradually progressive severe abdominal pain of four months' duration. The pain radiated to the right lower quadrant (RLQ), had recently extended to the right groin, and was associated with minimal guarding and rebound tenderness. She also had nausea, vomiting and mild diarrhea. Furthermore, she reported a 3-4 kg weight loss over the past two months. There was no known previous medical or family history. The patient was afebrile and vitals were stable. Initial laboratory findings are provided in Table 1. An initial computed tomography (CT) scan, provided in Figure 1, showed multiple segmental thickening of the terminal ileum and distal ileal loops with surrounding inflammatory changes. Moreover, it was associated with reactive inflammation of the appendix, most likely secondary to an inflammatory/infectious process. An esophagogastroduodenoscopy (EGD) was performed and was unremarkable. A colonoscopy was then conducted, and a biopsy of the colonic mucosa was acquired, which revealed a basophilic, fringe-like, mildly thickened brush border, suggestive of spirochetosis, as illustrated in Figure 2. Periodic acid-Schiff (PAS) and silver stains showed small filamentous structures on the mucosal surface suggestive of IS. The patient was discharged on metronidazole for two weeks and the pain subsided.

FIGURE 1: Initial abdominal CT showing multiple segmental thickening of the small bowel wall (yellow arrow)

FIGURE 2: Colonic mucosa biopsy - a periodic acid-Schiff stain showing filamentous structures (black arrow) on the surface epithelium forming a thick bluish fringe

A few days after completion of the metronidazole course, the patient's RLQ pain returned. On re-examination, the abdomen was soft, with minimal guarding but positive rebound tenderness in the RLQ. Repeat laboratory findings are shown in Table 2. Repeat EGD and colonoscopy were both normal. A second CT, provided in Figure 3, revealed a dilated appendix with a thick hyper-enhancing wall.

FIGURE 3: CT abdomen pelvis with contrast showing a dilated appendix (red arrow)

A diagnosis of subacute appendicitis was made. The underlying IS was considered likely to be benign and not responsible for the patient's initial presentation. It was agreed by the general surgery team that management would be conservative and that the patient would be clinically reassessed for surgery on follow-up.
Three weeks later, there was no improvement in the RLQ pain. The patient was accepted for a laparoscopic appendectomy, which was performed without complications. The patient was discharged on postoperative day 1 and was advised to discontinue antibiotics, take analgesics as needed, and follow up in two weeks' time.

Discussion There are three phylogenetic groups within the Spirochaetaceae family. Among these, the Brachyspiraceae are fastidious anaerobic organisms, spread via the fecal-oral route, and are the group commonly found in IS [3]. Most cases tend to be asymptomatic; however, patients can present with chronic watery diarrhea, abdominal pain and, occasionally, hematochezia. It is hypothesized that the symptoms and clinical presentation are secondary to immune reactions elicited by penetration of spirochetes into the mucosal cells and by macrophage uptake [4]. However, in many cases the clinical presentation is vague and unrelated to the presence of intestinal spirochetes, and therefore requires further study. IS tends to involve the apical membrane of the colonic epithelium [5]. Therefore, diagnosis is made through histopathologic examination of a mucosal biopsy showing adhesion of spirochetes to the brush border mucosa. IS has been reported more commonly in developing countries, suggesting that diet and sanitation are possible factors contributing to the pathogenesis of the infection [5][6][7]. There is a prevalence of around 1.1%-5% of IS in developing countries [7]. Reported prevalences also include 32.6% in Australian Aboriginal people, 64.3% among otherwise healthy individuals in Indian villages, and 11.4%-26.7% in hospitalized and healthy people, respectively, in Oman. Spirochetosis has previously been suggested to occur in immunocompromised individuals infected with HIV; however, its prevalence is increasing in the general population [6]. IS masked by systemic or local bowel disease has been discussed previously [2]. Our patient presented with atypical symptoms of IS, masking the diagnosis of appendicitis. A similar presentation has previously been described in a 13-year-old boy with a differential diagnosis of IBD or infectious colitis who was later found to have IS [5]. The diagnosis of IS is challenging and may mask other diagnoses. Only two cases of appendiceal spirochetosis and two cases of colorectal spirochetosis have been reported in Saudi Arabia [4,8,9]. Our patient had nonspecific gastrointestinal symptoms indicating an ongoing inflammatory process; however, her laboratory results were unremarkable, and she was ultimately found to have subacute appendicitis. IS is also commonly found with concurrent gastrointestinal infections, usually involving enteric pathogens such as Helicobacter pylori, Shigella flexneri, and Enterobius vermicularis. Furthermore, IS can be an incidental finding in asymptomatic individuals [3,6]. Moreover, diagnostic procedures are often inconclusive. Colonoscopy may aid in the diagnosis, but findings are non-specific for IS, such as erythematous lesions, polypoid lesions, or normal findings in the majority of the mucosa, as seen in our patient [2]. Histopathology can vary from patient to patient, although a "fuzzy brush border" appearance seems to be the common characteristic [2]. Treatment depends on the presentation and severity of symptoms as well as any other underlying conditions or factors [1]. Antibiotic treatment is indicated in symptomatic cases, and metronidazole seems to be the preferred medication [1,6].
Conclusions This case is a novel example suggesting the lack of knowledge regarding the clinical significance of IS. The pathogenesis and pathophysiology of this infection remain unclear and poorly understood. Therefore, a set of guidelines or criteria may have to be put in place to provide clarity for IS diagnosis and treatment. Due to the numerous different clinical presentations of patients with an IS infection, physicians must be made aware of the different presentations. Understanding patients' immune and socioeconomic statuses may aid in gathering more data and understanding IS better in the future. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Quantitative Microbial Risk Assessment of Listeria monocytogenes and Enterohemorrhagic Escherichia coli in Yogurt Listeria monocytogenes can survive in yogurt stored at a refrigeration temperature. Enterohemorrhagic Escherichia coli (EHEC) has a strong acid resistance that can survive in the yogurt with a low pH. We estimated the risk of L. monocytogenes and EHEC due to yogurt consumption with @Risk. Predictive survival models for L. monocytogenes and EHEC in drinking and regular yogurt were developed at 4, 10, 17, 25, and 36 °C, and the survival of both pathogens in yogurt was predicted during distribution and storage at home. The average initial contamination level in drinking and regular yogurt was calculated to be −3.941 log CFU/g and −3.608 log CFU/g, respectively, and the contamination level of both LM and EHEC decreased in yogurt from the market to home. Mean values of the possibility of illness caused by EHEC were higher (drinking: 1.44 × 10−8; regular: 5.09 × 10−9) than L. monocytogenes (drinking: 1.91 × 10−15; regular: 2.87 × 10−16) in the susceptible population. Both pathogens had a positive correlation with the initial contamination level and consumption. These results show that the foodborne illness risk from L. monocytogenes and EHEC due to yogurt consumption is very low. However, controlling the initial contamination level of EHEC during yogurt manufacture should be emphasized. Introduction Yogurt is a dairy product fermented by Streptococcus thermophilus and Lactobacillus bulgaricus [1]. Yogurt provides probiotics known to be beneficial bacteria that can promote health. Worldwide, the consumption of probiotics and yogurt is increasing every year [2][3][4]. Pathogenic Escherichia coli (E. coli) are a group of facultative anaerobes that can cause diseases in healthy individuals with a combination of certain virulence factors, including adhesins, invasins, toxins, and capsules. Pathogenic E. coli are classified into six pathotypes based on clinical, epidemiological, and virulence traits: enteropathogenic E. coli (EPEC), enteroaggregative E. coli (EAEC), diffusely adherent E. coli (DAEC), enterotoxigenic E. coli (ETEC), enteroinvasive E. coli (EIEC) and enterohemorrhagic E. coli (EHEC) [5]. EPEC (60.5%) is the primary cause of pathogenic E. coli outbreaks in Korea, followed by ETEC (31.2%), EHEC (6.8%), and EIEC (1.5%) [6]. Among them, EHEC can cause diarrhea with a mechanism of attaching-effacing (A/E) lesions with only a low infectious dose (1-100 CFU) [7]. EHEC has strong acid resistance that can make it viable in food with a low pH [8]. Morgan et al. [9] reported 16 cases of E. coli O157:H7 Phage Type 49 due to the consumption of a locally produced yogurt occurring in the northwest of England in 1991. In a study by Cutrim et al. [10], E. coli O157:H7 was shown to survive for 10 days in both traditional inoculated yogurt and pre-hydrolyzed inoculated yogurt, whereas its survival increased to 22 days in lactose-free yogurt. The populations of E. coli O157:H7 decreased by only about 1.4 log CFU/g after 28 days in Greek-style yogurt [11]. Listeria monocytogenes (LM) are facultatively anaerobic opportunistic pathogens that can grow between 0 and 45 • C; optimal growth occurs at 30~37 • C [12]. It can grow at Prevalence and Initial Contamination Level in an Offline Market To derive prevalence (PR) data of LM and EHEC in yogurt by season and location, results of yogurt monitoring (195 drinking yogurts and 90 regular yogurts) were used [24]. 
LM and EHEC were identified with methods as described in the Korean Food Code [25]. The distribution of PR was fitted using Beta distribution (α, β), with α meaning "number of positive samples+1" and β meaning "number of total samples-number of positive samples +1" [26]. Initial contamination levels of LM and EHEC were estimated using the equation [Log (-ln(1-PR)/weight)] of Sanaa et al. [27]. Physicochemical and Microbiological Analyses of Yogurt Ten products of two types of yogurt (drinking and regular) were purchased from an offline market. The pH, water activity (Aw), total aerobic bacteria, coliform, and E. coli were measured. Briefly, 10 g of sample was aseptically placed in a stomacher bag with 90 mL of distilled water and homogenized with a stomacher (Interscience, Paris, France). The pH was measured with a pH meter (Orion TM Star A211, ThermoFisher Scientific Co., Waltham, MA, USA). The Aw of each sample (15 g) was measured in triplicate using a water activity meter (Rotronic HP23-AW-A, Rotronic AG, Bassersdorf, Switzerland). To measure total aerobic bacteria (AC), coliform, and E. coli (EC), 25 g of sample was homogenized with 225 mL of 0.1% sterile peptone water (BD, Sparks, MD, USA) and serially diluted 10-fold with 0.1% peptone water. After inoculating 1 mL aliquot of each dilution onto two or more sheets of 3M Petrifilm E. coli/Coliform Count Plate (3M corporation, St. Paul, MN, USA), AC and EC plates were incubated at 36 ± 1 • C for 48 h and 24 h, respectively. Strain Preparation An LM strain isolated from the gloves of a slaughterhouse worker [28] was stored in tryptic soy broth (TSB, MB cell, Seoul, Korea) containing 0.6% yeast extract with 20% glycerol (Duksan, South Korea) at −80 • C. After thawing at ambient temperature, 10 µL of LM inoculum was added into 10 mL of TSB containing 0.6% yeast extract and then cultured at 36 ± 1 • C for 24 h in a 140 rpm rotary shaker (VS-8480, Vision Scientific, Daejeon, Korea). E. coli (EHEC) strains (NCCP 13720, 13721) including E. coli O157:H7 (NCTC 12079) were obtained from the Ministry of Food and Drug Safety (MFDS) in Korea. After thawing frozen strains that were stored at −80 • C, they were cultured in the same way as described above. All strains were centrifuged at 4000 rpm for 10 min (VS-550, Vision Scientific, Daejeon, Korea) and the supernatant was removed. Pellets were harvested by centrifugation (4000 rpm for 10 min), washed with 10 mL of 0.1% peptone water, and resuspended with 0.1% peptone water to a final concentration of approximately 9.0 log CFU/mL. Sample Preparation and Inoculation For model development, the popularity of yogurt samples and results of physicochemical (high pH value) and microbiological analyses of yogurt were considered. Drinking and regular yogurt were purchased from an offline market (Seoul, Korea) and aseptically divided into 30 mL and 10 g, respectively, into 50 mL conical tubes (SPL Life Science Co., Daejeon, Seoul). LM and the cocktail of E. coli strains were independently inoculated into drinking (4~5 log CFU/g) and regular yogurts (5~6 log CFU/g). Each sample was then stored at 4, 10, 17, 25, and 36 • C until no colonies were detected for up to 21 days. At a specific time, each yogurt sample was homogenized with sterilized 0.1% peptone water for 120 s using a stomacher. 
Then a 1 mL aliquot of the homogenate was serially diluted ten-fold with 0.1% peptone water and spread onto PALCAM agar (Oxoid, Basingstoke, Hampshire, UK) for LM and EMB agar (Oxoid, Basingstoke, Hampshire, UK) for EHEC, which were incubated at 36 ± 1 °C for 24 h to analyze the change in pathogen populations.

Development of Primary and Secondary Model The Weibull model [29] (Equation (1)) and the GInaFiT V1.7 program [30] were used to develop the primary survival model in yogurt as a function of temperature. The delta value (time for the first decimal reduction) and p-value (shape of the curve) were then calculated:

log10 N(t) = log10 N0 − (t/delta)^p (1)

N0: log initial number of cells; t: time; delta: time for the first decimal reduction; p: shape (p > 1: concave downward curve, p < 1: concave upward curve, p = 1: log-linear).

From the results obtained through the primary predictive model, the secondary model was developed by applying the third-order polynomial model (Equation (2)) to the delta values of both LM and EHEC as a function of temperature (T):

delta = a0 + a1·T + a2·T^2 + a3·T^3 (2)

Validation To verify the applicability of the predictive model of LM, the delta value was obtained at temperatures not used for model development in this study, which were 7 °C for drinking yogurt and 13 °C for regular yogurt (interpolation). The predictive model of EHEC was verified with an enteropathogenic E. coli (EPEC) strain (extrapolation), which was detected in [24]. The root mean square error (RMSE; Equation (3)) [31] was used as a measure of applicability:

RMSE = sqrt( Σ(observed − predicted)^2 / n ) (3)

n: the total number of experimental values (values obtained from independent variables) or predicted values (values obtained from the developed survival model).

Development of Scenario from Market to Home The exposure assessment scenario for the risk assessment of yogurt was divided into three stages: "market storage", "transportation to home", and "home storage". The storage temperature of yogurt in the market was investigated for an offline market and used as an input variable in an Excel (Microsoft Excel 2019, Microsoft Corp., USA) spreadsheet. The PERT distribution was confirmed as the most suitable probability distribution model using @RISK 7.5 (Palisade Corp., Ithaca, NY, USA). The minimum, mode, and maximum values of storage temperature were 2.1, 7, and 9.7 °C, respectively. Storage time was also input based on the shelf-life of yogurt. The PERT distribution was confirmed as the most suitable model using @RISK 7.5 (Palisade Corp., Ithaca, NY, USA). The minimum, mode, and maximum values of storage time were 0, 240, and 312 h for drinking yogurt and 0, 240, and 480 h for regular yogurt, respectively. At the stage of transport from market to home, the PERT distribution was applied to transportation time and temperature according to Jung [32]; minimum (0.325 h, 10 °C), mode (0.984 h, 18 °C), and maximum (1.643 h, 25 °C) values of time and temperature were applied. According to data from the MFDS [33], 69.2% of respondents answered that the most frequent storage period for milk was 2-3 days at refrigeration temperature and the maximum storage period was 30 days or more. As a result, a RiskPert (0, 60, 720 h) distribution was input in the scenario and a RiskLogLogistic (−10.407, 13.616, 8.611) distribution was used as the home storage temperature [34].
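The scenario above is implemented by the authors in Excel/@RISK; the sketch below shows roughly how the same pieces (a PERT-sampled storage scenario feeding the Weibull primary model with a cubic delta(T) secondary model) could be chained in Python. The cubic coefficients, the fixed 5 °C home-storage temperature and the iteration count are assumptions for illustration, not fitted values from this study.

```python
# Minimal Python sketch of the survival model and scenario sampling described above.
# This is an illustration, not the authors' @RISK/Excel implementation: the cubic
# coefficients for delta(T) and the number of iterations are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)

def pert(a, b, c, size, lam=4.0):
    """Sample a (modified) PERT(min=a, mode=b, max=c) distribution via a scaled Beta."""
    alpha = 1 + lam * (b - a) / (c - a)
    beta = 1 + lam * (c - b) / (c - a)
    return a + (c - a) * rng.beta(alpha, beta, size)

def delta_hours(temp_c, coeffs=(80.0, -6.0, 0.2, -0.002)):
    """Equation (2): cubic polynomial for the delta value (h) vs. temperature (hypothetical coefficients)."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * temp_c + a2 * temp_c**2 + a3 * temp_c**3

def weibull_log_reduction(time_h, temp_c, p=1.0):
    """Equation (1): log10 reduction after time_h hours at temp_c (p = 1 gives log-linear decay)."""
    return (time_h / np.maximum(delta_hours(temp_c), 1e-6)) ** p

n = 10_000
log_n = np.full(n, -3.941)                    # initial level, log CFU/g (drinking yogurt)

# Market storage: PERT temperature (2.1, 7, 9.7 C) and PERT time (0, 240, 312 h).
log_n -= weibull_log_reduction(pert(0.0, 240.0, 312.0, n), pert(2.1, 7.0, 9.7, n))

# Transport to home: PERT time (0.325, 0.984, 1.643 h) and temperature (10, 18, 25 C).
log_n -= weibull_log_reduction(pert(0.325, 0.984, 1.643, n), pert(10, 18, 25, n))

# Home storage: PERT time (0, 60, 720 h) at an assumed 5 C (the paper uses a
# log-logistic home-temperature distribution here instead).
log_n -= weibull_log_reduction(pert(0.0, 60.0, 720.0, n), np.full(n, 5.0))

print("mean log CFU/g at consumption:", round(float(log_n.mean()), 3))
```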
Estimation of Consumption Data of Yogurt The appropriate probability distribution model for the consumption amount and intake rate of yogurt was confirmed using data from "Estimation of amount and frequency of consumption of 50 domestic livestock and processed livestock products" from the MFDS [35].

Hazard Characterization For hazard characterization, the exponential model was used as the dose-response model for LM [36] (Equation (4)) and the Beta-Poisson model [37] was used as the dose-response model for EHEC (Equation (5)):

P = 1 − exp(−r × N) (4)

P = 1 − (1 + N/β)^−α (5)

P: the probability of foodborne illness from intake of the pathogen; r: the probability that one cell of LM can cause disease (susceptible population: 1.06 × 10^−12, general population: 2.37 × 10^−14); N: the number of cells ingested through consumption; α, β: Beta-Poisson dose-response parameters for EHEC.

Risk Characterization To estimate the probability of foodborne illness per person per day from the intake of drinking and regular yogurt contaminated by LM or EHEC, the formulas and inputs of the exposure scenarios were written in an Excel spreadsheet. The risk was then calculated through a Monte Carlo simulation in @RISK. Median Latin hypercube sampling was used as the sampling type, and a random method was used for the generator seed. Finally, the correlation coefficient was calculated based on sensitivity analysis results to analyze the factors affecting the probability of occurrence of foodborne illness.

Statistical Analysis All experiments were repeated at least three times. All statistical analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA). To describe significant variations of delta values between LM and EHEC at the same temperature, a t-test was used. Differences were considered significant at p < 0.05.

Prevalence and Initial Contamination Level in On- and Offline Markets As a first step in the exposure assessment, initial contamination levels of LM and EHEC were analyzed for drinking yogurt (n = 195) and regular yogurt (n = 90) purchased from on- and offline markets in Korea. LM and EHEC were not detected in any samples [24]. The average contamination level was calculated using the equation [Log (−ln(1−PR)/weight)] by Sanaa et al. [27]. The average initial contamination level of both LM and EHEC was −3.941 log CFU/g in the drinking yogurt and −3.608 log CFU/g in the regular yogurt (Figure 1).
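As a rough sketch of the two calculations just described, the snippet below builds the Beta(α, β) prevalence distribution from the monitoring counts and applies the Sanaa et al. point estimate [Log(−ln(1−PR)/weight)], assuming the 25 g analytical portion from the methods; it is illustrative rather than the authors' spreadsheet.

```python
# Rough sketch (not the authors' spreadsheet) of the two steps described above:
# a Beta(alpha, beta) prevalence distribution from monitoring results and the
# Sanaa et al. point estimate of the initial contamination level.
import numpy as np

def prevalence_beta_params(positives, total):
    """Beta distribution parameters: alpha = positives + 1, beta = total - positives + 1."""
    return positives + 1, total - positives + 1

def initial_level_log_cfu_per_g(prevalence, sample_weight_g=25.0):
    """Initial contamination level, log CFU/g: log10(-ln(1 - PR) / weight)."""
    return np.log10(-np.log(1.0 - prevalence) / sample_weight_g)

# Drinking yogurt: 0 positives out of 195 samples; 25 g analytical portions.
alpha, beta = prevalence_beta_params(0, 195)       # Beta(1, 196)
rng = np.random.default_rng(0)
pr_draws = rng.beta(alpha, beta, 100_000)          # uncertainty about prevalence
levels = initial_level_log_cfu_per_g(pr_draws)

print(f"mean initial level: {levels.mean():.3f} log CFU/g")
```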
Development of Primary and Secondary Predictive Model The primary models of LM and EHEC in yogurt are shown in Figure 2. Secondary predictive models of the delta values for LM and EHEC, and their equations, are shown in Figure 3 and Table 1. The delta value corresponds to the time for the first decimal reduction of the surviving populations of LM and EHEC. Overall, the higher the temperature, the lower the delta value, indicating that survival of LM and EHEC is better in yogurt stored at refrigeration temperature. Lactic acid bacteria (LAB) activity in yogurt increases as the temperature increases; thus, the viability of LM and EHEC can be decreased. LAB can produce large amounts of organic acids and lower the pH value [38]. Some LAB can also produce bacteriocins and bacteriocin-like compounds to inhibit pathogens [39]. Temperature can affect the growth of LAB, and LAB isolated from Calabrian cheeses can inhibit the growth of LM in soft cheese [40]. LAB has the highest specific growth rate at 42-44 °C, the optimum growth temperature for LAB [41]. LAB starters can reduce the survival ability of EHEC in kimchi [42]. Bachrouri et al. [43] have reported that the viability of E. coli O157:H7 decreased as the temperature increased and that E. coli O157:H7 is more resistant to death than nonpathogenic E. coli at 4 and 8 °C. The survival ability of LM is drastically decreased at 15 °C, but not significantly changed at 3~12 °C [44]. This work also showed that LM and EHEC died faster in regular yogurt than in drinking yogurt due to the lower pH of regular yogurt (4.14 ± 0.02) compared with drinking yogurt (4.60 ± 0.02). This result is consistent with the study of Millet et al. [45], showing that low pH can decrease the growth of LM in raw-milk cheese. Guraya et al. [46] have also suggested that the viability of EHEC is drastically decreased in yogurt with pH below 4.1. Additionally, drinking yogurt had higher water activity (0.961 ± 0.001) than regular yogurt (0.943 ± 0.002) in this work. The Aw is the availability of the water in the product for microbes, and the higher the Aw, the better microorganisms can survive. At 10 °C, the highest survival ability was observed for EHEC in drinking yogurt, followed by EHEC in regular yogurt, LM in drinking yogurt, and LM in regular yogurt (Figure 2). Overall, EHEC survived better than LM, especially at low temperatures, regardless of the kind of yogurt in this work (Figure 3).
Validation The RMSE value is one of the parameters that can estimate the accuracy of a predictive model, and it was used to assess the suitability of the model. A predictive model can be considered perfect if RMSE values are close to zero [47]. In a study of model development using the Weibull model for heat-stressed E. coli O157:H7 and L. monocytogenes in kefir, RMSE values ranged from 0.13 to 0.52 for E. coli O157:H7 and 0.06 to 0.82 for L. monocytogenes [48]. The RMSE value calculated from the estimated data of LM was 0.185 in drinking yogurt and 0.115 in regular yogurt for interpolation. The RMSE value of EPEC was 1.079 in drinking yogurt and 1.001 in regular yogurt for extrapolation. As a result, the developed models in this study were judged to be appropriate to predict the survival of LM, EHEC, and EPEC in drinking and regular yogurt.

Change in Contamination Level of Listeria monocytogenes and EHEC from Market to Home The average contamination level of LM decreased to −4.396 log CFU/g in drinking yogurt and −7.965 log CFU/g in regular yogurt at the market. The average contamination level in drinking yogurt decreased slightly to −4.396 log CFU/g during transportation from market to home, and there was no change in regular yogurt. It further decreased to −5.00 log CFU/g for drinking yogurt and −10.25 log CFU/g for regular yogurt during storage at home before consumption. The initial contamination level of EHEC was the same as that of LM. The contamination level of EHEC was −3.957 log CFU/g in drinking yogurt and −4.244 log CFU/g in regular yogurt at the market, which was maintained when yogurt was transported from market to home. The contamination level decreased to −3.969 log CFU/g in drinking yogurt and −4.71 log CFU/g in regular yogurt before consumption at home.
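The RMSE criterion used for the validation above (Equation (3)) amounts to the short calculation below; the observed and predicted log counts shown are placeholders, not the study's validation data.

```python
# Minimal sketch of the RMSE check used for model validation (Equation (3));
# the observed/predicted log counts below are hypothetical placeholders.
import numpy as np

def rmse(observed, predicted):
    """Root mean square error between observed and model-predicted log10 counts."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

observed_log_cfu = [5.2, 4.8, 4.1, 3.5]    # hypothetical counts measured at 7 C
predicted_log_cfu = [5.1, 4.7, 4.2, 3.3]   # hypothetical Weibull-model predictions

print(f"RMSE = {rmse(observed_log_cfu, predicted_log_cfu):.3f} log CFU/g")
```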
The contamination level of both LM and EHEC decreased in yogurt from the market to home because both pathogens cannot grow in yogurt, regardless of the type of yogurt. In this work, the contamination level of LM decreased more rapidly than that of EHEC in regular yogurt. Hu et al. [49] observed that organic acid produced by Lactobacillus plantarum isolated from traditional dairy products (kumis, milk thistle, yogurt) exhibits antimicrobial activity against pathogenic bacteria. They found that different proportions of organic acid (primarily lactic and acetic acid) show different antimicrobial activity against pathogenic bacteria. The difference in the proportion of organic acid between drinking and regular yogurt may affect the behavior of pathogens in yogurt.

Consumption Data of Yogurt The consumption amount and intake rate of yogurt are shown in Figure 4. As a result of fitting the distribution with @Risk, the RiskLaplace model was found to be the most suitable. Daily average consumption amounts of drinking yogurt and regular yogurt were 140 g and 97.046 g, respectively. Intake rates for drinking yogurt and regular yogurt were calculated to be 0.184 and 0.146, respectively. It could be concluded that the consumption of drinking yogurt was higher than that of regular yogurt.
Hazard Characterization and Risk Characterization The final risks of LM and EHEC in yogurt were analyzed separately for the susceptible and general populations, using the contamination level, consumption data, and dose-response models derived according to the market-to-home scenario (Tables 2 and 3). As a result, no risk was estimated for the general population due to LM. However, the probability of foodborne illness due to LM was 1.91 × 10^−15 in drinking yogurt and 2.87 × 10^−16 in regular yogurt for the susceptible population per day. It is concluded that the risk of listeriosis from yogurt consumption is very low. A risk assessment of LM in milk [36] likewise demonstrated that the risk from milk consumption is low (5.0 × 10^−9 cases per serving). By contrast, the probability was calculated to be 1.44 × 10^−8 in drinking yogurt and 5.09 × 10^−9 in regular yogurt for EHEC (Table 4). The risk of foodborne illness from both pathogens was higher for drinking yogurt because of the higher survival ability of the pathogens in it than in regular yogurt. Additionally, the highest risk was found for EHEC in drinking yogurt, reflecting the highest survival ability of EHEC in drinking yogurt (Figure 2), in which the highest delta value was observed. As a result, the risk of EHEC is higher than that of LM in yogurt. Yogurt has an inhibitory effect on pathogenic microorganisms due to organic acids such as lactic acid and acetic acid produced by LAB [50], a low pH below 4.1 [46], and bacteriocins or bacteriocin-like substances produced by LAB [51]. Yang et al. [51] isolated and identified bacteriocinogenic LAB from various cheeses and yogurts. They found that 20% of isolates (28 isolates) out of 138 LAB isolates had antimicrobial effects on all microorganisms tested, except for E. coli. In the present study, we found that EHEC shows better survival ability than LM in both types of yogurt. A similar trend was reported by Gulmez and Guven [52], who compared the inhibitory effects on LM, E. coli O157:H7, and Yersinia enterocolitica in yogurt and kefir samples during 24 h of fermentation and 10 days of storage. They found that E. coli O157:H7 showed the highest resistance during the yogurt's fermentation and storage time. The most recent study showed [53] that most of the bacteriocins produced by LAB isolates are active against Gram-positive bacteria, such as LM and Staphylococcus aureus, whereas Gram-negative bacteria, E. coli and Salmonella Typhimurium, displayed considerable resistance.

Sensitivity Analysis Sensitivity analysis was conducted to identify input variables with a major influence on the results. If the result has a negative value, it has a negative correlation: as the input value increases, the output value decreases. If it is 0, there is no correlation. A positive value indicates a positive correlation, meaning that the output value increases as the input value increases [54]. Results of the analysis of regression coefficients for the probability of foodborne illness caused by LM and EHEC due to yogurt consumption are shown in Figure 5. Both pathogens had a negative correlation with storage time at the market; the risk of foodborne illness decreased with increased storage time at the market. Both pathogens had the greatest positive correlation with the initial contamination level and consumption. As a result, it is considered that initial hygiene management before manufacture can reduce the risk of LM and EHEC.
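For orientation, the per-serving risk calculation implied by Equations (4) and (5) can be sketched as follows. The LM r value and the 140 g serving come from the text; the EHEC Beta-Poisson parameters and the concentration at consumption are hypothetical placeholders, so the printed numbers are illustrative only.

```python
# Sketch of the per-serving risk calculation implied by Equations (4) and (5) and the
# discussion above. Not the authors' @RISK workbook: the EHEC Beta-Poisson parameters
# (alpha, beta) and the concentration at consumption are illustrative assumptions.
import numpy as np

def exponential_dose_response(dose_cfu, r):
    """Equation (4): probability of illness for L. monocytogenes, P = 1 - exp(-r * N)."""
    return 1.0 - np.exp(-r * dose_cfu)

def beta_poisson_dose_response(dose_cfu, alpha, beta):
    """Equation (5): approximate Beta-Poisson probability of illness, P = 1 - (1 + N/beta)^-alpha."""
    return 1.0 - (1.0 + dose_cfu / beta) ** (-alpha)

# Assumed concentration at consumption (log CFU/g) and a 140 g serving of drinking yogurt.
log_cfu_per_g = -5.0
dose = (10 ** log_cfu_per_g) * 140.0

risk_lm_susceptible = exponential_dose_response(dose, r=1.06e-12)
risk_ehec = beta_poisson_dose_response(dose, alpha=0.49, beta=1.81e5)  # hypothetical parameters

print(f"LM (susceptible): {risk_lm_susceptible:.2e} per serving")
print(f"EHEC:             {risk_ehec:.2e} per serving")
```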
LM can survive longer in yogurt when it is contaminated at higher concentrations during yogurt manufacture [55]. Kasımoğlu and Akgün [56] found that yogurt contaminated with E. coli O157:H7 at the 10^2 CFU/g level has a shorter elimination time than yogurt contaminated at the 10^6 CFU/g level. They suggested that the decline time of E. coli O157:H7 introduced at the pre-fermentation stage could be affected by the initial contamination level. Therefore, initial hygiene management is important to inhibit contamination and reduce the risk of pathogens in yogurt.

Conclusions Results showed that the risk of serious illness from LM and EHEC due to drinking and regular yogurt consumption is very low. Yogurt does not permit the growth of LM and EHEC during storage at 4, 10, 17, 25, and 36 °C. The contamination level of both LM and EHEC decreased in yogurt from the market to home, and LM and EHEC died faster in regular yogurt than in drinking yogurt. However, controlling the initial contamination level of EHEC during yogurt manufacture should be emphasized because its survival ability in both drinking and regular yogurt is higher than that of LM.

Data Availability Statement: We did not report any additional data for this study.
Evaluation of Lipopolysaccharide and Interleukin-6 as Useful Screening Tool for Chronic Endometritis Universal diagnostic criteria for chronic endometritis (CE) have not been established due to differences in study design among researchers and a lack of typical clinical cases. Lipopolysaccharides (LPSs) have been reported to cause inflammation in the reproductive systems of several animals. This study aimed to elucidate the influence of LPS in the pathogenesis of CE in humans. We investigated whether LPS affected cytokine production and cell proliferation in the endometrium using in vivo and in vitro experiments. LPS concentrations were analyzed between control and CE patients using endometrial tissues. LPS administration stimulated the proliferation of EM-E6/E7 cells derived from human endometrial cells. High LPS concentrations were detected in CE patients. LPS concentration was found to correlate with IL-6 gene expression in the endometrium. Inflammation signaling evoked by LPS led to the onset of CE, since LPS stimulates inflammatory responses and cell cycles in the endometrium. We identified LPS and IL-6 as suitable candidate markers for the diagnosis of CE. Introduction Chronic endometritis (CE) is one of the causes of unexplained infertility and repeated implantation failure [1,2].CE is an inflammatory disease of the endometrium, which is characterized by mucosal edema, polyps, and abnormal plasma cell infiltration [3].A retrospective cohort study of 1551 premenopausal women reported a 24.4% incidence of CE [4]; however, precise diagnostic criteria for CE have not yet been established.Chronic inflammation of the endometrium may be accompanied by symptoms such as pelvic pain, irregular genital bleeding, and intercourse pain; however, it is often asymptomatic and difficult to diagnose [5,6]. In general, CE is diagnosed using hysteroscopy and pathologic examination.Currently, next-generation sequencing (NGS) analysis is focused on the bacterial flora of the vagina and uterus [5,7].Hysteroscopy provides subjective information by the physician, which may not confirm the clinical findings.The pathological diagnosis of CE involves staining for plasma cells in endometrial tissue, which is frequently performed using CD138 immunostaining.However, the histological method of CD138 cannot be used in all the scenarios due to lack of consensus on a threshold for the definition of CE [8].Additionally, the efficiency of CD138 detection depends on the timing of the menstrual cycle, which influences endometrial proliferation [5,9].Therefore, both hysteroscopy and pathological examination remain uncertain for the diagnosis of CE. 
Gram-negative bacteria have an endotoxin (lipopolysaccharide: LPS) in the outer membrane of the cell wall [10].LPS has been reported to cause inflammation in the reproductive systems of several animals [11].Intravaginal administration of LPS was used to create a model of acute endometritis in mice [12].LPS administration causes extensive inflammatory macrophage infiltration of uterine tissue, and it also increases the expression of Il-1β, Tnfα and other genes in mice [13].In humans, LPS affects the trophoblastic spheroids and endometrial epithelial cells and decreases uterine receptivity [14].On the other hand, few human studies have reported that LPS is related to endometrial inflammation, which has been associated with infertility and repeated implantation failure [1].Furthermore, another study has shown the effect of proinflammatory cytokines (IL-6, IL-1β and TNFα) on menstrual effluent CE patients.The levels of these cytokines were significantly higher in patients with CE compared to controls [15].An investigation of cytokine levels can be used as a non-invasive method for the detection of CE.However, the pathogenic mechanism of CE remains unknown, and there is no consensus on its diagnostic criteria.In this study, the effects of LPS on the inflammatory response and reproductive function of the human endometrium were investigated to elucidate the mechanisms of CE using endometrial tissues.In addition, we aimed to discern potential markers for CE and mechanisms involved in the molecular functioning of the endometrium. Effect of LPS on Proliferation of EM Cells We performed an experiment to estimate the effect of LPS, which is involved in inflammation and cell cycle in vitro using EM-E6/E7-hTERT-2 cells (EM cells) that originate from endometrial gland epithelial cells [16].IL-1β is a marker of inflammation with increased expression at 4 h by LPS administration in EM cells (Figure S1).Gene expression analysis was performed for inflammatory (IL-1β, IL-6, CD14, TLR4, CD138) and cell cycle markers (CyclinD1, p27, p53, Ki67) 4 h after LPS administration (Figure 1).The expression of the IL-1β gene was increased 5.5-fold at 10 3 ng/mL and 22-fold at 10 4 ng/mL LPS concentrations compared to the controls (p < 0.05, Figure 1A).The expression of the IL-6 gene was 69-fold higher compared to the controls at 10 4 ng/mL LPS concentrations (p < 0.05).The expression of TLR4 was significantly increased at concentrations of 10 ng/mL and 10 2 ng/mL LPS (p < 0.05, Figure 1B).EM cells were examined for gene expression related to cell cycle and proliferation.The expression of the p27 and Ki67 genes was significantly increased at concentrations of 10, 10 2 and 10 3 ng/mL LPS (p < 0.05, Figure 1C).LPS did not alter the expression of CD14, CD138, CyclinD1 and p53. LPS Promotes Cell Proliferation of EM Cells LPS increased the expression of Ki67 in Figure 1, which led to the hypothesis that the proliferation of EM cells was induced by LPS.CCK8 assay indicated that the viability of EM cells increased after LPS administration at a concentration of 10 ng/mL LPS (p < 0.05, Figure 2). LPS Promotes Cell Proliferation of EM Cells LPS increased the expression of Ki67 in Figure 1, which led to the hypothesis that the proliferation of EM cells was induced by LPS.CCK8 assay indicated that the viability of EM cells increased after LPS administration at a concentration of 10 ng/mL LPS (p < 0.05, Figure 2). 
Effects of LPS Administration on Mouse Uterus In Vivo Furthermore, to examine whether LPS affects the mechanism of inflammation and proliferation in vivo, we performed an experiment using a mouse model of the inflammatory environment of the uterus.The chronic inflammatory condition in mice was consecutively induced by administering LPS (1 mg/mL) once a week for 2 weeks.After a week, the samples were collected and analyzed for inflammation.LPS-injected mice showed no significant differences in physiology, including body weight (BW) and LPS concentrations in plasma and uterine supernatants, compared to controls (Figure S2A,B).Additionally, LPS Promotes Cell Proliferation of EM Cells LPS increased the expression of Ki67 in Figure 1, which led to the hypothesis that the proliferation of EM cells was induced by LPS.CCK8 assay indicated that the viability of EM cells increased after LPS administration at a concentration of 10 ng/mL LPS (p < 0.05, Figure 2). Effects of LPS Administration on Mouse Uterus In Vivo Furthermore, to examine whether LPS affects the mechanism of inflammation and proliferation in vivo, we performed an experiment using a mouse model of the inflammatory environment of the uterus.The chronic inflammatory condition in mice was consecutively induced by administering LPS (1 mg/mL) once a week for 2 weeks.After a week, the samples were collected and analyzed for inflammation.LPS-injected mice showed no significant differences in physiology, including body weight (BW) and LPS concentrations in plasma and uterine supernatants, compared to controls (Figure S2A,B).Additionally, Effects of LPS Administration on Mouse Uterus In Vivo Furthermore, to examine whether LPS affects the mechanism of inflammation and proliferation in vivo, we performed an experiment using a mouse model of the inflammatory environment of the uterus.The chronic inflammatory condition in mice was consecutively induced by administering LPS (1 mg/mL) once a week for 2 weeks.After a week, the samples were collected and analyzed for inflammation.LPS-injected mice showed no significant differences in physiology, including body weight (BW) and LPS concentrations in plasma and uterine supernatants, compared to controls (Figure S2A,B).Additionally, no hyperemia or histological lesions were observed in the uterus post-LPS injection (Figures 3A and S3).The IHC staining of CD138, which is utilized for the diagnosis of CE, demonstrated conspicuous positive signals in the endometrium of the group of mice administered with LPS in the mouse uterus (Figure 3A).The overall area, including the uterine cavity, did not differ between the control and LPS groups (p = 0.13).However, a trend of decrease was indicated in the functional layer area, including the functional layer within the uterine cavity, in the LPS group compared to the control group (p < 0.1, Figure 3B).LPS stimulation increased the expression of Tlr4 in mouse uterus compared to the controls (p < 0.05, Figure 3C).The expression of Ki67, which is a proliferation marker, was significantly lower in the LPS group than in the control group (p < 0.05).LPS did not alter the expression level of Il-6 and CD138 (p = 0.25, p = 0.12).Although the IHC staining of CD138 observed clearly increased the signal of CD138-positive cells, there was no discernible difference in the gene expression of CD138 in uterine tissues between the LPS group and the control group.Despite the detected positive signal in the IHC staining due to LPS stimulation in the mouse uterus, distinguishing the gene 
expression of CD138 between the LPS group and the control group proved challenging.Furthermore, the group of LPS stimulation into the uterine exhibited a tendency toward reducing the functional layer in the endometrium and the decrease in the gene expression of Ki67 compared to the control group.These findings indicate a disruption in the normalcy of the endometrium, highlighting the impact of consecutively induced inflammatory conditions in the CE mouse model. no hyperemia or histological lesions were observed in the uterus post-LPS injection (Fig- ures 3A and S3).The IHC staining of CD138, which is utilized for the diagnosis of CE, demonstrated conspicuous positive signals in the endometrium of the group of mice administered with LPS in the mouse uterus (Figure 3A).The overall area, including the uterine cavity, did not differ between the control and LPS groups (p = 0.13).However, a trend of decrease was indicated in the functional layer area, including the functional layer within the uterine cavity, in the LPS group compared to the control group (p < 0.1, Figure 3B).LPS stimulation increased the expression of Tlr4 in mouse uterus compared to the controls (p < 0.05, Figure 3C).The expression of Ki67, which is a proliferation marker, was significantly lower in the LPS group than in the control group (p < 0.05).LPS did not alter the expression level of Il-6 and CD138 (p = 0.25, p = 0.12).Although the IHC staining of CD138 observed clearly increased the signal of CD138-positive cells, there was no discernible difference in the gene expression of CD138 in uterine tissues between the LPS group and the control group.Despite the detected positive signal in the IHC staining due to LPS stimulation in the mouse uterus, distinguishing the gene expression of CD138 between the LPS group and the control group proved challenging.Furthermore, the group of LPS stimulation into the uterine exhibited a tendency toward reducing the functional layer in the endometrium and the decrease in the gene expression of Ki67 compared to the control group.These findings indicate a disruption in the normalcy of the endometrium, highlighting the impact of consecutively induced inflammatory conditions in the CE mouse model. Correlation between LPS and Inflammatory Genes in Human Endometrial Tissues The gene expression levels of TLR4, CD138 and IL-6, which are known LPS inflammatory markers, were compared between CE patients and controls (Figure 4).The expression of TLR4 and IL-6 was significantly higher in patients with CE than in control patients (p < 0.05), whereas no significant difference was found in the expression of CD138 (p = 0.22). after LPS administration.Magnification H&E staining (×40) with Scale bar = 50 µm and CD138 (×40) with Scale bar = 10 µm.(B) Area of total, functional layer and uterine cavity, uterine cavity, functional layer and myometrium compared between control (n = 5) and LPS group (n = 5).Values are showed as mean ± SEM. (C) Gene expression related with CE between control and LPS group in mouse uterus.The experiments were conducted twice.Values are showed as mean ± SEM and † p < 0.1, * p < 0.05, ** p < 0.01. 
Correlation between LPS and Inflammatory Genes in Human Endometrial Tissues The gene expression levels of TLR4, CD138 and IL-6, which are known LPS inflammatory markers, were compared between CE patients and controls (Figure 4).The expression of TLR4 and IL-6 was significantly higher in patients with CE than in control patients (p < 0.05), whereas no significant difference was found in the expression of CD138 (p = 0.22). Correlation between LPS and Cell Cycle Genes in Human Endometrial Tissues The results from EM cells indicated that LPS administration increased the expression of cell cycle markers.We next assessed whether LPS was associated with cell cycle progression in human endometrial tissues.The expression of CyclinD1 and p27 was not significantly different (p = 0.48, p = 0.25, Figure 5).In contrast, p53 and Ki67 showed significantly lower gene expression in patients with CE than in controls (p < 0.05). Correlation between LPS and Cell Cycle Genes in Human Endometrial Tissues The results from EM cells indicated that LPS administration increased the expression of cell cycle markers.We next assessed whether LPS was associated with cell cycle progression in human endometrial tissues.The expression of CyclinD1 and p27 was not significantly different (p = 0.48, p = 0.25, Figure 5).In contrast, p53 and Ki67 showed significantly lower gene expression in patients with CE than in controls (p < 0.05). with Scale bar = 10 µm.(B) Area of total, functional layer and uterine cavity, uterine cavity, functional layer and myometrium compared between control (n = 5) and LPS group (n = 5).Values are showed as mean ± SEM. (C) Gene expression related with CE between control and LPS group in mouse uterus.The experiments were conducted twice.Values are showed as mean ± SEM and † p < 0.1, * p < 0.05, ** p < 0.01. Correlation between LPS and Inflammatory Genes in Human Endometrial Tissues The gene expression levels of TLR4, CD138 and IL-6, which are known LPS inflammatory markers, were compared between CE patients and controls (Figure 4).The expression of TLR4 and IL-6 was significantly higher in patients with CE than in control patients (p < 0.05), whereas no significant difference was found in the expression of CD138 (p = 0.22). Correlation between LPS and Cell Cycle Genes in Human Endometrial Tissues The results from EM cells indicated that LPS administration increased the expression of cell cycle markers.We next assessed whether LPS was associated with cell cycle progression in human endometrial tissues.The expression of CyclinD1 and p27 was not significantly different (p = 0.48, p = 0.25, Figure 5).In contrast, p53 and Ki67 showed significantly lower gene expression in patients with CE than in controls (p < 0.05). 
Relationship between LPS and CE Figure 6 shows a representative photomicrograph of CD138 immunostaining of uterine sample from a patient with suspected CE (Figure 6A).LPS concentrations in the endometrium were significantly higher in the patients with CE than in controls (p < 0.05, Figure 6B,C).No significant differences were observed between the two groups in context of the age and smoking status (Table S1).To further investigate the effect of LPS-induced inflammation, known inflammatory markers were examined for their correlation with LPS concentrations in the endometrium (Figure S4) The correlation coefficient between LPS and the gene expression of IL-6 was indicated as rs = 0.62, stipulating a moderate positive correlation.The expression of Ki67 indicated a negative correlation with LPS concentration in endometrial tissues (rs = −0.44, Figure S5). Relationship between LPS and CE Figure 6 shows a representative photomicrograph of CD138 immunostaining of uterine sample from a patient with suspected CE (Figure 6A).LPS concentrations in the endometrium were significantly higher in the patients with CE than in controls (p < 0.05, Figure 6B,C).No significant differences were observed between the two groups in context of the age and smoking status (Table S1).To further investigate the effect of LPS-induced inflammation, known inflammatory markers were examined for their correlation with LPS concentrations in the endometrium (Figure S4) The correlation coefficient between LPS and the gene expression of IL-6 was indicated as rs = 0.62, stipulating a moderate positive correlation.The expression of Ki67 indicated a negative correlation with LPS concentration in endometrial tissues (rs = −0.44, Figure S5). Discussion Our study indicated that high LPS concentrations were detected in endometrial tissues of patients with suspected CE.Among various inflammatory markers, we verified that IL-6 expression correlated with LPS, which suggested that LPS increased inflammatory responses in the endometrium.Our findings raise the possibility that LPS and IL-6 are potential diagnostic criteria for the diagnosis of CE.We observed that inflammation promoted the proliferation of EM cells in vitro, whereas a reduction in the endometrium was observed between humans and mice.Here, we provide evidence that LPS induces inflammation and cell cycle disruption in the endometrium and that LPS may continuously lead to the symptoms of CE. Recent studies have reported the likelihood that vaginal and uterine bacteria can cause endometritis [17,18], suggesting that the LPS of bacteria such as E. 
Discussion

Our study indicated that high LPS concentrations were detected in the endometrial tissues of patients with suspected CE. Among various inflammatory markers, we verified that IL-6 expression correlated with LPS, which suggested that LPS increased inflammatory responses in the endometrium. Our findings raise the possibility that LPS and IL-6 are potential diagnostic criteria for the diagnosis of CE. We observed that inflammation promoted the proliferation of EM cells in vitro, whereas a reduction in the endometrium was observed in both humans and mice. Here, we provide evidence that LPS induces inflammation and cell cycle disruption in the endometrium and that LPS may continuously lead to the symptoms of CE.

Recent studies have reported the likelihood that vaginal and uterine bacteria can cause endometritis [17,18], suggesting that the LPS of bacteria such as E. coli may be responsible for causing CE. Although LPS has been detected in the menstrual effluent of sterile women, as well as in menstrual blood and peritoneal fluid from women with endometriosis [19,20], the existence of LPS in the endometrium has not been identified in gynecological research. High concentrations of LPS were detected in the endometrium of patients with CE in this experiment, and the results suggested that LPS increased the production of cytokines, including IL-6, and initiated inflammation in the endometrium.

Our findings were consistent with those of other studies, which showed that the expression of proinflammatory cytokines, such as IL-6 and IL-1β, was significantly higher in the menstrual effluent of patients with CE [14]. IL-6 is a well-known inflammatory marker of LPS [10,21], and IL-6 has also been shown to be a useful diagnostic marker for endometriosis, which is a chronic inflammatory disease [22]. In bovine endometrial cells, the upregulation of IL-6 and IL-8 due to LPS administration is considered a characteristic feature of inflammation [23]. Other studies reported that a mouse model of endometritis injected with LPS in the uterus showed that LPS significantly increased the expression of Il-6 [12]. However, no difference was observed in Il-6 gene expression after 24 h of LPS administration to the mouse uterus in this study, suggesting that this time point was too late to capture the inflammatory response after LPS administration [24]. Our study suggests that continuous LPS stimulation causes inflammation in human endometrial tissues, as LPS correlates with IL-6 expression. Similarly, the results from EM cells showed that LPS administration rapidly upregulated the expression of IL-6 after 4 h. Hence, our study provides evidence that the cytokine IL-6 was activated by LPS in the endometrium to facilitate the symptoms of CE.
Previous studies have reported that IL-6 stimulates LPS signal transduction through CD14 and TLR4 [25,26]; however, CD14 did not correlate with LPS levels in human endometrial tissue in this study. Other studies have reported that CD14 does not necessarily interact directly with TLR4 [27,28], and it was suggested that inflammatory cascades other than CD14 may be present. We revealed that LPS caused increased IL-6 and TLR4 gene expression in the endometrium in our experiments using human endometrial tissue, EM cells, and mice. Therefore, the results indicated that the presence of LPS might elevate cytokines and cause CE through the TLR4 cascade. A previous study reported that LPS in menstrual blood was involved in TLR4-mediated endometrial proliferation [20]. We also showed that LPS upregulated TLR4 expression, which affected Ki67 expression and disrupted the cell cycle in the endometrium. Our results suggested that LPS-mediated TLR4 affects cell proliferation in the endometrium. Abnormal proliferation of the endometrium can lead to gynecological problems, such as endometriosis, endometrial hyperplasia, unexplained infertility, repeated implantation failure and repeated pregnancy loss [29,30].

The diagnosis of chronic endometritis is complicated; there are mainly three detection methods, and cross-testing is also performed according to the clinical situation. The diagnostic methods for CE include the observation of uterine hyperemia by hysteroscopy, histological examination using CD138, which is the most specific indicator of plasma cells, and bacterial culture to identify the cause of infection [31-34]. In hysteroscopy, the subjective judgment of the physician affects the test results. There are various problems associated with the use of CD138 for the diagnosis of CE. Healthy women have CD138 in the endometrial stroma [35], and the expression of CD138 is dependent on the menstrual cycle [5,36]. In this study, CD138 was used as an inflammatory marker, although it was difficult to distinguish between CE patients and controls. Therefore, we considered IL-6 to be a more appropriate biomarker for the differential diagnosis of CE than CD138, because CD138 is known to be involved in the pathogenesis of inflammatory disease, which leads to changes in IL-6 levels [37]. Some bacterial species may pose challenges in cultivation, and certain bacteria can influence the accuracy of culture results. These considerations should be thoroughly discussed in the context of chronic endometrial diseases during further exploration. Additionally, it is essential to incorporate information from NGS analysis as it relates to the study of CE [5]. NGS provides a powerful tool for comprehensive microbial profiling and may offer valuable insights into the microbial composition and diversity associated with CE.

We observed significant differences in estradiol and progesterone hormone concentrations in the supernatant of human endometrial tissue between CE patients and controls (Table S1). This result was considered to be influenced by the menstrual cycle rather than by LPS. We did not observe any relationship among LPS, BMI (body mass index), bacteria testing and smoking status in this study. Samples were collected to examine the effect of LPS on clinical data.
Our results showed that LPS increased the expression of p27 and Ki67, and it also increased the proliferation of EM cells. It is known that NF-κB, one of the signaling pathways activated by LPS, promotes cell proliferation in the endometrium [38]. However, we found that LPS decreased the expression of the proliferation marker Ki67 in human endometrial tissues and mice. Another experiment using cultured cells showed that LPS administration results in abnormal cell cycles [39]. In our in vivo experiments in humans and mice, cell proliferation was considered to be downregulated because the endometrium was continuously exposed to LPS, and chronic inflammation in vivo may cause a decrease in endometrial cells after acute inflammation. The mouse model of chronic inflammation induced by LPS administration showed an increase in the myometrium. Abnormal cell proliferation is thought to be due to endometrial inflammation, because B cells, which exist in the basal layer near the myometrium, may induce plasma cells to exert the effect of LPS [40]. Although this experiment did not confirm the phenomenon of cell proliferation induced by LPS in vivo, it suggested that LPS might have disrupted the cell cycle of the endometrium in vivo and in vitro. Thus, in the present study, we showed that LPS inflammation disrupts the cell cycle and alters cell proliferation. The proliferation marker Ki67 is recognized as a diagnostic marker for uterine smooth muscle tumors [41]. Therefore, the abnormal expression of Ki67 observed in our study is noteworthy not only for the diagnosis of CE but also for its potential contribution to related phenomena in gynecological cancers.

In conclusion, our study unveils that the inflammatory signaling triggered by LPS is implicated in the initiation of chronic endometritis (CE), as LPS stimulates both inflammatory responses and the cell cycle within the endometrium (Figure 7). Moreover, our identification of LPS and IL-6 in CE underscores their potential as appropriate diagnostic criteria. While a plausible correlation between LPS and CE exists, establishing a direct causal relationship necessitates further thorough investigation and substantiation. These findings not only contribute significantly to unraveling the mechanism of CE but also offer valuable insights into the potential treatment of gynecological diseases. In the future, integrating these markers with hysteroscopy and histological examination holds promise for enhancing the precision of CE diagnoses in clinical practice.
Cell Proliferation Assay

Cell proliferation was assessed using a cell counting kit-8 (CCK8) assay (343-07623, Dojindo, Kumamoto, Japan) according to the manufacturer's protocol. The cells were plated at a density of 5.0 × 10³ cells per well in 96-well flat-bottom plates (167008, Thermo Fisher Scientific, Waltham, MA, USA). Cells were cultured in the absence or presence of LPS and/or E2 at 10⁻¹⁰ mol/mL without FBS for 4 h. After treatment, 10 µL of CCK8 assay solution was added to each well and incubated for another 4 h. The signal was recorded on a microplate spectrophotometer (Multiskan GO, Thermo Fisher Scientific, USA) to measure absorbance at a 450 nm wavelength.
Animal Treatment

Ten female ICR mice (Jackson Laboratory, Tsukuba, Japan) were used in this study. The mice were 6-7 weeks old, 30 ± 5 g in weight, fed a standard diet, and housed in a temperature-controlled (23 ± 2 °C) and humidity-controlled (50 ± 5%) environment with a 12 h light/12 h dark cycle. This study was performed under the Regulations Regarding Animal Experiments of the Obihiro University of Agriculture and Veterinary Medicine (approval number ). The mice were randomly divided into control (n = 5) and LPS (n = 5) groups. The body weight, food intake, and water consumption of the mice were measured once weekly. The method of inducing endometritis in mice was similar to that used in previous studies [12,42]. Briefly, 50 µL of LPS (1 mg/mL) was injected into the vagina of the mice in the LPS group once a week for 2 weeks. After a week, the mice were anesthetized using isoflurane, blood from the heart was collected for measurement of LPS concentration, and uterus samples were used for evaluating gene expression and histological analysis. Collected samples were stored at −80 °C until the experiments were performed.

Real-Time PCR

RNA isolated from endometrial tissues and cells was treated with DNase I (18068-015, Thermo Fisher Scientific, USA). cDNA was reverse-transcribed using SuperScript II (18064022, Thermo Fisher Scientific, USA) following the manufacturer's instructions. The primer sequences used are listed in Table 1. Real-time PCR was performed using SsoAdvanced Universal SYBR Green Supermix (1725271, Bio-Rad, Hercules, CA, USA) on a LightCycler 96 system (05815916001, Roche, Rotkreuz, Switzerland). PCR amplification was performed for 35 cycles of 95 °C for 10 s and 60 °C for 60 s. β-actin (ACTB) was used as the reference gene, and relative gene expression was analyzed using the 2^−ΔΔCt method.

Histological Analysis

Histological samples were collected from mouse uterus tissue. The uterine samples were fixed in 10% formalin, dehydrated using an ethyl alcohol series, cleared in xylene, and embedded in paraffin wax. Paraffin-embedded samples were sectioned to a thickness of 4 µm using an SM2000R microtome (Leica, Wetzlar, Germany). The sections were air-dried, deparaffinized, and stained with hematoxylin and eosin. Images were obtained using a ZEISS Axio Zoom.V16 for Biology microscope (Carl Zeiss AG, Oberkochen, Germany) with ZEN 3.1 Pro software (Carl Zeiss AG, Germany). The lengths of the uterine cavity, dense layer, cavernous layer, and myometrium were measured using ImageJ Fiji (https://imagej.net/software/fiji/ (accessed on 29 December 2023)). The measurements were repeated on 10 different uterine sections for each sample, and the measurements were averaged.

Ethical Considerations

This study was conducted in accordance with the tenets of the Declaration of Helsinki after obtaining permission from the Obihiro University of Agriculture and Veterinary Medicine (Ethics review document: 2020-05-2). Informed consent was obtained from all the patients prior to the collection of tissue samples.
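The relative expression values reported in the Results were calculated with the 2^−ΔΔCt method named in the Real-Time PCR subsection above. The sketch below is a minimal illustration of that calculation; the Ct values and the treated/control labels are hypothetical, not data from this study.

```python
# Minimal sketch of the 2^-ΔΔCt relative-expression calculation (ACTB as reference gene).
# All Ct values below are hypothetical example numbers.

def relative_expression(ct_target_sample, ct_actb_sample, ct_target_control, ct_actb_control):
    """Fold change of the target gene in a sample versus the control, normalized to ACTB."""
    delta_ct_sample = ct_target_sample - ct_actb_sample       # ΔCt for the sample
    delta_ct_control = ct_target_control - ct_actb_control    # ΔCt for the control
    delta_delta_ct = delta_ct_sample - delta_ct_control       # ΔΔCt
    return 2 ** (-delta_delta_ct)

# Example: target Ct 24.0 vs ACTB Ct 18.0 in a treated sample,
# target Ct 26.0 vs ACTB Ct 18.5 in the control condition.
print(relative_expression(24.0, 18.0, 26.0, 18.5))  # 2^-(6.0 - 7.5) = 2^1.5 ≈ 2.83-fold
```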
Study Participants and Design

Endometrial curettage tissues and blood samples were collected from patients with suspected CE (n = 13) and controls who were not suspected of CE (n = 15) at the Obihiro ART Clinic (Hokkaido, Japan). Patients with suspected CE were defined as those with endometrial polyps detected on transvaginal ultrasonography performed by a single physician. The clinical criteria were superficial stromal edema, increased stromal density, and a pleomorphic stromal inflammatory infiltrate dominated by plasma cells. Endometrial tissues were collected from patients aged 27-42 years by dilation and curettage between 2020 and 2022. Tissue was removed from the uterine cavity by curettage or aspiration biopsy. Patient data and CD138 immunostaining results were obtained from the ART clinic. Endometrial biopsy specimens were obtained randomly from patients, and histological examination of CD138 was assessed by a single pathologist who was blinded.

Statistical Analysis

All data were presented as the mean ± standard error of the mean (SEM). Statistical analysis was performed using the free software R version 4.1.3 (https://www.r-project.org/ (accessed on 29 December 2023)) or Microsoft Excel (https://www.microsoft.com/en-sg/microsoft-365/excel (accessed on 29 December 2023)). The results of each experiment were compared with those of the control using the Mann-Whitney U-test or t-test. Dunnett's test was used to compare the means of several experimental groups with the mean of the control group in the experiments with EM cells. In the experiments using human tissue, a correlation coefficient was calculated using Spearman's rank correlation coefficient (rs). Patient data were expressed as percentages and analyzed using Fisher's exact test. Statistical significance was set at p < 0.05.

Patents

This research report has been issued the patent number 2023-058245 in Japan.

Supplementary Materials: The supporting information can be downloaded at https://www.mdpi.com/article/10.3390/ijms25042017/s1.

Institutional Review Board Statement: All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1964 and its later amendments. Informed consent was obtained from all patients included in the study. This study was approved by the ethical review board of the Obihiro University of Agriculture and Veterinary Medicine (Ethics review document 2020-05-2). The animal studies were performed in accordance with the Guide for the Care and Use of Laboratory Animals published by Obihiro University (approval number ).

Informed Consent Statement: Written informed consent has been obtained from the patients to publish this paper.

Data Availability Statement: Data is contained within the article and Supplementary Material.

Figure 3. Effects of consecutive LPS administration on mouse uterus in vivo. (A) Histology of the uterus after LPS administration. H&E staining (×40, scale bar = 50 µm) and CD138 (×40, scale bar = 10 µm). (B) Areas of the total uterus, functional layer and uterine cavity, uterine cavity, functional layer, and myometrium compared between the control (n = 5) and LPS (n = 5) groups. Values are shown as mean ± SEM. (C) Gene expression related to CE between the control and LPS groups in mouse uterus. The experiments were conducted twice. Values are shown as mean ± SEM; † p < 0.1, * p < 0.05, ** p < 0.01.
Figure 5. Gene expression of cell cycle markers (CyclinD1, p27, p53, Ki67) in human endometrial tissue. Comparison of gene expression between controls (n = 10) and CE patients (n = 13). The experiments were conducted once, utilizing uterine endometrial tissue samples from each patient. Values are shown as mean ± SEM; * p < 0.05.

Figure 7. Simple graphical abstract of CE-induced abnormal factors.

Table 1. Primer pairs used in gene expression analysis.
Factors That Mattered in Helping Travelers From Countries With Ebola Outbreaks Participate in Post-Arrival Monitoring During the 2014-2016 Ebola Epidemic During the 2014-2016 Ebola epidemic in West Africa, the US Centers for Disease Control and Prevention (CDC) developed the CARE+ program to help travelers arriving to the United States from countries with Ebola outbreaks to meet US government requirements of post-arrival monitoring. We assessed 2 outcomes: (1) factors associated with travelers’ intention to monitor themselves and report to local or state public health authority (PHA) and (2) factors associated with self-reported adherence to post-arrival monitoring and reporting requirements. We conducted 1195 intercept in-person interviews with travelers arriving from countries with Ebola outbreaks at 2 airports between April and June 2015. In addition, 654 (54.7%) of these travelers participated in a telephone interview 3 to 5 days after intercept, and 319 (26.7%) participated in a second telephone interview 2 days before the end of their post-arrival monitoring. We used regression modeling to examine variance in the 2 outcomes due to 4 types of factors: (1) programmatic, (2) perceptual, (3) demographic, and (4) travel-related factors. Factors associated with the intention to adhere to requirements included clarity of the purpose of screening (B = 0.051, 95% confidence interval [CI], 0.011-0.092), perceived approval of others (B = 0.103, 95% CI, 0.058-0.148), perceived seriousness of Ebola (B = 0.054, 95% CI, 0.031-0.077), confidence in one’s ability to perform behaviors (B = 0.250, 95% CI, 0.193-0.306), ease of following instructions (B = 0.053, 95% CI, 0.010-0.097), and trust in CARE Ambassador (B = 0.056, 95% CI, 0.009-0.103). Respondents’ perception of the seriousness of Ebola was the single factor associated with adherence to requirements (odds ratio [OR] = 0.81, 95% CI, 0.673-0.980, for non-adherent vs adherent participants and OR = 0.86, 95% CI, 0.745-0.997, for lost to follow-up vs adherent participants). Results from this assessment can guide public health officials in future outbreaks by identifying factors that may affect adherence to public health programs designed to prevent the spread of epidemics. Introduction During the 2014-2016 Ebola epidemic in West Africa, novel approaches were developed to assess and manage the risk of travelers arriving to the United States from countries with Ebola outbreaks. Enhanced Entry Risk Assessment and Post-Arrival Monitoring In October 2014, after 2 imported cases and an associated contact investigation of Ebola in the United States, 1-3 US Centers for Disease Control and Prevention (CDC) revised previously issued movement and monitoring guidance to recommend active monitoring (where travelers had to take their temperature and evaluate themselves for Ebola symptoms twice a day and communicate at least once a day with a state or local public health authority [PHA]), and direct active monitoring (where public health workers had to make a direct contact with the traveler at least once a day to see if they have fever or other Ebola symptoms) in some circumstances, of travelers arriving from countries with Ebola outbreaks. 4 Based on this guidance, travelers at designated US ports of entry were to undergo an enhanced risk assessment which classified them as having "low but not zero risk," "some risk," and "high risk." 
Those who were designated as "low but not zero risk" were recommended to be actively monitored for 21 days after the last potential Ebola virus exposure. US Customs and Border Protections (CBP) and CDC partnered at 5 US ports of entry to conduct the enhanced entry risk assessment at Chicago O'Hare International Airport (ORD), Hartsfield-Jackson Atlanta International Airport (ATL), Newark Liberty International Airport (EWK), John F Kennedy International Airport (JFK), and Washington Dulles International Airport (IAD). 5 All air travelers who had been in countries with Ebola outbreaks were directed through these airports. Upon arrival, the travelers were directed to a screening area where the risk assessment was conducted. The risk assessment involved asking travelers 5 questions, observing for and asking about symptoms, and taking their temperature. US Customs and Border Protections officers collected travelers' destination and contact information, which CDC passed to the receiving PHA. 4,5 For most travelers, CBP assigned a risk level of "low but not zero." CARE Kit To help travelers self-monitor and communicate with the state or local PHA, CDC created the Check and Report Ebola Kit (CARE Kit), consisting of a digital thermometer, a symptom and temperature log, graphical depictions of Ebola symptoms, contact information for PHAs by jurisdiction, and a wallet-sized CARE card that reminded travelers to monitor their health, contained instructions for safely seeking care if needed, and alerted health care workers of possible Ebola exposure. 6 US Customs and Border Protections officers gave the CARE Kit to travelers after they completed the risk assessment. Launch of CARE+ State and local PHAs struggled to monitor all travelers consistently in the initial weeks of the program. 7,8 In response, approximately 6 weeks after CDC released its recommendations for post-arrival monitoring, CDC launched the CARE+ program, which introduced CARE Ambassadors. 8,9 CARE Ambassadors were health educators trained to explain monitoring requirements and teach travelers how to use CARE Kit tools. They met with travelers for 5 to 8 minutes after CBP finished the risk assessment process. 9 Ambassadors also gave travelers a cellular flip phone with at least 21 days of unlimited voice and text service and showed travelers how to use the phone. The CARE phone number was also provided to the state or local PHA to facilitate initial contact and continued communication between the traveler and the PHA. CARE+ was developed in response to the observed challenges with program implementation and with behavioral science principles, which suggest that adherence to behavioral recommendations reflects not only people's knowledge, motivations, and intentions but also message source credibility, social norms, and availability of resources or tools needed to perform recommended behaviors. [10][11][12][13][14][15] People appear more likely to share personal information when they believe those requesting the information are trustworthy. [16][17][18][19][20] CARE+ Evaluation The assessment of the CARE+ program included factors associated with intentions to adhere and with self-reported adherence to requirements. Our study does not speak to the value of screening as a public health strategy, per se. Several behavioral and information processing theories guided this assessment. 
[10][11][12][13][14][15][21][22][23] We aimed to answer this question: What were the programmatic and perceptual predictors of travelers' intentions to adhere to post-arrival monitoring and reporting requirements? Methods We collected information in 3 phases: an in-person intercept interview at the airport, a first telephone interview, and a second telephone interview. In all phases, interviewers were trained and supervised by a project staff member. Airport Intercept Interviews From April through June 2015, we conducted airport intercept interviews at JFK and Dulles airports with travelers arriving from countries with Ebola outbreaks who were 18 years or older and spoke either English or French. John F Kennedy International Airport and Dulles airports received the heaviest volume of travelers from countries with Ebola outbreaks. We conducted the interviews during times when traveler volume from Guinea, Liberia, or Sierra Leone was highest. We approached 2426 travelers at the airport immediately after the travelers' encounter with a CARE Ambassador, and 1195 travelers (49.3%) agreed to and completed the airport interview. Of the 1231 who did not complete the airport interview, 692 (56.2%) refused, 225 (18.3%) spoke a language other than English or French, 112 (9.7%) were under age, and 202 (16.4%) could not finish the interview (eg, the traveler terminated early to catch a flight). The airport interview lasted about 10 minutes with interviewers recording responses on handheld electronic tablets. After the conclusion of the interview, we asked participants if they would be willing to take part in a telephone interview and, if they agreed, we asked for a phone number to reach them. Telephone Interviews During the airport interview, 1041 travelers agreed to participate in a telephone interview with 654 (62.8%) completing this interview. Those who consented to the telephone interview were called within 5 days of their airport interview. Of the 387 who agreed to participate in a telephone interview but did not, 316 (81.7%) could not be contacted (eg, the phone number they provided us did not reach them), 69 (17.8%) refused, and 2 (0.01%) terminated early. During the first telephone interview, 562 travelers agreed to participate in a second telephone interview, and of these, 319 (56.8%) completed the interview which was conducted 2 days before the end date of the traveler's monitoring period. Of the 243 who agreed to participate in a second telephone interview but did not, 213 (87.7%) could not be contacted, 28 (11.5%) refused, and 2 (0.01%) terminated early. Computer-assisted telephone interviewing systems were used for telephone interviews which ended in July 2015. Of our total sample of 1195, 541 participants were interviewed only at the airport and we could not reach them for a telephone interview. We designated these as "lost to follow-up." The CDC determined this assessment to be non-research, evaluation of public health response activities, and the US Office of Management and Budget approved data collection (OMB Control No. 0920-0932). Measures The 3 phases of interviews consisted of questions about the traveler's experience with the CARE+ program and factors that could influence their intention and ability to meet requirements (Tables 1 and 2). Measures were based on questions with yes/no response options, Likert scales, multiple-choice responses, open-ended items, and indices created from responses from multiple questions. 
Independent Variables

Independent variables included the traveler's trust in the CARE Ambassadors and PHAs, knowledge and beliefs about Ebola, knowledge of requirements, perceptions of program attributes, beliefs about ease or difficulty in meeting requirements, and supports for fulfilling requirements (Table 1).

Dependent Variables

Dependent variables included (1) the traveler's stated intention to meet post-arrival monitoring and reporting requirements, that is, what travelers said they would do, and (2) self-reported fulfillment of post-arrival monitoring and reporting requirements, that is, what travelers said they actually did (Table 2). The post-arrival monitoring and reporting requirements (hereafter "requirements") were the CDC requirements for all travelers to (1) check their temperature twice a day, (2) check themselves for symptoms, (3) record their temperature and symptoms, and (4) report to the PHA each day. We created an "adherence index" of the 4 self-reported behaviors. If travelers reported that they conducted all 4 behaviors, they were coded as "adherent" to requirements; otherwise they were coded as "non-adherent." Only travelers who completed at least the first telephone interview and answered all 4 adherence questions could be classified for adherence.

Covariates

Covariates included a traveler's arrival airport (JFK or Dulles), whether they "work in the field of public health or health care" ("yes" or "no"), whether that day was the first time they had gone through an Ebola screening process in a US airport ("yes" or "no"), and date of arrival. The scales for trust in Ambassadors and trust in the PHA, using 5 items each, were assessed for internal consistency using Cronbach's alpha, and we found high consistency: for the Ambassador trust scales, the alpha was .903 and .884 at the airport interview and first telephone interview, respectively; for trust in the PHA, .928 and .893 at the first telephone interview and second telephone interview, respectively. For date of arrival, we dichotomized whether (1) the person arrived on or before May 9, 2015, or (2) they arrived after May 9, since Liberia was first declared free of Ebola virus transmission on May 9. This time factor was examined because this declaration may have affected travelers' beliefs about their need to fulfill post-arrival monitoring requirements in the United States. Finally, we retrospectively pulled demographics from the Quarantine Activity Reporting System (QARS), a CDC system that records demographic and other data from travelers arriving in the United States. 24 For each participant, CDC pulled the passport country (from what country or countries did the traveler hold a passport), the country or countries with an Ebola outbreak that the traveler had been in, their sex, age, and the unique ID number on the CARE Card issued to the participant. For each traveler, we pulled the age as a continuous variable for ages 25 to 59. Because of the smaller number of younger and older travelers, other ages were put into categories for privacy reasons as follows: 18 to 24, 60 to 64, 65 to 69, and 70 and older. We linked the QARS data set with our data set via the CARE Card ID number, which participants provided during the airport interview.

Data Analysis

We used predictive regression models to account for variance in the dependent variables because we aimed to determine the effect of a series of independent variables (ie, predictors) on a dependent variable (ie, outcome).
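Before turning to the models: the adherence index described under Dependent Variables is a simple deterministic coding rule. The sketch below expresses it in Python; the field names are hypothetical and do not come from the study's actual data dictionary.

```python
# Minimal sketch of the adherence-index classification described above.
# Field names (completed_phone1, temp_check, symptom_check, recorded, reported)
# are hypothetical placeholders, not the study's variable names.

def classify_adherence(traveler: dict) -> str:
    """Return 'adherent', 'non-adherent', 'lost to follow-up', or 'unclassifiable'."""
    if not traveler.get("completed_phone1", False):
        # Consented but did not complete the first telephone interview.
        return "lost to follow-up"
    behaviors = ["temp_check", "symptom_check", "recorded", "reported"]
    answers = [traveler.get(b) for b in behaviors]
    if any(a is None for a in answers):
        # Refused one or more of the 4 adherence questions: cannot be classified.
        return "unclassifiable"
    return "adherent" if all(answers) else "non-adherent"

print(classify_adherence({"completed_phone1": True, "temp_check": True,
                          "symptom_check": True, "recorded": True, "reported": False}))
# -> 'non-adherent' (all 4 behaviors are required for an 'adherent' code)
```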
In this assessment, we used an ordinary least squares (OLS) regression model (which assumes a continuous dependent variable) to examine the effect of independent variables on adherence intentions. We used a multinomial logistic regression model (which assumes an unordered categorical dependent variable) to assess the effect of independent variables on the odds of being in 1 of 3 groups (adherent, non-adherent, or lost to follow-up) based on the adherence index. We modeled the adherent group as the reference group, focusing on affirmed adherence as the key outcome in question. We reported the regression coefficient B, which in the case of OLS regression reflects the amount of change in the outcome that would be predicted by a unit change in the predictor and in the case of multinomial logistic regression reflects the change in the logit of the outcome relative to the referent group (ie, adherent group) based on a unit change in the predictor variable. We judged statistical significance based on a P-value less than .05. We used SAS Enterprise Guide (version 7, SAS Institute, Inc., Cary, NC, USA) for all analyses.

Participant Characteristics

Of the 1195 participants who completed the airport interview, almost all interviews (99.1%) were conducted in English (Table 3). Nearly two-thirds of respondents were men (61.4%), and the average age was 42.9. John F Kennedy International Airport arrivals composed the larger portion of the sample (61.9%), and 21.9% of the sample reported working in public health or health care. Using the adherence index, 406 (34.0%) of the sample were adherent, 203 (17.0%) were non-adherent, 541 (45.3%) were lost to follow-up (eg, they did not complete the first telephone interview), and 45 (3.8%) could not be classified because they refused to answer 1 or more adherence questions. For most participants (85.9%), this entry constituted their first experience with the risk assessment process. Liberia was the country most frequently reported as the country of potential Ebola exposure (47.8%). US passport holders constituted the largest percentage of the sample (40.1%), followed by Liberia (29.1%), Sierra Leone (15.9%), Guinea (4.7%), and other countries (10.2%).

Regression Results

The results from an analysis of bivariate relationships and of the adjusted regression models predicting intentions to adhere showed several variables with positive and statistically significant relationships with intentions to adhere: trust in the CARE Ambassador (B = 0.056, 95% confidence interval [CI], 0.009-0.103), clarity of the purpose of screening (B = 0.051, 95% CI, 0.011-0.092), perceived approval of others (B = 0.103, 95% CI, 0.058-0.148), perceived seriousness of Ebola (B = 0.054, 95% CI, 0.031-0.077), confidence in one's ability to perform the behaviors (B = 0.250, 95% CI, 0.193-0.306), and ease of following instructions (B = 0.053, 95% CI, 0.010-0.097). For Likert-scale predictors, B represents the average increase in intention to adhere for a 1-unit increase in the predictor. These predictors accounted for 18% of the variance in intentions.

Several predictors were statistically significant in the model predicting self-reported adherence to the 4 required monitoring and reporting actions (Table 5). Specifically, perceptions of Ebola as serious resulted in 19% lower odds of being in the non-adherent group (OR = 0.812, 95% CI, 0.673-0.980) versus the adherent group. The impact of having a non-CARE thermometer on the odds of being in the non-adherent versus the adherent group was just above our threshold for significance. Similarly, higher trust in the CARE Ambassador or perceptions of Ebola as serious resulted in 38% (OR = 0.615, 95% CI, 0.453-0.835) and 14% (OR = 0.862, 95% CI, 0.745-0.997) lower odds, respectively, of being in the lost to follow-up group versus the adherent group. Having a non-CARE+ cell phone resulted in 45% (OR = 1.446, 95% CI, 1.042-2.007) higher odds of being in the lost to follow-up group compared with the adherent group.
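The percentage statements above come straight from the odds ratios, which are themselves the exponentiated multinomial-logit coefficients. A brief illustrative calculation follows; the OR values are those reported above, while the back-calculated coefficient is ours, not taken from the article's tables.

```python
import math

# An odds ratio (OR) below 1 means lower odds relative to the adherent reference
# group; above 1 means higher odds.
or_trust_ltf = 0.615   # trust in CARE Ambassador, lost to follow-up vs adherent
or_phone_ltf = 1.446   # non-CARE+ cell phone, lost to follow-up vs adherent

print(f"{(1 - or_trust_ltf) * 100:.1f}% lower odds")    # ~38% lower, as reported
print(f"{(or_phone_ltf - 1) * 100:.1f}% higher odds")   # ~45% higher, as reported

# The OR is the exponentiated logit coefficient B; the B below is back-calculated
# from the published OR purely for illustration.
b_trust_ltf = math.log(or_trust_ltf)      # ≈ -0.486
print(round(math.exp(b_trust_ltf), 3))    # 0.615
```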
Several covariates affected the odds of being in the lost to follow-up group versus the adherent group. Specifically, those who had a passport from the United States or a country other than a West African country had higher odds of being in the lost to follow-up group, and being a man was found to lower the odds of being in the lost to follow-up group. Being a health care worker resulted in lower odds of being in the lost to follow-up group, but this relationship was just above the significance threshold.

Discussion

During the 2014-2016 Ebola epidemic, Farrar and Piot declared that "classic 'outbreak control' efforts are no longer sufficient for an epidemic of this size." They went on to say that behavioral change interventions need to appreciate culture, be consensual, and be collaborative so that trust is built (or rebuilt). 25 Population mobility, cultural norms, and a lack of trust in authority figures were noted as possible contributors to the epidemic. 25 Sociologist Robert Dingwall noted, "The first line of defense will almost always be social and behavioral interventions that interrupt the movement of the disease through a population." 26 While there was not an Ebola outbreak in the United States nor were there any imported cases of Ebola after the monitoring program began, there was demand by the US public and lawmakers to take action. The post-arrival monitoring program was implemented as a less restrictive alternative to travel bans. 4 Several studies have offered additional insights on important aspects of post-arrival monitoring programs for infectious diseases, including costs, 27 reporting of false data, 28 and psychosocial impact and preferences for monitoring. 28 Both articles point to the importance of applying risk communication principles and practices in outbreak responses. The CARE+ program was developed to promote adherence to monitoring behaviors. Grounded in social and behavioral science, CARE+ included an interpersonal encounter in the preferred language of the traveler in which Ambassadors conveyed the key instructions for active monitoring, answered questions, provided tools needed to meet requirements, and demonstrated the use of those tools.
The responses in this assessment substantiate the need for providing these tools: Half reported not having another thermometer, and more than a third reported not having another cell phone that worked in the United States. Tate et al's survey of persons monitored by the New York City Department of Health and Mental Hygiene showed that respondents rated the prepaid cellular telephone they received as useful. In addition, more than twice as many respondents preferred conducting post-arrival monitoring over the telephone rather than via the Internet. 28 The provision of thermometers and cell phones was important in the context of the expectation of 100% adherence to self-monitoring and reporting. Stehling-Ariza et al reported that less than 1% of post-arrival monitoring of 10 344 persons, done by 60 jurisdictions over a 5-month period, was incomplete and that almost 92% of persons monitored were travelers at low risk. 29

Our results suggest that several factors mattered in helping travelers adhere to post-arrival requirements. Significant predictors of intention or behavior included an array of beliefs that future programs can address, including perceptions of threat, trust in government, family support for behaviors, ease of following instructions, and confidence in performing behaviors. Trust in government can be an important influence on whether a person will adhere to government requirements. Messenger credibility is a well-established predictor of persuasion for attitude and behavior. The trustworthiness and knowledgeability dimensions of source credibility have varied influences. 30 If the messenger is not credible, the message is more likely to be discounted and disregarded. Each of the West African countries had governmental responses to the outbreak that likely shaped perceptions of credibility or trust in the government. Those perceptions are also influenced by culture and history. The same is true with the US government and its citizenry. Trust is not static; actions can build or break trust over time. 31 The CARE+ program was an attempt to personalize the US government in the form of a friendly person providing information and tools for travelers as they arrived in the United States. The CARE+ Ambassadors also set expectations that the travelers would be contacted by a local PHA who would give them additional information about the requirements for their jurisdiction, which varied. 32 This handoff matters in creating a seamless system for implementing post-arrival monitoring, because many travelers may not know that public health functions in the United States are jurisdictional. Travelers with itineraries that took them to multiple jurisdictions would interact with multiple PHAs during their visit. Such situations could increase opportunities for travelers to be lost to follow-up or misunderstand varying requirements. Creators of government initiatives, especially those addressing emerging infectious diseases under intense public scrutiny, should consider how their programs or services build or break trust. In addition, they should consider the psychosocial effects a program may have on participants. Tate et al showed that many respondents experienced a range of feelings as a result of being monitored, such as annoyance, frustration, and stress.
28 When asked what respondents found useful in helping to cope with being actively monitored, the top 2 reported answers were the public health post-arrival monitoring staff and support from family or friends. 28 While trust between program implementers and participants appears to influence behavior, so does having the support of other people. Individuals act in accordance with norms and attitudes of those around them. 21 Perceived support for reporting was associated with intentions to adhere to reporting requirements. Because taking the required actions could publicly signal potential exposure to Ebola, there were concerns that travelers might experience stigma. Tate et al's survey showed that some respondents reported being treated differently by someone outside their household and by someone in their workplace. 28 Acknowledgment of the importance of social support to combat stigmatization is likely a crucial step for various infectious diseases. Most travelers reported that it was easy for them to follow the instructions provided. CARE Kit developers intended the instructions to be clear for an audience with low English literacy, offering step-by-step instructions in plain language and graphical depictions of required behaviors. Moreover, program staff were careful to ensure that the materials developed were culturally relevant for and respectful of travelers. 4 Nevertheless, there were challenges in performing required behaviors. 4 For some participants, using a digital thermometer or a flip phone may have been new-hence, the value of a CARE Ambassador who could tailor verbal instruction to a traveler's need for information or skills. Confidence to perform and having the tools to fulfill the required behaviors go hand in hand. Travelers expressed high confidence in their ability to perform behaviors, and findings showed that they had the tools needed. Some travelers preferred to use their own phones to report to PHAs, while others needed the CARE+ phone. 4 Hennenfent et al's assessment of travelers' experiences with the District of Columbia's postarrival monitoring program showed that travelers perceived the program as beneficial and recommended that future programs distribute resources (eg, mobile phones) based on specific needs of the traveler. 8 It should be noted that after local PHAs made initial contact, they informed travelers how to report in their jurisdiction. The varying reporting approaches included phone calls, Web-based reporting, and in-person visits or Skype interfaces that allowed the PHA to watch the travelers take their temperature. 4 Reich et al offer a framework for assessing the cost-effectiveness of post-arrival monitoring and alternate strategies such as quarantine, pharmaceutical interventions, and risk communication. 27 Limitations The findings in this report are subject to several important limitations. First, the assessment was based on a convenience sample which may introduce selection bias. While there were 5 airports to potentially sample from, the resources available for this assessment allowed the selection of only 2. There may have been differences in travelers between airports. We also could approach travelers only when interviewers were at the airport; travelers who arrived at off-hours could not be interviewed. Unfortunately, we have no way of knowing how our sample would vary from the larger universe of all travelers. 
Second, only 319 (26.7%) of our sample completed all 3 phases, so sample attrition from airport interview to the second telephone interview may have created fundamentally different groups. For example, US passport holders were the largest proportion of respondents who completed the airport interview (46.6%) but represent only 25.8% of those who completed all 3 phases. One possible reason for the attrition is that many respondents provided the interviewer with the CARE+ phone number but after establishing contact with the local PHA may have turned it off and used another phone. Some may have left the United States, while others may have lost interest in participating or felt overwhelmed by the required communications with PHAs. Third, the retrospective data linkage to QARS to associate demographic information to our participants could have introduced inaccuracy caused by potential QARS data entry error. Finally, all responses were self-reported and not verified so were subject to social desirability bias. Conclusions and Practical Implications We present insights gleaned from an evaluation of CARE+, a program designed to support the post-arrival monitoring recommended by CDC to prevent the importation and transmission of Ebola virus in the United States during the 2014-2016 Ebola virus epidemic. This assessment identified predictors of intention and self-reported behavior that can be addressed by integrating social and behavioral science principles in the design of health interventions aimed at preventing the spread of epidemics. The study also suggests that future efforts to promote public health monitoring adherence should consider several key perceptions among people. These include self-confidence in performing the recommended behaviors and trust in program officials. It also is important for people to have the material tools they need to perform monitoring, for example, thermometers. Future initiatives should focus on bolstering such perceptions and ensuring such tools are available for participants.
Characterization of a Human Sapovirus Genotype GII.3 Strain Generated by a Reverse Genetics System: VP2 Is a Minor Structural Protein of the Virion

We devised a reverse genetics system to generate an infectious human sapovirus (HuSaV) GII.3 virus. Capped/uncapped full-length RNAs derived from HuSaV GII.3 AK11 strain generated by in vitro transcription were used to transfect HuTu80 human duodenum carcinoma cells; infectious viruses were recovered from the capped RNA-transfected cells and passaged in the cells. Genome-wide analyses indicated no nucleotide sequence change in the virus genomes in the cell-culture supernatants recovered from the transfection or those from the subsequent infection. No virus growth was detected in the uncapped RNA-transfected cells, suggesting that the 5′-cap structure is essential for the virus' generation and replication. Two types of virus particles were purified from the cell-culture supernatant. The complete particles were 39.2 nm in diameter at a density of 1.350 g/cm³; the empty particles were 42.2 nm in diameter at 1.286 g/cm³. Two proteins (58-kDa p58 and 17-kDa p17) were detected from the purified particles; their molecular weights were similar to those of VP1 (~60 kDa) and VP2 (~16 kDa) of the AK11 strain deduced from their amino acid (aa) sequences. Protein p58 interacted with HuSaV GII.3-VP1-specific antiserum, suggesting that p58 is HuSaV VP1. A total of 94 (57%) aa of p17 were identified by mass spectrometry; the sequences were identical to those of VP2, indicating that p17 is the VP2 of AK11. Our new method produced infectious HuSaVs and demonstrated that VP2 is the minor protein of the virion, suggested to be involved in HuSaV assembly.

Introduction

Sapovirus (SaV) is a positive-sense single-stranded RNA virus that belongs to the genus Sapovirus in the family Caliciviridae [1]. SaVs have been detected in humans and other animals including pigs, mink, dogs, rats, sea lions, bats, and chimpanzees [2]. There are 19 genogroups of the virus, and four of them (GI, GII, GIV, and GV) were detected from humans [3,4]. The antigenicity differs not only among the genogroups but also among the genotypes within particular genogroups, such as GI and GII [5-7]. Human SaVs (HuSaVs) are responsible for both sporadic cases and occasional outbreaks of acute gastroenteritis, and the transmission of HuSaV is thought to occur through fecal-oral routes [1,8-13]. Thus, two mechanisms may underlie the production of SaV VP1. One is that VP1 is cleaved from the ORF1-encoded polyprotein, and the other is that VP1 is translated from the subgenomic RNA [18]. VP1 consists of ~560 aa and is the major capsid component of the SaV virion; the molecular weight (MW) of the PoSaV VP1 is ~58 kDa [19]. The expression of sapovirus VP1 has been achieved with the use of insect or mammalian cells, and the spontaneous assembly of VP1 into virus-like particles (VLPs) that are morphologically similar to native SaV has been demonstrated [5-7,20-22]. ORF2 is predicted to encode a minor ~17-kDa structural protein, VP2. Although PoSaV VP2 was detected in in vitro translation products of a full-length genomic cDNA, it was not identified in porcine SaV virions [19,23]. The function of VP2 is thus unclear. A reverse genetics system is a powerful tool that is widely used to produce infectious viruses from cloned cDNA. Although this system worked in the case of PoSaV [23], no infectious HuSaV viruses have been recovered with a reverse genetics system.
The lack of a cell culture system for the propagation of human SaVs (HuSaV) has so far prevented their rescue by a reverse genetics technique [24]. Recently, a human duodenum carcinoma cell line, HuTu80, was established and shown to propagate HuSaVs of the genotype 3 of genogroup 2 (GII.3), GI.1 and GI.2 strains [25], providing a promising system for the propagation of HuSaV. In the present study, we generated an infectious HuSaV GII.3 using a reverse genetics system with HuTu80 cells, and we confirmed that VP1 and VP2 were the major and minor structural proteins of the virion. These results were further confirmed by experiments using a HuSaV GI.1 strain.

In Vitro Transcription of HuSaV GII.3 RNA

A complete genome of HuSaV GII.3 AK11 strain [25] was synthesized based on the nt sequence deposited in GenBank (accession no. LC715150), to which a T7 RNA polymerase promoter sequence was added at the 5′ end, and an XbaI site was added at the 3′ end following a poly A tail (25 nt). This construct was cloned into a pMX vector to generate a plasmid, pMX-T7Rsp23 (Thermo Fisher Scientific, Waltham, MA, USA) (Figure 1a). The plasmid was linearized with XbaI and purified by phenol/chloroform extraction. Capped and uncapped RNAs were synthesized using an mMACHINE T7 kit and a MEGAscript kit (Ambion, Austin, TX, USA), respectively. All RNAs were purified by lithium chloride precipitation, as described [26].

Cell Culture, Transfection, and Virus Inoculation

A human duodenum carcinoma cell line, HuTu80 (American Type Culture Collection [ATCC] HTB-40™), was grown in Iscove's modified Dulbecco's medium (IMDM) (Sigma-Aldrich, St. Louis, MO, USA) supplemented with glutamine (Gibco, Grand Island, NY, USA), 5% (v/v) heat-inactivated fetal bovine serum (FBS) (Biosera, Kansas City, MO, USA), 100 U/mL penicillin, and 100 µg/mL streptomycin (Gibco) at 37 °C in a humidified 5% CO₂ atmosphere, and passaged every 7 days [25]. The confluent HuTu80 cells were dispersed by trypsinization, and 2 × 10⁵ cells were cultured in a 25-cm² tissue culture flask for 24 h. The cells were then washed with Dulbecco's phosphate buffered saline (PBS) without Mg²⁺ and Ca²⁺ [PBS (−)] and maintained with 6.6 mL of maintenance medium containing IMDM supplemented with 3% (v/v) heat-inactivated FBS and 1 mM glycocholic acid sodium salt (Nacalai Tesque, Kyoto, Japan). Transfection was performed using a TransIT-mRNA Transfection Kit (Mirus Bio, Madison, WI, USA), and each transfection was repeated three times. Briefly, 6.6 µg of the capped or uncapped HuSaV GII.3 RNA was combined with 650 µL of Opti-MEM (Gibco), and then 13.2 µL of mRNA boost reagent and 13.2 µL of TransIT-mRNA reagent were added to the mixture. After 5 min of incubation at room temperature (RT), the mixture was added to the HuTu80 cells. After a 12-h incubation at 37 °C, the medium was replaced with 10 mL of the maintenance medium, followed by further incubation at 37 °C. The medium was replaced with the maintenance medium every 4 days, and the culture supernatant was used for the detection of the capsid protein. For the virus inoculation, the confluent HuTu80 cells were trypsinized, and 2 × 10⁵ cells were cultured in a 25-cm² tissue culture flask. After incubation for 24 h, the cells were washed two times with PBS (−), and a total of 1 mL of the HuSaV GII.3-positive cell culture supernatant was inoculated onto the HuTu80 cells.
After adsorption at 37 °C for 1 h, the cells were washed three times with PBS (−), the medium was replaced with 10 mL of the maintenance medium, and the cells were further incubated at 37 °C.

Antigen Enzyme-Linked Immunosorbent Assay (ELISA) for the Detection of HuSaV GII.3 Capsid VP1 Protein

The capsid VP1 protein in the cell culture supernatant was detected using an antigen detection ELISA, as described [25]. Briefly, flat-bottom 96-well polystyrene microplates (Immulon 2 HB, Dynex Technologies, Chantilly, VA, USA) were coated with 50 µL per well of the rabbit hyperimmune antiserum against HuSaV GII.3 C12 VP1-VLPs at 1:5000 dilution in 0.05 M carbonate buffer (pH 9.6) [7]. The plates were incubated overnight at 4 °C, washed twice with PBS (−), and then blocked with 250 µL of PBS (−) containing 0.5% casein for 2 h at RT or overnight at 4 °C. After the wells were washed three times with PBS (−) containing 0.1% Tween 20 (PBS-T), 50 µL of the cell culture supernatants was added. The plates were incubated for 1 h at RT. The detection of the capsid VP1 protein was performed using guinea pig hyperimmune antiserum against HuSaV GII.3 C12 VP1-VLPs [7]. After the wells were washed three times with PBS-T, 50 µL of the hyperimmune serum (1:3000) was added to each well. The plates were incubated for 1 h at RT followed by three washes with PBS-T. Next, 50 µL of horseradish peroxidase-conjugated goat anti-guinea pig IgG (IgG H+L) (Rockland Immunochemicals, Philadelphia, PA, USA) at 1:4000 dilution in PBS-T containing 0.25% casein was added to each well. The plates were incubated for 1 h at RT and then washed three times with PBS-T. Finally, 50 µL per well of 1 mM substrate 3,3′,5,5′-tetramethylbenzidine (Sigma-Aldrich) and 0.01% H₂O₂ in citrate buffer (pH 3.5) was added, and the plates were left in the dark for 30 min at RT. The reaction was stopped by the addition of 50 µL per well of 1 M H₂SO₄, and the absorbance was measured at 450 nm using a Benchmark Plus Microplate Reader (Bio-Rad Laboratories, Hercules, CA, USA). The absorbance at 750 nm was used as the reference for background subtraction. The uninfected HuTu80 cell culture supernatant (three wells per plate) served as the negative control. When the ratio of the optical density (OD) values between the sample and negative control was >3.0, the sample was judged to be positive.

Real-Time Reverse Transcription-Quantitative Polymerase Chain Reaction (RT-qPCR) for the Detection of HuSaV GII.3 RNA

The viral RNA was extracted from 100 µL of the samples with the use of a High Pure RNA Isolation Kit (Roche Applied Science, Mannheim, Germany) according to the manufacturer's recommendations. The cDNA was synthesized using a random hexamer (Takara, Shiga, Japan) and ReverTra Ace (Toyobo, Osaka, Japan) and then quantified by a TaqMan real-time PCR with a 7500 Fast Real Time PCR System (Applied Biosystems, Foster City, CA, USA). The RT-qPCR was performed with a mixed forward primer, i.e., HuSaV-F1

Purification of HuSaV Particles

The cell culture supernatant collected from the infected cells was clarified by centrifugation at 10,000× g for 30 min, and the supernatant was concentrated by ultracentrifugation at 100,000× g for 3 h in a Beckman SW 32 Ti rotor. The resulting pellet was suspended in PBS (−) at 4 °C overnight. For CsCl gradient centrifugation, 4.5 mL of the samples was mixed with 2.1 g of CsCl and centrifuged at 100,000× g for 24 h at 10 °C in a Beckman SW55Ti rotor.
The gradient was fractionated into 250 µL aliquots, and each fraction was weighed to estimate the buoyant density. Each fraction was diluted with PBS (−) and centrifuged for 2 h at 112,000× g in a Beckman TLA55 rotor, and the pellet was resuspended in PBS (−) and used for the detection of the viral RNA and protein.

Sodium Dodecyl Sulfate-Polyacrylamide Gel Electrophoresis (SDS-PAGE) and Western Blot Analysis

The viral proteins in each fraction were separated on a 5-20% gradient gel (e-PAGEL; Atto, Tokyo, Japan) and then stained with Coomassie brilliant blue (CBB). The proteins in the gel were electrophoretically transferred onto a nitrocellulose membrane. The membrane was then blocked with PBS-T containing 5% skim milk and incubated with a rabbit hyperimmune antiserum against HuSaV GII.3 C12 VP1-VLPs (1:1000) in PBS-T containing 1% skim milk [7]. Detection of the rabbit IgG antibody was achieved using an alkaline phosphatase-conjugated goat anti-rabbit antibody (1:1000) (Chemicon International, Billerica, MA, USA). Nitro blue tetrazolium chloride and 5-bromo-4-chloro-3-indolyl phosphate p-toluidine were used for the detection of antibody binding (Bio-Rad Laboratories).

Transmission Electron Microscopy (TEM)

The purified viral particles were placed on a formvar- and carbon-coated grid for 45 s, rinsed with distilled water, and stained with a 2% uranyl acetate solution. The grids were observed under a transmission electron microscope (HT7700; Hitachi High Technologies, Tokyo, Japan) at 80 kV.

Viral Genome Sequencing

The entire genome sequence of the HuSaV GII.3 strain produced by the reverse genetics system was determined by next-generation sequencing (NGS), as described [28]. Briefly, a 200-base pair (bp) fragment library was constructed for each sample with the NEBNext Ultra RNA Library Prep Kit for Illumina ver. 1.2 (New England Biolabs, Ipswich, MA, USA) according to the manufacturer's instructions. Samples were bar-coded for multiplexing with the use of NEBNext Multiplex Oligos for Illumina and Index Primer Sets 1 and 2 (New England Biolabs). Library purification was done with Agencourt AMPure XP magnetic beads (Beckman Coulter, Pasadena, CA, USA) as recommended in the NEBNext protocol. The quality of the purified libraries was assessed on an MCE-202 MultiNA Microchip Electrophoresis System (Shimadzu, Kyoto, Japan), and the concentrations were determined on a Qubit 2.0 fluorometer using the Qubit HS DNA Assay (Invitrogen, Carlsbad, CA, USA). A 151-cycle paired-end read sequencing run was carried out on a MiSeq desktop sequencer (Illumina, San Diego, CA, USA) using the MiSeq Reagent Kit ver. 2 (300 cycles). After a preliminary analysis, the MiSeq Reporter program was used to generate FASTQ-formatted sequence data for each sample. The sequence data were analyzed using CLC Genomics Workbench software ver. 6.5.1 (CLC bio, Aarhus, Denmark). Contigs were assembled from the obtained sequence reads by de novo assembly. The assembled contig sequences were subsequently used to query the non-redundant nucleotide database in GenBank with the BLAST algorithm [28].

Mass Spectrometry Analysis

The purified HuSaV particles were separated by SDS-PAGE, and the protein bands were cut out. The amino acid (aa) sequence was analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) by an outside company (Integrale, Tokushima, Japan) using the EASY-nLC 1200 system (Thermo Fisher Scientific). The data were collected and analyzed with Mascot Server (available online: https://www.matrixscience.com/help_index.html, accessed on 20 June 2022).
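As noted above, the buoyant density of each 250 µL gradient fraction was estimated from its weight. A minimal sketch of that calculation, and of listing the density of each fraction, is shown below; the fraction weights are placeholder values chosen only so that they reproduce the densities reported for fractions 3-5 and 13-14, and the function and variable names are hypothetical.

```python
FRACTION_VOLUME_ML = 0.250  # each CsCl gradient fraction is a 250 uL aliquot

# Placeholder aliquot weights in grams (fraction number -> weight of the 250 uL aliquot)
fraction_weight_g = {3: 0.3395, 4: 0.3378, 5: 0.3353, 13: 0.3223, 14: 0.3208}

def buoyant_density(weight_g: float, volume_ml: float = FRACTION_VOLUME_ML) -> float:
    """Density in g/cm3 (1 mL = 1 cm3), estimated as aliquot mass over aliquot volume."""
    return weight_g / volume_ml

for frac, w in sorted(fraction_weight_g.items()):
    print(f"fraction {frac:2d}: {buoyant_density(w):.3f} g/cm3")
```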
N-Terminal Amino Acid Sequence Analysis

The purified virus particles were separated by SDS-PAGE and transferred to a polyvinylidene difluoride (PVDF) membrane. The proteins were stained with CBB, and the bands identified were cut out. The N-terminal aa microsequencing was carried out at Hokkaido System Sciences (Hokkaido, Japan) using the Procise 494HT protein sequencer (Applied Biosystems).

Generation of the Infectious HuSaV GII.3 AK11 Strain

The capped RNA encoding the entire genome of the HuSaV GII.3 AK11 strain was used to transfect HuTu80 cells, and the culture supernatant (p0) was collected every 4 days and used for the detection of the capsid VP1 protein by an antigen-capture ELISA. The capsid protein was detected on day 12 post-transfection (p.t.), with an OD value of 1.215, and it was detected constantly thereafter with similar OD values until day 40 p.t. (Figure 1). In contrast, no viral capsid protein was detected in the supernatants of the uncapped RNA-transfected cells, even as of day 40 p.t. Although the transfection was repeated two more times, the results indicated that the uncapped RNA was not capable of replicating and generating the infectious virus (Figure 1b). The supernatant p0 collected on day 16 p.t. was used to inoculate HuTu80 cells, and the culture supernatant (p1) was collected to examine whether the p0 supernatant contained infectious viruses. An OD value of 1.281 was observed on day 4 post-infection (p.i.), and similar values were detected on days 8 and 12 p.i. (Figure 1), indicating that the p0 supernatant was infectious and that the capped RNA produced infectious AK11 viruses. No obvious cytopathic effects were observed in the HuSaV GII.3-infected cells. The virus in supernatant p0 was designated HuSaV GII.3 AK11 p0, and the virus in supernatant p1 was designated HuSaV GII.3 AK11 p1. To determine whether genetic mutations occurred during the virus replication, we used the AK11 p0 collected on day 28 p.t. and the AK11 p1 collected on day 12 p.i. for NGS. The entire nucleotide sequence of both AK11 viruses was identified, with the exception of the first 10 nt at the 5′ terminus, and the identified sequences were identical to the original nucleotide sequence of the AK11 strain. These results indicated that (i) the generation of the virus occurred in a cap-dependent manner, and (ii) virus replication occurred efficiently in HuTu80 cells.

Purification and Characterization of the HuSaV GII.3 AK11 Virions

A total of 300 mL of the p1 supernatant was concentrated and purified by CsCl gradient ultracentrifugation as described in Section 2. When the pelleted virions were analyzed by SDS-PAGE followed by CBB staining, two protein bands, p58 (58 kDa) and p17 (17 kDa), were observed primarily in fractions 3 to 5 (Figure 2a). The molecular weights (MWs) of VP1 and VP2 of the AK11 strain based on their aa sequences, ~60 kDa and ~16 kDa, were similar to those of p58 and p17. A Western blot assay indicated that the HuSaV GII.3 C12 VP1-specific antiserum bound to p58 (Figure 2b), demonstrating that p58 is the VP1 of the AK11 virus. In addition to fractions 3-5, strong p58 bands were observed in fractions 13 and 14 by Western blot assay, suggesting that p58 exists at different densities (Figure 2b). We next examined the viral RNA copy numbers in each fraction by RT-qPCR. A peak of viral RNA was detected in fractions 3, 4, and 5, with values of 1.0 × 10⁹ copies/µL, 1.9 × 10⁹ copies/µL, and 3.0 × 10⁹ copies/µL, respectively (Figure 2c).
The mean density of fractions 3, 4, and 5 was 1.350 g/cm³ (1.358 g/cm³, 1.351 g/cm³, and 1.341 g/cm³), and that of fractions 13 and 14 was 1.286 g/cm³ (1.289 g/cm³ and 1.283 g/cm³) (Figure 2c).

Figure 2. Purification and characterization of AK11 virions. The p1 supernatants collected on day 8 p.i. were concentrated by ultracentrifugation and then purified by CsCl equilibrium density gradient centrifugation. Aliquots from each fraction were analyzed by electrophoresis on 5-20% polyacrylamide gels and stained with CBB (a), and the capsid protein was detected by a Western blotting assay using a VP1-specific antiserum (b). Molecular weight markers (in kDa) are indicated on the left (a,b). The viral RNA detected in each fraction by RT-qPCR and the density of each fraction are shown (c). Electron micrographs of fractions 5 (d) and 13 (e). Bar: 200 nm. The particle sizes were determined using Hitachi EMIP software ver. 0524 (Hitachi High Technologies, Tokyo, Japan).

Electron microscopy revealed many spherical particles showing cup-shaped depressions in fractions 3, 4, and 5, and the average particle size was 39.2 nm in diameter (n = 5, range 38.1 to 39.8 nm) (Figure 2d). Virus particles were also observed in fractions 13 and 14, but most of them were empty, and the average size of the empty particles was 42.2 nm in diameter (n = 5, range 41.8 to 42.6 nm) (Figure 2e). These results indicated that two types of virus particles were produced in HuTu80 cells: complete viral particles of 39.2 nm in diameter at a density of 1.350 g/cm³ in CsCl, and empty particles of 42.2 nm in diameter at a density of 1.286 g/cm³ in CsCl. Both types of particles were released into the cell culture supernatants.

Analyses of the Capsid Proteins of the AK11 and AK20 Virions

VP2 of a sapovirus is thought to be a minor structural protein, but this has remained controversial. Since we observed that p17 was present in the purified virus particles and its MW was similar to that of VP2 of AK11, we speculated that p17 is highly likely to be VP2. To investigate whether p17 exists in other HuSaV strains, we purified HuSaV GI.1 AK20 (LC715151) (AK20) particles from the cell culture supernatants of infected HuTu80 cells, and we compared the structural proteins with those of AK11 by SDS-PAGE followed by CBB staining (Figure 3a). In addition to a major band corresponding to p58, we observed a faint 16-kDa band (i.e., p16) in the AK20 particles. When the AK11 virions were analyzed at the same time, we observed a major p58 band and a similarly faint 17-kDa band (p17), suggesting that the two small proteins are minor structural proteins of the AK20 and AK11 virions, although these viruses belong to different genogroups. To further examine these small proteins, we determined the ratio of the protein content between p58 and p16 or p17 using Image Lab™ software ver. 6.1 (Bio-Rad, Hercules, CA, USA) based on the SDS-PAGE image. The results showed that the ratio between p58 and p17 in the HuSaV GII.3 AK11 particles was 20:1 (124,153,036:6,245,840) (Figure 3b), and that between p58 and p16 in the HuSaV GI.1 AK20 particles was 16:1 (111,100,212:6,958,560) (Figure 3c). These results demonstrated that p17 and p16 are minor proteins present in the HuSaV particles. We further analyzed the aa sequences of p17 and p16 by LC-MS/MS and compared them with the aa sequences of the corresponding VP2. As shown in Figure 3d, VP2 of HuSaV GII.3 AK11 consists of 166 aa, and a total of 94 aa (57%) of the sequences derived from p17 corresponded to aa 30-53, 61-85, 87-106, and 130-155.
Similarly, VP2 of HuSaV GI.1 AK20 consists of 165 aa, and a total of 147 aa (89%) derived from p16 corresponded to aa 2-29, 37-86, 94-113, and 117-165 (Figure 3e). These results clearly indicate that p16 is the VP2 of HuSaV GI.1 AK20 and p17 is the VP2 of HuSaV GII.3 AK11. In other words, VP2 is a component of HuSaV particles. The N-terminal amino acid analyses revealed that the N-termini of p58 of both HuSaVs were blocked.

Discussion

Although feline calicivirus (FCV) and PoSaV have been produced ex vivo by reverse genetics systems using synthetic RNA [23,29], no HuSaV had been successfully produced, since no cell culture system had been developed for the successive replication of the virus. Calicivirus genomes are modified at their 5′ ends by a covalently linked VPg [18]. In the present study, we synthesized the capped and uncapped genome RNAs of the HuSaV GII.3 AK11 strain and used them to transfect HuTu80 cells. An infectious virus was recovered from the capped RNA-transfected cells, whereas no virus was generated in the uncapped RNA-transfected cells, suggesting that the 5′-end capping of HuSaV RNA compensates for the VPg functions and is essential for the production of infectious viruses, as was observed for PoSaV and FCV [23,29]. Our results confirm the usability of the reverse genetics system to produce infectious HuSaVs. Caliciviruses have a small positive-sense RNA genome (~7.5 kb) that is packaged within a T = 3 icosahedral shell assembled from 180 copies of the major capsid protein (VP1) in three quasi-equivalent settings [30]. In addition to VP1, an essential but low-copy-number capsid protein, VP2, which is encoded in all calicivirus genomes, is incorporated into the virion [31]. The VP2 of FCV forms a large portal-like assembly at a unique three-fold axis of symmetry after receptor engagement, and the FCV particle is formed with twelve copies of VP2 [31]. The properties of HuSaV particles have been identified mainly from purified virus particles derived from naturally infected patients [32][33][34][35][36]. Because of these limited resources, information about the virion (particularly relating to VP2) is lacking, and VP2 has not yet been identified in PoSaV virions [19,23]. In the present study, we purified the HuSaV GII.3 and GI.1 particles using a large-scale cell culture and obtained >5 mg/mL of purified virus particles, allowing us to clearly observe the VP2 protein by SDS-PAGE and to analyze its aa sequence by LC-MS/MS. Our results confirmed that VP2 is a minor structural protein associated with HuSaV particles. Interestingly, the ratios of VP1 to VP2 of the two HuSaVs were 16:1 and 20:1, which are similar to the ratio observed in FCV (15:1) [31]. Further studies are necessary to clarify the precise role(s) of VP2 in HuSaV virion assembly. VP1 is cleaved from the ORF1-encoded polyprotein or translated from the subgenomic RNA [37]. The cleavage site between the protease-polymerase (NS6-NS7) and VP1 is E/G in HuSaV GII.2 Mc10; in addition, an N-terminal tri-peptide, MEG, of VP1 is conserved among the HuSaVs [14,15]. In the present study, we used the purified viral particles derived from HuSaV GI.1 and GII.3 to identify the N-terminal aa sequence of VP1, but both VP1 N-termini of HuSaV GI.1 and GII.3 were blocked, and we were unable to identify the N-terminal aa sequence of VP1.
The density of the HuSaV GII.3 AK11 strain purified from the HuTu80 cells was 1.350 g/cm³, which is similar to that of HuSaV GI.1 purified from the stools of patients and of the PoSaV Cowden strain purified from cells [19,35,38]. In our experiments, empty particles were obtained from the GII.3 AK11-infected cell culture supernatants in addition to the complete virions (Figure 2b). The morphology of the empty particles is similar to that of the virus-like particles of the HuSaV GI.1 Mc114, GI.2 Hou90, GII.2 Mc10, and GII.3 C12 strains produced by the expression of the VP1 gene in insect and mammalian cells [7,39,40]. However, due to the limited number of purified empty particles observed herein, we were unable to determine (i) whether VP2 is involved in the empty particles, and (ii) what the N-terminal sequence of VP1 is. It is also of interest to determine whether the empty particles are present in fecal specimens of patients during the infection. In conclusion, we successfully produced an infectious HuSaV GII.3 strain using a reverse genetics system, and we characterized the protein components of the virion. The application of this system would be a useful method for generating infectious HuSaVs when virus-positive fecal materials are not available but nucleotide sequence data are available. This method would also be useful for studies of fundamental molecular biology, viral morphology, mechanisms of replication, and antigenic analyses, as well as for the development of vaccines for HuSaVs.

Data Availability Statement: The sequences of HuSaV GII.3 AK11 and GI.1 AK20 used in this study have been deposited in GenBank (accession nos. LC715150 and LC715151).
Assessing the energy performance of VAV and VRF air conditioning systems in an office building located in the city of Florianópolis

The objective of this study is to analyze the energy performance of two types of water-cooled air conditioning systems, variable air volume (VAV) and variable refrigerant flow (VRF), in terms of their cooling energy use through building simulation. These systems were designed to operate in an office building located in the city of Florianópolis, Brazil. The analysis involved the application of two building use schedules: (a) constant and (b) variable. Moreover, an analysis of the coefficient of performance (COP) and partial load ratio (PLR), and of the percentage of operating hours in each range of cooling COP and PLR for each air conditioning system, allowed the system cooling efficiency to be assessed and the results to be related to the annual energy consumption. The nominal COP of the VAV and VRF systems is 6.7 and 5.0, respectively, but the VRF system presented the lowest energy consumption for both schedules. The difference in the cooling energy consumption values for the VRF and VAV systems, for the variable schedule compared with the constant schedule, is mainly influenced by the partial load performance during the hottest period of the year.

Introduction

Energy efficiency currently represents a theme with major potential for research, mainly because of increased environmental and economic concerns and technological advances. In this regard, important achievements have been made in the area of buildings, due to their contribution to the total energy consumption, a significant share of which is associated with the air conditioning system (ZHOU et al., 2008; AYNUR; RADERMACHER, 2009). In 2018, the IEA (International Energy Agency) published the report "The Future of Cooling", which warns of the tremendous space cooling problem to come in future years. The energy use for space cooling is growing fast worldwide and tripled from 1990 to 2016 (INTERNATIONAL…, 2018). This is putting an enormous strain on electricity systems in many countries, and Brazil is one of them. Firm policy interventions and more efficient air conditioners are needed to minimize the effect on the electricity grid. An assessment of commercial offices in Brazil indicated that air conditioning systems represent from 25% to 75% of the total energy consumption (GHISI; GOSCH; LAMBERTS, 2007). Many countries have developed and improved their building energy labelling since the energy crisis in the 1970s, especially the United States and European countries, where the economy is extremely dependent on oil. Investments in energy efficiency provide financial rewards and environmental benefits. However, in Brazil, efforts to introduce building regulations in this regard are more recent. The first energy efficiency law in Brazil was approved in 2001 (BRASIL, 2001).
This resulted in an energy efficiency labelling system, the Regulation for Energy Efficiency Labelling of Commercial Buildings in Brazil (RTQ-C) (BRASIL, 2009), introduced in 2009. The RTQ-C classifies buildings into five levels, from "A" (most efficient) to "E" (least efficient), based on the building envelope, the lighting system and the air conditioning system. This classification can be based on a surrogate model or obtained using building energy simulation programs. In the surrogate model, the efficiency requirements for central air conditioning systems are based on standardized measurements under nominal conditions, such as the coefficient of performance (COP) and the integrated part load value (IPLV), following the minimum requirements of ASHRAE Standard 90.1 (AMERICAN…, 2013a). The simulation method consists of comparing two models: the real/proposed building and a reference building. The two models must share common characteristics, such as the same air conditioning system type and set point temperature; however, the real/proposed model adopts its own characteristics, for instance its COP and IPLV. In Brazil, the energy efficiency of HVAC equipment is measured according to ISO 5151 (INTERNATIONAL…, 2017), in which the equipment is evaluated under full load and under standard temperature conditions for heat absorption and rejection. However, there are no minimum efficiency standards for chillers, VRF (variable refrigerant flow) systems or self-contained air conditioning systems. Therefore, the Brazilian regulation for non-residential buildings adopted the System Part Load Value (SPLV) to determine the efficiency of the system. The SPLV is a single numerical indicator of part-load performance and a representative indication of a system operating under specific conditions. The SPLV calculation is based on the annual thermal load and the performance of the entire air conditioning system, considering four thermal load conditions (100%, 75%, 50% and 25%). It follows the IPLV calculation methodology for individual equipment, but covers all equipment involved in the air conditioning system and is calculated specifically for the project using actual weather data, actual part load characteristics, actual equipment performance, and the anticipated operating hours. Variable air volume (VAV) air conditioning systems were introduced in the 1960s to reduce building energy consumption during operation at partial load (AYNUR; HWANG; RADERMACHER, 2009) and have reached consolidated acceptance in the market (YAO et al., 2007; AYNUR, 2010). This type of system has a lower acquisition cost than variable refrigerant flow (VRF) systems and offers flexibility and control of the air distribution according to the design requirements (AYNUR; HWANG; RADERMACHER, 2008; AYNUR, 2010). In a VAV system, the rotation of the supply fan motor is modulated through a frequency inverter in order to provide the airflow required by each conditioned zone according to its thermal load demand. One of the other main benefits of typical VAV systems is the ability to use airside economizing to provide cooling when outside conditions are favorable. Thus, the reduction in energy consumption relative to constant air volume systems comes from the supply fan and from the reduction in the airflow that passes through the cooling coil. This also affects the chiller performance, which is the most important factor in terms of system energy consumption.
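The part-load weighting logic behind the IPLV and SPLV indicators described above can be illustrated with a short calculation. The sketch below is a minimal illustration, not the RTQ-C procedure: it assumes the AHRI 550/590-style IPLV weighting factors (0.01, 0.42, 0.45 and 0.12 for the 100%, 75%, 50% and 25% load points), and the part-load COP values are hypothetical; an SPLV would instead use project-specific loads and operating hours.

```python
# Illustrative IPLV-style weighted part-load efficiency.
# Weights follow AHRI 550/590 (an assumption here); COP values are hypothetical.
def iplv(cop_by_load: dict[float, float], weights: dict[float, float]) -> float:
    """Weighted average COP over the standard part-load points."""
    return sum(weights[load] * cop_by_load[load] for load in weights)

ahri_weights = {1.00: 0.01, 0.75: 0.42, 0.50: 0.45, 0.25: 0.12}
example_cop = {1.00: 6.7, 0.75: 7.5, 0.50: 7.9, 0.25: 6.0}  # hypothetical chiller data

print(f"IPLV ~ {iplv(example_cop, ahri_weights):.2f} W/W")
```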
Centrifugal chillers are commonly adopted in large buildings, as their capacity is a function of the amount of refrigerant displaced during operation. According to ASHRAE (AMERICAN…, 2012), capacity control in a centrifugal compressor is mainly performed in two ways: using pre-rotation at the rotor inlet (known as pre-rotation vanes or inlet guide vanes) or a variable speed drive (VSD). With pre-rotation, fins installed in the suction of the rotor are operated by an electric-pneumatic mechanism in order to regulate the flow of refrigerant according to a signal from a sensor that controls the chilled water temperature. With a VSD, the compressor capacity is controlled by varying the rotation speed of the rotor, the flow of refrigerant in the suction and the pressure at the discharge of the compressor. In the search for increased energy efficiency in buildings and their systems, air conditioning companies have been improving variable refrigerant flow (VRF) technology (AYNUR; HWANG; RADERMACHER, 2009; LI; SHIOCHI, 2009; LI; WU, 2010). The VRF air conditioning system was introduced in Japan in the early 1980s. It is a central multi-split direct expansion system that varies the refrigerant flow through a variable speed compressor and electronic expansion valves (EEVs) situated in each indoor unit, which is connected to the outdoor unit through the same refrigeration circuit. There are three different types of VRF available on the market today: cooling-only (single cold), heat pump and heat recovery. The heat pump VRF system supplies either cooling or heating at a given time, whereas the heat recovery VRF system supplies both cooling and heating simultaneously. The outdoor unit can be air-cooled or water-cooled, and the main type of compressor used is the scroll. The main advantages of the VRF system are a flexible and compact configuration, easy operation, individualized comfort control, and a high coefficient of performance under partial load compared to other central air conditioning systems (AMARNATH; BLATT, 2008). This technology is becoming more frequently adopted in office buildings in Brazil. Competition in this segment is increasing around the world due to a strong commercial appeal focused on sustainability and energy saving (LI; WU, 2010; LIU; WONG, 2010). The various benefits of VRF systems drive the growth of this market globally. In Asia and Europe, the VRF air conditioning system is well accepted in the market. In Japan, VRF systems have been used in approximately 50% of medium-sized commercial buildings and in 33% of large buildings. In Europe, where many existing buildings do not have air conditioning, retrofit opportunities represent a growing demand for their application. In the US, although the VRF air conditioning market is still growing, Asian manufacturers seek a position, individually or in partnership with US manufacturers (AMARNATH; BLATT, 2008). The improvement of building energy simulation programs, especially in the last decade, has provided support for studies on air conditioning system efficiency. Computational simulations represent a significant contribution in the context of sustainable development, supporting building labelling programs, especially in Brazil, where investments in this regard are recent. In addition, building energy simulation allows a technical and economic analysis of the cost/benefit ratio to be carried out during the design process.
Literature review

Comparative studies regarding VRF system performance are recent. VRF models have been developed and implemented in DOE-2.1E and IES-VE (HONG et al., 2016) and are available in Trace 700, eQuest, EnergyPro and EnergyPlus (AMARNATH; BLATT, 2008). Studies to improve VRF simulation in the EnergyPlus program have been conducted since 2007, considering several configurations: air-cooled VRF systems (ZHOU et al., 2007), water-cooled VRF systems (LI; WU, 2010) and heat recovery VRF systems (AMARNATH; BLATT, 2008). Field research has also been conducted to evaluate the use of these implementations (ZHOU et al., 2008; LI et al., 2010; EGAN, 2009). The EnergyPlus program, version 7.2 (ENERGYPLUS, 2018), presented the first validated approach to modelling a VRF system, the System Curve-based Model (VRF-SysCurve) (RAUSTAD, 2013). This validated VRF model was experimentally analyzed by Kim et al. (2018), who observed greater agreement between measured and simulated results than that obtained by Zhou et al. (2008). In version 8.4, the EnergyPlus program introduced a second VRF model, the Physics-based Model (VRF-FluidTCtrl), which was validated by Hong et al. (2016). Studies indicate that the VRF system is becoming competitive in relation to the VAV chilled water system owing to its potential reduction in energy consumption and higher partial load efficiency. Zhou et al. (2007) simulated a 10-story office building in Shanghai, where air-cooled VRF systems consumed from 11% to 22% less energy than water-cooled chiller-based fan-coil plus fresh air (FPFA) systems and VAV systems, respectively. It was observed that the air conditioning systems operated for around 80% of the time in a PLR range of 30% to 90%. The same building was analyzed by Li et al. (2009), and the energy consumption of the air-cooled VRF system was 4% lower than that of the water-cooled VRF system and 24% lower than that of the fan-coil plus fresh air (FPFA) system. The greatest difference in cooling energy consumption occurred in June, when the PLR ranged from 0.35 to 0.6, and in July and August, when the PLR ranged from 0.5 to 0.9. The energy savings of a VRF system compared to a VAV system were observed by Aynur, Hwang and Radermacher (2009) in three different existing office buildings in Maryland, USA. The results show that the savings of the VRF system ranged from 27.1% to 57.9% in relation to central VAV systems, depending on the system configuration and design conditions. Yu et al. (2016) compared the cooling energy use data for VAV and VRF systems in five typical office buildings in China, using a site survey and field measurements to quantify the impact of system operation. The cooling energy savings of the VRF system were higher when compared to the VAV system. Kim et al. (2017) observed the energy saving potential of a VRF system in relation to a VAV system for the 16 United States climate zones. It was observed that VRF saves more energy than a VAV system, and the amount depends on the climate zone. In general, energy savings in hot and mild climates are higher than in cold climates due to the heating energy consumption. These studies set an acceptable system design Combination Ratio (CR) for their VRF equipment. The CR is defined as the ratio of the capacity of the indoor units to the capacity of the outdoor unit coupled to them, and acceptable CR ranges vary from manufacturer to manufacturer. However, it is important to highlight the advantages and disadvantages of considering a larger or smaller CR.
The VAV chilled water and VRF air conditioning systems are adopted in energy efficient buildings in Brazil. However, simulation studies on both systems are still scarce in Brazil and in other countries, specifically those addressing the water-cooled VRF system. The Brazilian regulation for energy efficient buildings is in continuous development, not only to include updates regarding new technologies, but also to improve the efficiency level of the systems. Studies addressing the performance of air conditioning systems can provide information that promotes the energy efficient use of buildings, either by stimulating new research or by encouraging the adoption of better design practices. The current study compares the VAV and VRF energy consumption and performance under nominal conditions and partial loads, addressing the importance of the information gained from building energy simulation tools that allow air conditioning systems to be modelled, and the great contribution that these assessments can provide with regard to reducing building energy consumption. Moreover, the study intends to observe how performance indexes, such as COP and IPLV, influence the energy efficiency of VAV and VRF systems in buildings.

Method

The comparative energy analysis methodology regarding the VAV (variable air volume) system and the VRF (variable refrigerant flow) system is presented in Figure 1. The VAV and VRF systems were simulated for an office building located in Florianópolis, Brazil, assuming the design practices and energy efficiency levels determined by the Brazilian Regulation for Energy Efficient Buildings. The computational simulations were run using the EnergyPlus program, version 8.1, to compare and analyze the energy performance of the air conditioning systems. Two building use schedules during the occupation period were simulated, with constant internal loads and with variable internal loads, to assess the effect of the use schedule on the energy consumption of the VAV and VRF systems. The air conditioning system selection criteria and the building energy simulation input data may radically alter the findings; however, the approach is a case study, and the main focus is to assess results based on nominal project requirements considered as efficient in the annual energy performance of air conditioning systems. Both air conditioning systems are water-cooled, cooling-only (single cold) type and operate for cooling throughout the year. Each system is simulated with two configurations: the VAV system with a centrifugal chiller with an inlet guide vanes-type compressor, named the standard chiller, and with a VSD centrifugal chiller; and the VRF system with a combination ratio (CR) greater than 1 (CR > 1) and smaller than 1 (CR < 1). The performance curves for the chillers and the outdoor VRF units are modelled considering manufacturer performance data, showing the differences between a system with a combination ratio greater or smaller than 1. The total cooling energy consumption of both air conditioning systems is compared, and a discussion of the annual results according to the percentage of operation time for different COP and PLR values and the cooling energy performance is provided.

Design parameters of the air conditioning systems

Design days are very important for the HVAC sizing calculation in the EnergyPlus program, which allows the user to adopt a pre-defined design day profile based on the ASHRAE Handbook of Fundamentals or to create their own profile.
For this study, the cooling capacities of the VAV and VRF systems were auto-sized based on a summer design day pre-defined according to the ASHRAE Handbook of Fundamentals (AMERICAN…, 2013b) with a sizing factor of 1.15. The heating system was not considered in the simulations, as it is not typical for office buildings in this location.

Figure 1 - Comparative energy simulation approach

Performance curves were applied to determine the VAV and VRF characteristics, keeping in mind that a slight change in the performance curves would be reflected in the results. The same attention was given to all input data regarding the air conditioning systems adopted. The decision about each system was based on three criteria: (a) the capacity result dimensioned by the program; (b) the availability of data from a real system; and (c) the minimum requirements of the Brazilian regulation for commercial buildings. For the cooling performance results, the influential input data are the nominal capacity, the COP and the performance curves. The curves correct the available capacity and the power consumed during operation, with a value of 1 at the nominal condition. Varying the equipment capacity influences the PLR calculated at each moment, affecting the consumption based on the nominal power. Varying the COP influences the reference nominal power. For the main purpose of this case study, both VAV systems have the same input data and the curves of two different compressors are adopted. The VRF systems have the same curves and the same COP, but different nominal capacities. In addition, the COP value of the VAV systems is different from that of the VRF systems.

Variable air volume system

Figure 2 provides a schematic diagram of the chilled water system. A conventional primary (constant flow)/secondary (variable flow) system was adopted. The chilled water load distribution scheme is sequential; the main chiller operates primarily, and the secondary chiller operates to complete the cooling demand. The chilled water setpoint ranges from 6.7 °C to 12 °C when the outdoor air dry-bulb temperature ranges from 27 °C to 16 °C; when this temperature is greater than 27 °C the setpoint is 6.7 °C, and when it is lower than 16 °C the setpoint is 12 °C. The condenser water setpoint changes according to the outdoor air wet-bulb temperature (plus 5.6 °C). The VAV air loop system is shown in Figure 3. All requirements and design considerations were identical for both VAV air conditioning systems considered, with standard centrifugal chillers and with VSD centrifugal chillers. All pumps operate intermittently and shut off when no load is identified. The nominal power values adopted were 349 kW/(m³/s) for the chilled water pumps and 310 kW/(m³/s) for the condenser water pumps (AMERICAN…, 2012). The curve used for the variable-speed pump power correction was selected from the DataSets folder of the EnergyPlus program, according to Equation 1:

f = C1 + C2 (v/vd) + C3 (v/vd)² + C4 (v/vd)³    Eq. 1

Where: f is the fraction of full load power of the variable-speed pump; v is the current variable-speed pump water flow rate (m³/s); vd is the design maximum variable-speed pump water flow rate (m³/s); and C1 to C4 are the coefficients of the part-load performance curve. The VAV supply fan has an efficiency of 0.65, a pressure rise of 600 Pa and a motor efficiency of 0.8. The curve for the fan power correction is modelled according to Appendix G of ASHRAE Standard 90.1 (AMERICAN…, 2013a). The main characteristics of the VAV air conditioning systems are detailed in Table 1.
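A minimal sketch of how a part-load power correction of the form of Equation 1 behaves is shown below. The cubic coefficients and the design flow rate are illustrative placeholders (the study takes its curve from the EnergyPlus DataSets folder), chosen only so that the power fraction equals 1.0 at design flow; the function name is hypothetical.

```python
def pump_power_fraction(flow_fraction: float,
                        c: tuple[float, float, float, float] = (0.0, 0.0205, -0.0211, 1.0006)) -> float:
    """Cubic part-load power correction for a variable-speed pump (illustrative coefficients)."""
    f = max(0.0, min(flow_fraction, 1.0))
    return c[0] + c[1] * f + c[2] * f**2 + c[3] * f**3

# Example: 349 kW/(m3/s) specific pump power at an assumed design flow of 0.010 m3/s.
design_power_kw = 349 * 0.010
for ff in (0.25, 0.5, 0.75, 1.0):
    print(f"flow fraction {ff:.2f} -> pump power {design_power_kw * pump_power_fraction(ff):.2f} kW")
```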
The chiller cooling capacity and electric power are corrected by three performance curves: a cooling capacity modifier as a function of temperatures (CAPFT), an energy input ratio modifier as a function of temperatures (EIRFT), and an energy input ratio modifier as a function of the part-load ratio (EIRFPLR). Where: CAPFT is the cooling capacity factor as a function of temperature; EIRFT is the energy input to cooling output factor as a function of temperature; EIRFPLR is the energy input to cooling output factor as a function of PLR; Tchw,l is the leaving chilled water temperature (°C); Tcond,e is the entering condenser water temperature (°C); PLR is the partial load ratio (cooling load/chiller available cooling capacity); and a to f are the coefficients of the performance curves. In Brazil, chiller performance curves are not commonly made available by the manufacturers. Therefore, the coefficients of the performance curves for the standard and VSD centrifugal chillers adopted in the simulations were obtained from the DataSet folder of the EnergyPlus program: for the standard chiller, the Trane CVHE model with a capacity of 1329.3 kW and refrigerant R-123, with a COP of 5.38 W/W; and for the VSD chiller, the Carrier 19XR model with a capacity of 1407 kW and refrigerant R-134a, with a COP of 6.04. All of the coefficients of the performance curves are presented in Appendix A - Variable air volume system characteristics.

Variable refrigerant flow system

The indoor units of each thermal zone, represented by a single object for modelling purposes, are connected to an outdoor unit, giving a total of 15 subsystems, as shown in Figure 4. All design requirements and considerations are identical for the two VRF air conditioning systems, with the exception of the combination ratio (CR) selected for the outdoor units: one with a combination ratio greater than 1 (CR > 1) and one smaller than 1 (CR < 1), highlighting the advantages and disadvantages of considering a larger or smaller CR.

Figure 4 - Schematic diagram of the VRF system

Manufacturer catalogs (LG ELECTRONICS, 2014a) were used to select the VRF equipment and to model its characteristics, detailed in Appendix B - Variable refrigerant flow system characteristics. The efficiency of the indoor units' supply air fan is 0.65 and the motor efficiency is 0.8. The air flow rate of the fan is constant and the supply fan operates continuously; the supply fan stays off when the VRF system is not demanded. Two different outdoor unit models were selected for these indoor units to understand the differences between a system with a combination ratio greater or smaller than 1. The CR is a parameter chosen by the designer and, in addition to safety criteria, the choice is influenced by the cost/benefit ratio, the cost of the system per kW and the better partial load performance characteristic of the VRF system. The LG ARWN480DA2 outdoor unit was selected for the VRF system with CR > 1, and the LG ARWN580DA2 outdoor unit for the VRF system with CR < 1. The condenser water flow rate was adopted according to the catalog (LG ELECTRONICS, 2014b). The other design and operating criteria for the condenser loop are the same as those used to configure the VAV air conditioning system. Data regarding the dimensional parameters adopted for the outdoor units, the condenser water pumps and the cooling tower of the VRF air conditioning systems are also given in Appendix B - Variable refrigerant flow system characteristics. The EnergyPlus simulation model used was the VRF System Curve-based Model, in which correction factors adjust the off-reference performance of the VRF system. In this study, the coefficients of the performance curves were obtained by polynomial regression using performance data obtained from catalogs (LG ELECTRONICS, 2014b).
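As an illustration of the curve-fitting step described in the preceding paragraph, the sketch below fits a biquadratic capacity-modifier curve to a handful of catalog-style points by ordinary least squares. The curve form follows the usual biquadratic modifier; the data points, function and variable names are hypothetical placeholders, not LG catalog values.

```python
import numpy as np

# Hypothetical catalog points: (indoor wet-bulb Twb [C], entering condenser water Tc [C], capacity ratio)
catalog = np.array([
    (15.0, 20.0, 0.92), (15.0, 30.0, 0.88), (19.4, 29.4, 1.00),
    (19.4, 20.0, 1.05), (23.0, 30.0, 1.09), (23.0, 40.0, 1.02),
])

Twb, Tc, cap_ratio = catalog.T
# Biquadratic modifier: a + b*Twb + c*Twb^2 + d*Tc + e*Tc^2 + f*Twb*Tc
X = np.column_stack([np.ones_like(Twb), Twb, Twb**2, Tc, Tc**2, Twb * Tc])
coeffs, *_ = np.linalg.lstsq(X, cap_ratio, rcond=None)

def capft(twb: float, tc: float) -> float:
    """Evaluate the fitted biquadratic capacity modifier."""
    return float(coeffs @ np.array([1.0, twb, twb**2, tc, tc**2, twb * tc]))

print("fitted coefficients:", np.round(coeffs, 5))
print("modifier near the rating point (19.4 C, 29.4 C):", round(capft(19.4, 29.4), 3))
```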
Two curves were fitted to correct the available cooling capacity of the indoor unit, according to Equations 5 and 6: a total capacity modifier as a function of temperature (Eq. 5) and a total capacity modifier as a function of the air flow fraction (Eq. 6). Where: CAPFT,iu is the total capacity modifier curve as a function of temperature; CAPFFF is the total capacity modifier curve as a function of flow fraction; Twb,i is the wet-bulb temperature of the air entering the cooling coil (°C); ff is the flow fraction (actual air flow rate/rated air flow rate); and the remaining terms are the coefficients of the performance curves. The same coefficients were adopted for the curves related to all indoor units. The model used as a reference was the LG ARNU423BGA2 (LG ELECTRONICS, 2014b). The coefficients of the performance curves are reported in Appendix B - Variable refrigerant flow system characteristics. For each indoor unit, the cooling capacity data as a function of the wet-bulb temperature obtained from the catalogs was compared with the cooling capacity calculated using the curve obtained for the LG ARNU423BGA2 (Figure 5), and the difference was observed not to be relevant.

Figure 5 - Cooling capacity of the indoor units as a function of indoor wet-bulb temperature

The correction factors applied to the outdoor unit are defined as follows. Where: CAPFT,low is the capacity ratio modifier as a function of low temperature; CAPFT,high is the capacity ratio modifier as a function of high temperature; CAPFT,boundary is the capacity ratio boundary, i.e., the entering condenser water temperature boundary; EIRFT,low is the energy input ratio modifier as a function of low temperature; EIRFT,high is the energy input ratio modifier as a function of high temperature; EIRFT,boundary is the energy input ratio boundary, i.e., the entering condenser water temperature boundary; EIRFPLR,high is the energy input ratio modifier as a function of high part-load ratio (PLR > 1); EIRFPLR,low is the energy input ratio modifier as a function of low part-load ratio (PLR ≤ 1); CRcorr is the combination ratio capacity correction factor; Pcorr is the piping correction factor for length; PLF is the part-load fraction correlation, a factor used to account for start-up losses of the compression system when it cycles on and off (the default value of the EnergyPlus program was adopted, as the data for this curve are not reported in catalogs); Twb,avg is the load-weighted average wet-bulb temperature of the air entering all operating cooling coils (°C); Tc,e is the entering condenser water temperature (°C); PLR is the partial load ratio (actual cooling capacity/available cooling capacity); CR is the combination ratio, i.e., the rated total cooling capacity of the indoor terminal units divided by the rated total cooling capacity of the outdoor unit; Leq is the equivalent piping length between the indoor terminal unit and the outdoor unit (m); H is the vertical height between the indoor terminal unit and the outdoor unit (m); PLRmin is the minimum partial load ratio; and the remaining terms are the coefficients of the performance curves.

Schedule for air conditioning system

Two different building use schedules were adopted. The first one is a constant schedule (Table 2), often considered for the evaluation of office building energy performance in Brazil when no database for the building use is available. The constant schedule was adopted to size the cooling system. The second is a variable schedule (Table 3), based on the office occupancy described by ASHRAE (AMERICAN…, 2007), modified to match the working hours in Brazil. As the cooling energy demand should be lower for a variable schedule, it is important to observe the performance of the air conditioning systems in partial load operation for different cooling demands.
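To make the difference between the two use patterns concrete, the sketch below builds a constant and a variable hourly fraction profile for one weekday and compares their average internal-load fractions over the occupied period. The fractional values and occupied hours are invented placeholders, not the values in Tables 2 and 3.

```python
# Hourly internal-load fractions for one weekday (index 0 = 00:00-01:00); placeholder values.
constant = [1.0 if 8 <= h < 18 else 0.0 for h in range(24)]

variable = [0.0] * 24
for h in range(24):
    if 8 <= h < 12:
        variable[h] = 0.9   # placeholder morning occupancy
    elif 12 <= h < 14:
        variable[h] = 0.5   # placeholder lunch-time dip
    elif 14 <= h < 18:
        variable[h] = 0.9   # placeholder afternoon occupancy

occupied = range(8, 18)
avg_constant = sum(constant[h] for h in occupied) / len(occupied)
avg_variable = sum(variable[h] for h in occupied) / len(occupied)
print(f"average occupied-hours fraction: constant = {avg_constant:.2f}, variable = {avg_variable:.2f}")
```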
The zone thermostat setpoint was 24 °C for both types of air conditioning systems, from 08:00 to 20:00 on weekdays and from 08:00 to 12:00 on Saturdays. On Sundays and public holidays, the air conditioning system is off. No optimal automation system was considered for the air conditioning system.

The prototype building model

The model characteristics are presented in Table 4, with a 50% window-to-wall ratio and 6-mm-thick green glass with a solar factor of 0.623. The solar absorptance is 0.5 for the walls and roof. The internal load parameters are also based on the requirements established in the Regulation for Energy Efficiency Labelling of Commercial Buildings in Brazil (BRAZIL, 2009), represented by equipment (155 W/person), lighting (9.7 W/m²), and people (7 m²/person).

Figure 6 - A 3D representation of the model

The air conditioning air change rate was constant at 27 m³/h per person, considering the nominal occupancy rate for the entire period. The thermal zones were considered slightly pressurized during the operation of the air conditioning system. During the period of non-operation of the air conditioning system, an air infiltration rate of 0.5 air changes per hour (ACH) was adopted for all thermal zones. The weather data assumed is a Test Reference Year (TRY) for the city of Florianópolis. According to NBR 15220-3: Thermal performance of buildings (ABNT, 2005), the city of Florianópolis is in Zone 03. Considering Köppen's climate classification, Florianópolis has a humid subtropical climate. The number of cooling degree hours (base 24 °C) is 4517 (GOULART, 1993).

Cooling energy analysis

The cooling loads of the VRF and VAV systems were compared according to the two building use schedules (constant and variable). The end use energy consumption shows the relative importance of cooling, fans, the cooling tower and pumps for both systems, including the impact of partial load. The annual energy consumption and the performance under nominal conditions and partial loads were also analyzed for the VAV and VRF systems. Therefore, the percentage of hours for which the systems operated in each range of COP and PLR values was investigated, to assess the achievement of the design concepts used as parameters of efficient choices for the systems applied in the case study.

Performance curves

Variable air volume system

The standard chiller nominal COP was 6.7 W/W, resulting in a COP of 6.51 W/W and an IPLV of 6.41 for the design operation. For the VSD chiller, the nominal COP modelled was also 6.7, and the results are a COP of 6.7 W/W and an IPLV of 7.87. The COP of 6.7 W/W was selected in order to assess the impact of the different COP curves of the two chillers adopted in this study. The results of the assessment of these performance correlation curves and the relationship between the COP and PLR values for different operating temperatures are presented in Figure 7. For the standard chiller, it can be observed (Figure 7a) that the maximum cooling efficiency was obtained with the compressor operating under full load (PLR of 100%) and the minimum cooling efficiency with a PLR of 30%. For the VSD chiller (Figure 7b), the highest cooling efficiency range occurred at 70 to 90% PLR, decreasing significantly for PLR below 60%.

Variable refrigerant flow system

The same coefficients of the performance curves were adopted for both systems, "VRF CR > 1" and "VRF CR < 1".
The model selected as a reference to obtain the coefficients was the LG ARWN480DA2. The coefficients of the performance curves for the outdoor units are shown in Appendix B - Variable refrigerant flow system characteristics. The fit shows satisfactory agreement for the LG ARWN480DA2 (rated cooling capacity of 140 kW), and the difference was observed not to be relevant for the LG ARWN580DA2 (rated cooling capacity of 168 kW). The capacity ratio modifier curves (low temperature, high temperature and boundary) are presented in Figure 8, and the energy input ratio modifier curves (low temperature, high temperature and boundary) are presented in Figure 9. These two figures show the difference between the correlations calculated from the regression coefficients and the values calculated directly from the data obtained from the catalog. The capacity ratio and energy input ratio boundary curves are considered to be the same: for all operating conditions, the boundary temperature is equal to 29.4 °C. Below this value the software uses the low-temperature curves, and above this value it uses the high-temperature curves. The energy input ratio modifiers as a function of the part-load ratio are evaluated in Figure 10. The power data as a function of PLR are determined by these curves and compared with the power data from the catalogs; the part-load fraction correlation curve is used for a PLR below 0.3. The available capacity as a function of PLR is calculated using the combination ratio correction curve and compared to the capacity data from the catalogs, as shown in Figure 11. The available capacity is not adjusted for PLR ≤ 1; the simulation model assumes that the capacity supplied is equal to the capacity required when the total capacity of the indoor units is less than or equal to the capacity of the outdoor unit. The manufacturer data for the piping correction factor are the same for the two outdoor units selected, as presented in Figure 12. Figure 13 shows the behavior of the COP in relation to the PLR. The results were calculated considering different ranges of operating temperature. The value of 5 in Figure 13 represents the nominal COP, considering an internal wet-bulb temperature of 19.4 °C and a condenser water temperature of 29.4 °C. The simulated VRF outdoor unit shows its best performance in the PLR range of 40-60%: the efficiency is maximum at a PLR of 50% and decreases significantly down to the minimum PLR of 30%.

Energy consumption according to end use

Figure 14 shows the annual energy consumption according to end use for the VAV and VRF air conditioning systems. For the constant schedule, the VRF system with CR > 1 provides energy consumption reductions of 17.8% and 11.7% in relation to the VAV systems with standard chillers and with VSD chillers, respectively. For the variable schedule, the energy consumption reductions are 19.2% and 12.5% compared to the VAV systems with standard chillers and with VSD chillers. The VRF system with CR > 1 provided reductions of 6.6% and 7.6% for the constant and variable building uses, respectively, when compared to the VRF system with CR < 1. There is a decrease in the cooling energy consumption of the VAV and VRF systems for the variable schedule. The VAV systems showed a higher pump energy consumption than the VRF systems, as the VAV pump power varies with the cooling demand, whereas the pump energy consumption of the VRF systems is set as constant. On the other hand, the fan energy consumption is higher for the VRF systems than for the VAV systems, for both schedules: the supply fans of the VRF indoor units operate continuously, while the VAV supply fan varies the air flow rate according to the cooling demand.
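The COP-versus-PLR behavior described above for Figure 13 can be reproduced qualitatively with the logic of the curve-based model: at the rating temperatures (temperature modifiers equal to 1), input power scales with an energy-input-ratio modifier, so COP(PLR) = COP_rated × PLR / EIRFPLR(PLR). The quadratic coefficients in the sketch below are illustrative placeholders tuned only so that the efficiency peaks near a PLR of 0.5 and falls toward the minimum PLR of 0.3; they are not the regression coefficients used in the study, and the function names are hypothetical.

```python
RATED_COP = 5.0          # nominal COP at Twb 19.4 C / condenser water 29.4 C (as reported)
PLR_MIN = 0.3            # minimum partial-load ratio

def eirfplr(plr: float) -> float:
    """Illustrative quadratic energy-input-ratio modifier (equals 1.0 at full load)."""
    a, b, c = 0.15, 0.25, 0.60
    return a + b * plr + c * plr**2

def cop(plr: float) -> float:
    """Part-load COP implied by the modifier: input power = rated power * EIRFPLR(PLR)."""
    plr = max(PLR_MIN, min(plr, 1.0))
    return RATED_COP * plr / eirfplr(plr)

for plr in (0.3, 0.4, 0.5, 0.6, 0.8, 1.0):
    print(f"PLR {plr:.1f} -> COP {cop(plr):.2f}")
```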
Annual cooling efficiency analysis

The cooling end use is the most influential in the comparison of the energy consumption results obtained in this study (Figure 14), although the pumps and fans also showed a significant influence on the annual energy consumption. For both schedules, it can be observed in Figure 14 that the annual energy consumption of the VAV systems was higher than that of the VRF systems. Even with a COP of 5 W/W under the nominal condition, the VRF systems consumed less cooling energy than the VAV systems with nominal COPs of 6.51 W/W (standard chillers) and 6.70 W/W (VSD chillers). The COP values as a function of PLR showed that the VRF system performs better than the VAV system with VSD chillers only in the PLR range of 40-55%. The VRF system with CR < 1 operates for a longer time at partial load than the VRF system with CR > 1, but resulted in inferior performance. The occurrence of COP values during the operation of the VAV and VRF systems, for the constant and variable schedules, is presented in Figures 15 and 16, respectively. Figure 15 shows that the VAV system with standard chillers presented the lowest COP values and that the operation of the secondary chiller contributed to reducing its performance in relation to the other systems. The main standard chiller operated with COP values from 6 to 7 for 87% of the time, and with COP values from 5 to 6 for 12% of the time. The secondary standard chiller operated in the COP range of 5 to 6 for 76% of the time, with values of 6 to 7 occurring 5% of the time. The VAV system with VSD chillers presented higher COP values for the primary chiller, and lower COP values for the secondary chiller, in relation to the values presented by the VRF system with CR < 1. This resulted in the small difference in the cooling consumption results obtained for these systems. For the VAV system with VSD chillers, COP ranges of 6 to 7 and 7 to 8 are predominant. The main and secondary chillers both operated for 39% of the time in the former range; in the latter range, the main and secondary chillers operated for 45% and 23% of the time, respectively. It should also be noted that for a significant percentage of the time (16%) the main VSD chiller operated with COP values between 8 and 9. The COP ranges with the highest percentage of operation time for the VRF CR < 1 system were 6 to 7 and 7 to 8. The VRF CR > 1 system presented the best COP values, operating most frequently with COP values in the ranges of 7 to 8, 6 to 7, and 8 to 9. It can be observed in Figure 16 that the COP values for the variable schedule are lower than those observed for the constant schedule, except for the main VSD chiller. The COP values of the secondary chillers of the VAV systems contributed significantly to the reduction in the performance of these systems in relation to the VRF systems. For the VAV system with standard chillers, the main standard chiller operated with COP values of 6 to 7 for 58% of the time, and of 5 to 6 for 40% of the time. The secondary standard chiller operated mainly in the COP range of 5 to 6 (74% of the time). For the VSD chillers, the main chiller operated for 37% of the time and the secondary chiller for 66% of the time in the COP range of 6 to 7. In the COP range of 7 to 8, the percentages of operation time were 58% and 9% for the main and secondary VSD chillers, respectively. The COP ranges in which the VRF CR > 1 system operated most were 6 to 7, 7 to 8, and 5 to 6.
For the VRF CR < 1 system, the COP ranges that presented the highest percentages of operation time were 6 to 7, 5 to 6, and 7 to 8. For the VAV systems, it was observed that the main chillers operated in the PLR range of 0.6 to 1.1 for 65% and 57% of the time for the constant and variable schedules, respectively. In this PLR range the main chiller presented a high efficiency. The secondary chillers operated for a significant amount of the time in the PLR range of 0.3 to 0.4 for both building use schedules (55% for the constant schedule and 58% for the variable schedule), but this PLR range was not associated with a high efficiency. Moreover, the percentage of operation time at PLR values of < 0.3 increased from 18% to 26% with the use of the variable schedule. On the other hand, the VRF systems operated most of the time in the PLR range of 0.3 to 0.4, at which their performance starts to decrease. The percentages of operation time in this PLR range were 45% to 58% for the constant schedule and 60% to 70% for the variable schedule. In the PLR range of 0.4 to 0.7, in which the best performance was observed, the percentages of operation time were 38% and 23% for the constant and variable schedules, respectively. Therefore, it can be noted that, even in the case of the constant schedule, the air conditioning systems were operating at PLR values below the range considered to be the most efficient. Since the cooling demand is lower in the case of the variable schedule, the operation time spent in the less efficient PLR range of each air conditioning system increased, resulting in lower COP values. The difference in the cooling energy consumption values for the VRF and VAV systems, for the variable schedule compared with the constant schedule, is mainly influenced by the partial load performance during the hottest period of the year. With a reduction in the PLR values, the VRF system operates for a higher percentage of the time at values closer to a PLR of 0.50. This PLR value is more competitively efficient, contributing to limiting the decrease in the annual performance of the VRF systems. In periods of milder outdoor temperatures, the performance characteristics are different: although there is operation at a PLR of 0.50 during the period with the highest thermal load of the day, the PLR values are very low at the beginning and end of the day, hindering the cooling performance. In addition, the poor performance of the secondary chillers, due to the high percentage of operation time in PLR ranges associated with a significantly lower COP, contributes to the energy consumption difference observed. For both building schedules, the higher cooling efficiency observed for the VRF CR > 1 system in relation to the VRF CR < 1 system is due to the fact that the former operates at higher PLR values. The use of a larger outdoor unit, aiming to increase the time of operation at partial load, had a negative influence on the performance of the VRF air conditioning system evaluated in this study. Also, it can be noted that acceptable CR ranges vary from manufacturer to manufacturer. It is important to understand the reasons for the differences and whether there are any advantages, disadvantages or risks in setting up a system with a larger or smaller CR.
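The kind of occurrence analysis reported above (percentage of operating hours in each COP or PLR range) is straightforward to reproduce from hourly simulation output. The sketch below bins hypothetical hourly COP values into unit-wide ranges; the sample data and names are placeholders, not results from the study.

```python
from collections import Counter

def share_per_bin(values: list[float], width: float = 1.0) -> dict[str, float]:
    """Fraction of operating hours falling in each bin of the given width (zeros excluded)."""
    operating = [v for v in values if v > 0]
    counts = Counter(int(v // width) for v in operating)
    return {f"{k * width:.0f}-{(k + 1) * width:.0f}": n / len(operating) for k, n in sorted(counts.items())}

hourly_cop = [0.0, 6.2, 6.8, 7.1, 7.4, 5.6, 6.9, 8.2, 0.0, 7.7]  # placeholder hourly output
for cop_range, share in share_per_bin(hourly_cop).items():
    print(f"COP {cop_range}: {share:.0%} of operating hours")
```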
For the VAV systems, the intermediate annual COP results of 5 to 8, compared with the nominal COPs of 6.51 and 6.70, are associated with the balance between a satisfactory performance of the main chiller (operation at PLR > 0.6) and a very unsatisfactory performance of the secondary chiller (operation at PLR < 0.4). For the VRF systems, the COP values of 5 to 9, compared with the nominal COP of 5, are associated with operation in the PLR range of 0.3 to 0.7. Discussion The performance of an air conditioning system is a function of its cooling capacity in response to the variation in the thermal loads throughout the year, and also of many other factors, such as the ratio of the actual cooling loads to the design loads. This is particularly relevant to variable flow systems (either VAV or VRF). Therefore, the characteristics of the design and operation conditions should be considered when comparing air conditioning systems, as well as the climatic conditions, building materials and building use schedule. This paper demonstrates the potential of building energy simulation (BES) to model different types of air conditioning systems. BES is becoming a key element in comparing and analyzing the energy performance of air conditioning systems (ZHOU et al., 2008; RAUSTAD, 2013; HONG et al., 2016). A comparison between the simulation results for a VAV and a VRF system was made to assess their cooling energy performance, particularly under partial load operation. For the VAV systems, for the same nominal COP of 6.7 and the same PLR, the VSD chillers reached the best performance for all operating conditions. The final energy consumption was reduced by adopting chillers with better performance at partial loads. On the other hand, for both VAV systems, it was observed that dividing the nominal thermal load between two chillers was not very efficient. The operation of the secondary chiller significantly influenced the annual energy performance of the system by presenting a high frequency of COP values below the nominal COP. For the VRF systems, the hourly COP values were often higher than the nominal COP of 5, reducing the annual energy consumption. In this study, the VRF was more efficient at partial loads when compared with the VSD chiller with a nominal COP of 6.7. However, it was observed that the operation of the VRF systems was also not very efficient, as these systems frequently operated at a low PLR. Although this is a case study, these remarks are significant. The results show that the Brazilian regulation requirements related to air conditioning systems, such as the COP and IPLV, design and operation conditions, are not enough to obtain systems with adequate performance in terms of efficiency level and energy consumption. Measurement procedures considering part load operation and aligned with international standards need to be established in Brazil. Units of measurement and measurement procedures should be consistent so that it is possible to compare the energy performance of HVAC systems in different locations around the world. The results reported in this paper highlight the importance of developing studies on VRF systems in Brazil in order to contribute information that promotes the energy-efficient use of buildings. The results show that the VRF systems presented lower energy consumption when compared with the VAV systems. The same findings were observed in other studies (AYNUR; HWANG; RADERMACHER, 2009; LIU, HONG, 2010; KIM et al., 2017).
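The part-load behaviour that drives these differences can be illustrated with a chiller model of the general form used by EnergyPlus, in which the electric power is the reference power multiplied by temperature and part-load modifiers. The sketch below is only illustrative: the temperature corrections are set to 1 and the curve coefficients are invented, not the ones taken from the EnergyPlus DataSet folder or catalogs in this study.

```python
def eir_f_plr(plr, coeffs=(0.06, 0.58, 0.36)):
    """Part-load EIR modifier, EIRfPLR(PLR) = c0 + c1*PLR + c2*PLR**2 (illustrative coefficients)."""
    c0, c1, c2 = coeffs
    return c0 + c1 * plr + c2 * plr ** 2

def part_load_cop(plr, cop_ref=6.7, q_ref_kw=500.0):
    """Operating COP, assuming capacity and EIR temperature corrections equal to 1."""
    load_kw = plr * q_ref_kw
    power_kw = (q_ref_kw / cop_ref) * eir_f_plr(plr)
    return load_kw / power_kw

for plr in (0.3, 0.5, 0.8, 1.0):
    print(plr, round(part_load_cop(plr), 2))
```

At full load the modifier equals 1 and the operating COP returns to the nominal value, while at intermediate PLR the COP can exceed it, which is the kind of behaviour discussed above for the VSD chillers and the VRF units.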
However, it is important to note that this study is based on several assumptions, some of which are briefly discussed in this section. Regarding building energy simulation, it is important to mention that the system selection criteria may radically alter the findings. For example, a slight change in the performance curves used for each system would have a large effect on the results, as would the use of more efficient chillers. Some choices would likely make the VAV or VRF system a far more efficient choice (YIN; PATE; SWEENEY, 2016). Building energy simulation results can contribute information that promotes the energy-efficient use of buildings. However, it is important to note that building energy simulation is based on several input data and that machine performance has to be properly accounted for. Only one building was investigated, and the building requirements were established according to the Brazilian regulation for commercial buildings. Hence, the findings are primarily applicable to office buildings in Brazil. Another limitation is that only one city in Brazil, Florianopolis, was assessed. A difference in the annual air conditioning savings for hot, mild and cold climates is presented by Kim et al. (2017). The VAV and VRF systems are expected to operate differently depending on the climate conditions. One limitation of the simulation model is the lack of information on the performance curves of each air conditioning system. The coefficients of the performance curves considered were obtained from the DataSet folder of the EnergyPlus program and from catalogs. In addition, it is important to mention that the advantages of the VRF over the VAV air conditioning system reported in this study are based on building energy simulation results. A detailed site survey, measurement and sub-metering can provide more comprehensive data for more detailed comparisons (YU et al., 2016). One important and recent initiative is ASHRAE 205P - "Standard Representation of Performance Simulation Data for HVAC&R and Other Facility Equipment" (AMERICAN…, 2019). This standard is intended to facilitate equipment data exchange formats to improve the accuracy and efficiency of equipment modeling in simulation software. ASHRAE 205P will allow manufacturers and other data producers to implement data writing and reading methods supporting detailed performance data, such as capacity and input power, for all operating conditions. Conclusions In this paper, the cooling energy performance of variable air volume (VAV) and variable refrigerant flow (VRF) systems was assessed and compared in terms of their cooling energy use. Based on the results, the following conclusions can be drawn: (a) the performance curves of the air conditioning equipment showed that the partial load operation characteristic of each system is as influential as the input data for the nominal condition in determining the annual energy consumption. The VRF systems, with a COP of 5, presented lower energy consumption than the VAV systems, with a COP of 6.7; (b) despite achieving all requirements of the Brazilian regulation for office buildings regarding COP, IPLV and operating strategy, the VAV systems do not perform efficiently at partial loads; (c) for VRF systems, the only requirement of the Brazilian regulation is the nominal COP of 2.93, which is exceeded in this study. However, these systems also do not perform well in partial load operation.
(d) the results indicate the need for improvement of the energy efficiency requirements for air conditioning systems in Brazil and highlight the importance of gaining information from the use of building energy simulation tools that allow air conditioning systems to be modelled. However, information regarding the air conditioning performance curves is essential to allow a proper evaluation of their advantages when using building simulation for labeling or retrofit evaluation; and (e) the results underline the importance of developing studies on VRF systems in Brazil in order to contribute information that promotes the energy-efficient use of buildings. Therefore, this comparative analysis of VRF and VAV systems, even with the assumptions considered, contributes to other studies on design practices in Brazil.
2020-05-21T00:06:11.513Z
2020-05-08T00:00:00.000
{ "year": 2020, "sha1": "ff7dcaddd1f17abc370b664d89e599b4fc6d4302", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/j/ac/a/ZzbxNThsqtCM7hKF5GPnrTh/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c31c82d3156e423fea4fbe19d376c2fbb39d00d3", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
249735735
pes2o/s2orc
v3-fos-license
The Association among Cancer in Family History and Psychosocial Stress Aim: The purpose of this study is to examine the association among a family history of cancer, coping style, and emotional distress. Methods: The self-reporting questionnaire, the coping style scale, and the Impact of Event Scale-Revised were used to assess 85 individuals with a family history of cancer and 74 control participants. Results: There were significant differences in anxiety, depression, cancer-specific distress, and psychological adjustment between the two groups. Psychological distress (anxiety, depression, and cancer-specific distress) was associated with a negative coping style and with a family history of cancer. A negative coping style served as a mediator between family history and psychological distress. Conclusion: People with a family history of cancer are more likely to adopt a negative coping style, which predisposes them to more severe psychological distress. INTRODUCTION Researchers from throughout the world are focusing on healthy women with a family history of breast cancer to examine the connection among family history of cancer, coping style, and psychological distress [1]. It has been found that cancer-specific distress was greater in women with a family history of breast cancer than in those without such a history. Individual environmental adaptability and psychological wellness are greatly influenced by coping style [2]. Previous research has found that positive coping styles are associated with good psychosocial health, whereas negative coping styles are associated with maladjustment and are detrimental to individual psychological well-being [3]. A study of healthy women with a family history of breast cancer found that 47 percent were afraid that they might develop breast cancer in the future, and 29 percent indicated that their fear of developing breast cancer altered their everyday lives. In our country, the incidence of breast cancer, lung cancer, and stomach cancer is quite high. The participants of the current study were healthy people with a family history of these three types of cancer [4]. The goal of the current research was to identify coping styles that are detrimental to subjective stress adaptation and to identify more adaptive coping styles by exploring the association among cancer family history, coping style, and psychological distress, which will offer valuable data for psychological health interventions [5]. METHODOLOGY The participants were split into two groups. One group consisted of healthy persons with a family history of such disease (breast cancer, lung cancer or gastric cancer). The researchers enlisted the help of 82 healthy persons who were accompanying cancer patients for their re-examination. This study was conducted at the cancer departments of Mayo Hospital, Lahore, Services Hospital, Lahore and Anmol Cancer Hospital, Lahore from June 2018 to May 2020. Patients were enrolled after obtaining informed and written consent from them. Patients' data were collected after permission from the Ethical Review Committee of the concerned hospital. We divided the participants into two groups, the first comprising those with a family history of cancer and the second comprising those without any family history. A comparison of these two groups is presented in the results section.
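As a rough illustration of how the two groups could be compared on a questionnaire score, the following sketch uses an independent-samples (Welch) t-test; the data frame, column names and scores are hypothetical, and the original article does not state which test statistic was used.

```python
import pandas as pd
from scipy import stats

# Hypothetical questionnaire scores; in practice these would be loaded from the study data.
df = pd.DataFrame({
    "group":   ["family_history"] * 3 + ["control"] * 3,
    "anxiety": [12, 15, 14, 9, 8, 10],
})

fh  = df.loc[df["group"] == "family_history", "anxiety"]
ctl = df.loc[df["group"] == "control", "anxiety"]

t, p = stats.ttest_ind(fh, ctl, equal_var=False)  # Welch's t-test between the two groups
print(f"t = {t:.2f}, p = {p:.3f}")
```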
First, family history was used as the predictor variable and negative coping style as the outcome, to investigate the predictive effect of family history on negative coping style; second, negative coping style was used as the predictor and distress (anxiety, cancer-specific distress) as the outcome, to investigate the predictive effect of a negative coping strategy on anxiety; finally, hierarchical regression was conducted to analyze the predictive effect of negative coping style on anxiety (a minimal sketch of this stepwise procedure is given at the end of this article). RESULTS The total number of items extracted for the two-dimensional mood and anxiety measure was 23. Intrusion, avoidance, and hyperarousal were all covered by the scale. The number of items was 23. The term "case" was replaced with "cancer" in this study, and the total score of the three components represents the amount of cancer-specific distress. There were a total of 26 questions, with 14 positive coping questions and 9 negative coping questions. The higher the overall score, the more distressed the patients are. In terms of anxiety, depression, intrusion, avoidance, and hyperarousal, there was a significant difference between the two groups (Table 1). There was no significant relationship between positive coping style and the five characteristics described above (anxiety, depression, intrusion, avoidance, and hyperarousal). Family history as well as a negative coping style were positively related to these elements. DISCUSSION Various coping techniques can have an impact on an individual's emotional state, which in turn has an impact on their mental health [6,7]. The current study shows that individuals with a family history had significantly higher levels of anxiety, depression, and cancer-specific distress than individuals without a family history. Family history, on the other hand, has a substantial predictive influence on psychological distress, which is connected with the belief that individuals with a family history are likely to get cancer in the future. Coping style can operate as a moderator or as a mediator between stress and psychological reaction. Individuals with a cancer family history must be aware that, while they may be predisposed to cancer owing to genetic factors, they should avoid needless worry; on the other hand, they should focus more on preventing cancer and strive to detect, diagnose, and treat cancer as early as possible. Hereditary cancer accounts for around 12 to 17 percent of all cancers [8]. CONCLUSION The findings indicate that when individuals use a negative coping strategy to deal with stress, they will suffer higher levels of negative emotions and psychological distress. Individuals with a family history of cancer tend to be troubled by "hereditary" cognition, and they are more likely to adopt a negative coping style, resulting in maladjustment. As a result, unpleasant emotions emerge, particularly cancer-specific distress, which is detrimental to psychological health. ETHICAL APPROVAL As per international standard or university standard written ethical approval has been collected and preserved by the author(s). CONSENT As per international standard or university standard, patients' written consent has been collected and preserved by the author(s).
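The stepwise regression procedure referenced above can be sketched as three ordinary least-squares models. The variable names and simulated data below are placeholders, and statsmodels is assumed only for illustration, not because the authors state they used it.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 159
df = pd.DataFrame({"family_history": rng.integers(0, 2, n)})   # 1 = family history of cancer
df["negative_coping"] = 0.5 * df["family_history"] + rng.normal(size=n)
df["anxiety"] = 0.6 * df["negative_coping"] + rng.normal(size=n)

# Step 1: family history -> negative coping style.
m1 = sm.OLS(df["negative_coping"], sm.add_constant(df[["family_history"]])).fit()
# Step 2: negative coping style -> anxiety (or cancer-specific distress).
m2 = sm.OLS(df["anxiety"], sm.add_constant(df[["negative_coping"]])).fit()
# Step 3: both predictors together, the hierarchical step used to assess mediation.
m3 = sm.OLS(df["anxiety"], sm.add_constant(df[["family_history", "negative_coping"]])).fit()

print(m1.params, m2.params, m3.params, sep="\n")
```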
2022-06-17T15:19:59.060Z
2022-06-13T00:00:00.000
{ "year": 2022, "sha1": "c422fd9ac044f53202009231b43785669b1cd634", "oa_license": "CCBY", "oa_url": "https://journaljpri.com/index.php/JPRI/article/download/36285/68618", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3d93722a7834cd7e8efeff91515c041f842f57c1", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
2184309
pes2o/s2orc
v3-fos-license
Bioinformatics clouds for big data manipulation Abstract As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. Reviewers This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. Background With significant advances in high-throughput sequencing technologies and consequently the exponential expansion of biological data, bioinformatics encounters difficulties in storage and analysis of vast amounts of biological data. The gap between sequencing throughput and computer capabilities in dealing with such big data is growing [1]. As reported on February 2012, two nanopore sequencing platforms (GridION and MinION) are capable of delivering ultra-long sequencing reads (~100kb) with additionally higher throughput and much lower cost [2]. This means that biological data will accumulate at an ever-faster pace. Digging out the "treasure" from massive biological data represents the primary challenge in bioinformatics, consequently placing unprecedented demands on big data storage and analysis. With the amount of data growing continuously, it is becoming increasingly daunting for small laboratories or even large institutions to establish and maintain computational infrastructures for data processing. At present, a promising solution to address this challenge is cloud computing [3][4][5], which exploits the full potential of multiple computers and delivers computation and storage as dynamically allocated virtual resources via the Internet [6]. Cloud computing as a public utility The term of "cloud computing" was inspired by the cloud symbol that is often employed to depict the Internet in flowcharts. As a matter of fact, cloud computing is not a new concept; it can date back to 1961 at the MIT Centennial when John McCarthy opined that "computation may someday be organized as a public utility" [7]. Cloud computing makes the best use of multiple computers to provide convenient and on-demand access to hosted resources (e.g., computation, storage, applications, servers, network) via Web Application Programming Interfaces (API). Due to its efficient and economical features, it is believed that cloud computing gains promise in transforming computing into a public utility [8]. Similar to extant public utilities (viz., water, electricity, gas, and telephone), computing utility packages a variety of computer resources as metered services ("pay-as-you-go"), which can be accessed by any person without the necessity to know where the services are hosted or how they are delivered. Whether a public utility or not, cloud computing has already become a significant technology in big data storage and analysis, exerting revolutionary influences on both academia and industry. 
Cloud-based resources in bioinformatics The popularization of cloud computing is to some extent attributable to open source software in aid of its implementation, such as Hadoop and associated software. Hadoop (http://hadoop.apache.org) features two key modules-MapReduce and Hadoop Distributed File System (HDFS). MapReduce divides a computational program into many small sub-problems and distributes them on multiple computer nodes, and HDFS provides a distributed file system that stores data on these nodes. Hadoop and its associated software are designed to handle load balancing among multiple nodes and to detect node failures that can be automatically re-executed on any node. In brief, Hadoop allows for the distributed processing of large datasets across multiple computer nodes, supports big data scaling (HDFS, HBase), and enables fault-tolerant parallelized analysis (MapReduce). Thus, Hadoop meets the needs of bioinformatics and several studies have successfully used Hadoop in bioinformatics [9][10][11], accordingly leading to cloud-based bioinformatics resources. As mentioned, cloud computing delivers hosted services over the Internet. Thus, bioinformatics clouds involve a large variety of services from data storage, data acquisition, to data analysis, which in general fall into four categories (Figure 1), viz., Data as a Service, Software as a Service, Platform as a Service, and Infrastructure as a Service [12]. In what follows, we summarize existing cloudbased resources in bioinformatics and classify them into these four categories (Table 1). Data as a service Bioinformatics clouds are heavily dependent on data, as data are fundamentally crucial for downstream analyses and knowledge discovery. It is reported that annual worldwide sequencing capacity is currently beyond 13 Pbp and still on the increase annually by a factor of five (http://sourceforge.net/apps/mediawiki/jnomics). Due to such unprecedented growth in biological data, delivering Data as a Service (DaaS) via the Internet is of utmost importance [13,14]. DaaS enables dynamic data access on demand and provides up-to-date data that are accessible by a wide range of devices that are connected over the Web. A case in point is Amazon Web Services (AWS) which provides a centralized repository of public data sets, including archives of GenBank, Ensembl, 1000 Genomes, Model Organism Encyclopedia of DNA Elements, Unigene, Influenza Virus, etc. As a matter of fact, AWS contains multiple public datasets for a variety of scientific fields, such as biology, astronomy, chemistry, climate, economics, etc. (http://aws. amazon.com/publicdatasets). All public datasets in AWS are delivered as services and thus can be seamlessly integrated into cloud-based applications [15]. Software as a service Bioinformatics requires a large variety of software tools for different types of data analyses. Software as a Service (SaaS) delivers software services online and facilitates remote access to available bioinformatics software tools through the Internet. As a consequence, SaaS eliminates the need for local installation and eases software maintenances and updates, providing up-to-date cloud-based services for bioinformatic data analysis over the Web. 
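Before turning to specific cloud-scale tools, the MapReduce pattern described above can be illustrated with a toy, single-process example that counts k-mers in short reads; on a real Hadoop cluster the map and reduce steps would run on separate nodes over data stored in HDFS, and the read strings and k value here are invented for illustration.

```python
from collections import defaultdict

# Toy "input splits": short DNA reads that would normally live in HDFS blocks.
reads = ["ACGTACGT", "CGTACGTT", "TTACGTAC"]
K = 4  # k-mer length (illustrative)

# Map: emit (k-mer, 1) pairs from each read independently (parallelizable per split).
def map_read(read):
    return [(read[i:i + K], 1) for i in range(len(read) - K + 1)]

mapped = [pair for read in reads for pair in map_read(read)]

# Shuffle: group values by key, as Hadoop does between the map and reduce phases.
groups = defaultdict(list)
for kmer, count in mapped:
    groups[kmer].append(count)

# Reduce: sum the counts for each k-mer.
kmer_counts = {kmer: sum(counts) for kmer, counts in groups.items()}
print(sorted(kmer_counts.items())[:5])
```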
In the past several years, efforts have been made to develop cloud-scale tools, including sequence mapping [16][17][18], alignment [19], assembly (Contrail, Gaea and Hecate; unpublished), expression analysis [20][21][22], sequence analysis (Jnomics; unpublished), orthology detection [23], peak caller for ChIP-seq data [24], functional annotation of variants from multiple personal genomes [25], identification of epistatic interactions of single nucleotide polymorphisms (SNPs) [26], and various cloud-based applications for NGS (Next-Generation Sequencing) data analysis (BGI Cloud and EasyGenomics; unpublished). Platform as a service To make the cloud programmable, Platform as a Service (PaaS) offers an environment for users to develop, test and deploy cloud applications where computer resources scale automatically and dynamically to match application demand, so that users do not need to know how many resources are required or to assign resources manually in advance. PaaS features rapid application development and good scalability, presenting usefulness in developing specific applications for big biological data analysis. Typically, the environment delivered by PaaS includes programming language execution environments, web servers, and databases. From this point, by delivering data as a service and functioning as a database, DaaS can be regarded as an enhancement of PaaS. Currently, there are only two PaaS platforms in bioinformatics belonging to web servers, namely, Eoulsan [27], which is a cloud-based platform for high-throughput sequencing analyses, and Galaxy Cloud [28,29], which is a cloud-scale Galaxy for large-scale data analyses. Infrastructure as a service To reach the full potential of computer resources, Infrastructure as a service (IaaS) offers a full computer infrastructure by delivering all kinds of virtualized resources via the Internet, including hardware (e.g., CPUs) and software (e.g., operating systems). Users can access virtualized resources as a public utility and pay for the cloud resources that they utilize. Since different users often need different cloud resources, flexibility and customization are essential to IaaS. With the ongoing and rapid advancement of IT, it is increasingly efficient to run applications within Virtual Machines (VMs). VM isolates users from the underlying infrastructure and provides flexibility to meet the customized needs of different users. To date, there are two examples of IaaS in bioinformatics, viz., Cloud BioLinux [30], which is a publicly accessible VM for high-performance bioinformatics computing, and CloVR [31], which is a portable VM that incorporates several pipelines for automated sequence analysis. Toward bioinformatics clouds Albeit relatively new, cloud computing holds great promise in effectively addressing big data storage and analysis problems in bioinformatics. Below, we present our perspectives on the adoption of cloud computing in bioinformatics research. Placing data and software into the cloud The traditional way for bioinformatics analysis often involves downloading data from public sites (e.g., NCBI, Ensembl), installing software tools locally, and running analyses on in-house computer resources. By placing data and software into the cloud and delivering them as services (namely, DaaS and SaaS), data and software can be seamlessly and easily integrated into the cloud so as to achieve big data storage and analysis. Thus, it would be desirable to store and analyze big biological data within the cloud. 
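As an illustration of the DaaS model discussed above, public datasets hosted on AWS can typically be read anonymously with standard S3 client libraries. The sketch below uses boto3 with unsigned requests; the bucket name "1000genomes" is assumed here to be the AWS-hosted 1000 Genomes public dataset and may differ from the actual endpoint.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client, sufficient for publicly readable datasets.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a few objects from the assumed 1000 Genomes public bucket.
resp = s3.list_objects_v2(Bucket="1000genomes", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```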
In the era of big data, however, only a tiny amount of biological data is accessible in the cloud at present (only AWS, including GenBank, Ensembl, 1000 Genomes, etc.) and the vast majority of data are still deposited in conventional biological databases. In the long run, more and more sequencing projects, such as the Genome 10K Project (a collection of DNA sequences representing the genomes of 10,000 vertebrate species; http://www.genome10k.org), the 1001 Genomes Project (a catalog of genetic variation in 1001 strains of Arabidopsis thaliana; http://www.1001genomes.org), the 1KITE Project (1K Insect Transcriptome Evolution; http://www.1kite.org), TCGA (the Cancer Genome Atlas; http://cancergenome.nih.gov), etc., would generate ultra-large volumes of biological data and thus require bioinformatics clouds for big data storage, sharing and analysis [32]. In addition, to decipher the most important and complex biological questions often involves the utilization of multiple tools [33]. However, extant efforts have only touched a small fraction of cloud-based tools. Most software tools are written for desktop (rather than cloud) [34] and therefore are not provided as cloud-based web services accessible via the Web, making it infeasible to perform complex bioinformatics tasks. To fulfill big data storage, sharing and analysis with lower cost and higher efficiency, it is essentially required that a large number of biological data as well as a wide variety of bioinformatics tools should be publicly accessible in the cloud and delivered as services through the Internet. However, it should be also noted that biology is in its infancy (compared with other disciplines, e.g., physics) and many theoretical problems in biology are under the surface, albeit a huge volume of biological data are available now. To address theoretical problems and uncover fundamental theories in biology, it often involves hypothesis formulation, experiment design, data generation/collection, tool development (for data analysis), and knowledge formalization, which can be helpful to recognize what data and tools should be placed into the cloud. Big data transfer Transferring vast amounts of biological data to the cloud is a significant bottleneck in cloud computing [6]; it is not unusual at present to physically ship hard drives to the cloud center (http://aws.amazon.com/importexport). Currently, a promising solution is to integrate innovative transferring technologies into cloud computing. One example is the cloud-based EasyGenomics (http://www. genomics.cn/en/news/show_news?nid=99014), released by BGI (Beijing Genomics Institute), that achieves highspeed genomic data delivery by Aspera's fasp ™ highspeed file transfer technology (which dramatically speeds file transfers over the Web and outperforms conventional technologies such as FTP and HTTP; http://www. asperasoft.com/en/technology/fasp_overview_1/fasp_ technology_overview_1). In June 2012, BGI succeeded in transferring genomic data across the Pacific Ocean at a sustained rate of almost 10 Gigabits per second (http:// www.bio-itworld.com/news/06/29/12/Data-tsunami-BGItransfers-genomic-data-Pacific-10-gigabits-second.html), demonstrating that high-speed transfer technologies (such as Aspera's fasp) are capable of dealing with big data transfers over the Web. Aside from high-speed transfer technologies, there are other technologies that can also aid big data transfer, such as data compression [35,36] and Peer-to-Peer (P2P) data distribution [37,38]. 
A cloud-based lightweight programming environment To automate data analysis, bioinformatic tasks are often implemented as pipelines by linking the output of one tool with the input of another. To perform large-scale data analysis and aid in the development of corresponding bioinformatic pipelines, a cloud-based lightweight programming environment is needed, which allows the swift development of customized pipelines from a large pool of tools and enables automated and configurable analysis on the cloud. Currently, the cloud-based programming paradigm adopted in the bioinformatics community is Hadoop [11,16,17,20,27], in which computation-consuming and data-intensive analyses are primarily solved by distributing tasks over multiple nodes (see tutorials at http://wiki.apache.org/hadoop). However, substantial computational skills are still required for developing cloud-based pipelines in Hadoop and its programming environment is not lightweight for most biologists or people with no or limited programming experience. Ideally, this would be a lightweight programming environment that does not require extensive coding by keyboard. Rather, it would be done easily by mouse, viz., "drag+drop". Such an environment also features remote access to diverse types of resources based on utility-supplied cloud computing, consisting well with the potential trend of e-Science [39] (that is, scientific research in many disciplines is carried out via the internet). Furthermore, when building such an environment, attention should be paid to setting up a system of standards for data exchanges among different software tools [40], which can in return pave the way for realizing the full potential of lightweight programming environment. Open bioinformatics clouds Incentivized by the potential big profits to be made on a pay-as-you-go basis, there are multiple cloud providers and it is predicted that in the following years there will be more providers, building industrial or academic, private or public clouds. Currently, the largest provider is Amazon, which provides commercial clouds for processing big data. Additionally, Google also provides a cloud platform (https://cloud.google.com) that allows users to develop and host web applications, and store and analyze data. However, commercial clouds are currently not yet able to provide ample data and software for bioinformatics analysis. Moreover, it is very difficult for commercial clouds to keep pace with the emerging needs from academic research, consequently requiring specific clouds for bioinformatics studies. It goes without saying that open access and public availability of data and software are of great significance to science [41]. When data and software are all in the cloud, keeping the cloud open and publicly available to the scientific community is essential for bioinformatics research [42]. Therefore, it is most likely that future efforts should be devoted to building open bioinformatics clouds and providing public access to the scientific community. The potential resulting benefits of such bioinformatics clouds include easing large-scale data integration, enabling repeatable and reproducible analyses, maximizing the scope for sharing, and harnessing collective intelligence for knowledge discovery. With the presence of numerous bioinformatics clouds, interoperability and standardization between clouds will become important issues [43,44]. 
Conclusions Here we reviewed extant cloud-based resources in bioinformatics and classified them into DaaS, SaaS, PaaS, and IaaS. Since cloud computing bears great promise in effectively addressing big data storage and analysis, future efforts in building bioinformatics clouds involve developing a large variety of services from data storage, data acquisition, to data analysis, accordingly making utility-supplied cloud computing delivered over the Internet. In the era of big data, bioinformatics clouds should integrate both data and software tools, equip with high-speed transfer technologies and other related technologies in aid of big data transfer, provide a lightweight programming environment to help people develop customized pipelines for data analysis, and most important, be open and publicly accessible to the whole scientific community. Abbreviations DaaS: Data as a service; SaaS: Software as a service; PaaS: Platform as a service; IaaS: Infrastructure as a service; API: Application programming interface; AWS: Amazon web service; VM: Virtual machine. Competing interests The authors declare that they have no competing interests. Authors' contributions LD drafted the manuscript. XG, JX, and YG revised the manuscript critically for important intellectual content. ZZ conceived of the study, supervised the project, and revised the manuscript. All authors read and approved the final manuscript. Reviewers' comments Reviewer 1: Dr. Frank Eisenhaber (Bioinformatic Institute, Singapore) The report by Dai et al. is a timely, well structured review of the opportunities associated with cloud computing in the bioinformatics domain and provides insight both into the existing state of the art and near-future possibilities. As such, it appears useful for the community. At the same time, nothing is said to which extent these development have already contributed to new biological insight. Do the authors have examples for this? Authors' response: Yes, Galaxy is a case in point; it is widely used for a wide range of bioinformatics analyses and cited by more than 300 times. There is the danger that the excitement with large data and with making it available shadows the actual reason why this data is recorded. Many of these so-called high impact, recently published OMICS papers are actually a boring reading with no new idea (even at the methodical level) and no new biology discovered. At the end, it is about biomolecular mechanisms that need to be known. Authors' response: We fully agree. To reflect this point more clearly, we expanded our description accordingly. Albeit currently we have high-throughput sequencers and large-scale data, biology is likened to the 16 th century physics in its state of development, lacking fundamental rules and theories, and many theoretical problems in biology are still under the surface. To address theoretical problems and uncover fundamental theories in biology, it often involves hypotheses formulation, experiment design, data generation/collection, tool development (for data analysis), and knowledge formalization, which can be helpful to recognize what data and tools should be placed into the cloud. The large data stream today is, to a great extent, due to the infantile methodology of measuring sequences and expression profiles. The TBs and PBs of sequencing data can be condensed to GBs with post-experimental processing and it are these GBs that are the actual object of scientific analysis. 
The human genome sequence per individual is just a few GB and, most likely and hopefully:-), it will not grow much in the future. Similarly, thousands of individual genomes might be stored as variations of a reference genome and this will, most likely, cost less than a GB per genome. Expression profiles are actually arrays of genomic locations and occurrence numbers and thus, are also in the GB range. Thus, the shift to big data science and the emphasis on new IT developments instead of doing the actual life science research might drift a considerable part of the community away into non-relevant efforts. To remember a historical example: There was a great public, large data effort with creating astronomic maps for solving the longitudinal problem in the 17th century Britain; yet, John Harrison solved it by constructing a clock. Authors' response: Thank you for your excellent points, which we fully agree. As an interdisciplinary field, bioinformatics is influenced by any advance in IT or biology. Take Web technology as an example, in retrospect, Web 1.0 was "readonly" (50K average band width, ABW), Web 2.0 is "read-write" (1M ABW), and Web 3.0 promises to have the attributes of "read-write-execute" with greater ABW [40], as cloud computing delivers all kinds of computer resources via the internet which to some extent aids the furtherance of Web 3.0. The rapid advancement of information technology (e.g., Web technology) brings a lot to bioinformatics, including data analysis, data storage, and data sharing. Thus, it is likely inevitable that there will be a shift to "big" data science based on utilitysupplied cloud computing, which is consistent with the potential trend of e-Science [39] (that is, scientific research in many disciplines is carried out via the internet). At the same time, we completely agree that "big" data to some degree result from lacks of fundamental theories; these theories can guide us in developing methods and formulating hypotheses and then know what data should be stored. Our perspectives are that in the future more and more computer scientists will help biologists build a light-weighted programming environment for biological data analysis and collaborate with biologists to conduct actual life science research. We clarified the corresponding description and added a related citation (Ref [39]) in the revised version. Reviewer 2: Dr. Igor Zhulin (University of Tennessee, United States of America) The review summarizes advantages of using cloud computing for "big data" storage and analysis issues in bioinformatics. In general, it does a fair job on this front. However, disadvantages of clouds are not discussed in this review at all. For example, time-critical calculations, complex tasks that require data management (load balancing, fault tolerance issues, etc.) will not do well on clouds that lack the edge of advanced HPC architectures. Authors' response: Thanks for your valuable comments. We accepted your comments and added some description in the main text. Hadoop (http:// hadoop.apache.org) features two key modules-MapReduce and Hadoop Distributed File System (HDFS). MapReduce divides a computational program into many small sub-problems and distributes them on multiple computer nodes, and HDFS provides a distributed file system that stores data on these nodes. Hadoop and its associated software are designed to handle load balancing among multiple nodes and to detect node failures that can be automatically re-executed on any node. 
Therefore, Hadoop is capable of performing time-critical calculations by distributing tasks and large datasets over multiple computer nodes, supporting big data scaling, and enabling faulttolerant parallelized analysis. Reviewer 3: Prof. Sandor Pongor (International Centre for Genetic Engineering and Biotechnology, Italy) I am not an expert of cloud computing but I am am very curious to see the potentials of this technology for bioinformatics. I found the paper clearly written, concise and well structured -it gives good introduction to this field, that readers of this journal will appreciate in my opinion. What I kind of miss is the user's perspective. When shall a student or a user of bioinformatics think about a cloud solution? Where to get introductory texts? On a more practical side: Teaching the bioinformatics of big data (high throughput) seems to be easy by putting a virtual server on the cloud . . . Where are the could based courses? These are simple questions and the authors may want to dedicate a little space to dealing with them. Authors' response: Thank you for your thoughtful and excellent comments. "When shall a student or a user of bioinformatics think about a cloud solution", is highly dependent on the user' s need and the computer resources the user has. Taking SaaS as an example, if a computational task is very time-consuming and also this task can be divided into many small sub-tasks, it might be better to put this task into the cloud and to run it as SaaS. The cloud just mentioned can be offered by a cloud provider (e.g., Amazon), or be in-house, self-made which makes full use of multiple computers available in one lab/institution. The relevant introductory texts are available at http://wiki.apache.org/hadoop, which provides general information and tutorials on how to build a Hadoop-based cloud and includes several practical examples. There is an online course-"Cloud Application Development" at http://my.ss.sysu.edu.cn/cloud/, which also contains other related resources and references. In addition, Galaxy (https:// main.g2.bx.psu.edu) is capable of performing interactive data analyses, which can serve as an E-learning platform for students.
2015-03-27T18:11:09.000Z
2012-11-28T00:00:00.000
{ "year": 2012, "sha1": "f0cb15fe57fc7c0df374e09c2f415b4f45a46ee1", "oa_license": "CCBY", "oa_url": "https://biologydirect.biomedcentral.com/track/pdf/10.1186/1745-6150-7-43", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f0cb15fe57fc7c0df374e09c2f415b4f45a46ee1", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
267320073
pes2o/s2orc
v3-fos-license
Anomalous intralayer growth of epitaxial Si on Ag(111) The epitaxial growth of silicene has been the subject of many investigations, controversies and non-classical results. In particular, the initially promising deposition of Si on a metallic substrate such as Ag(111) has revealed unexpected growth modes where Si is inserted at the beginning of the growth in the first atomic plane of the substrate. In order to rationalize this anomalous growth mode, we develop an out-of-equilibrium description of a lattice-based epitaxial growth model, whose growth dynamics are analyzed via kinetic Monte-Carlo simulations. This model incorporates several effects revealed by the experiments such as the intermixing between Si and Ag, and surface effects. It is parametrized thanks to an approach in which we show that relatively precise estimates of energy barriers can be deduced by meticulous analysis of atomic microscopy images. This analysis enables us to reproduce both qualitatively and quantitatively the anomalous growth patterns of Si on Ag(111). We show that the dynamics results in two modes, a classical sub-monolayer growth mode at low temperature, and an inserted growth mode at higher temperatures, where the deposited Si atoms insert in the first layer of the substrate by replacing Ag atoms. Furthermore, we reproduce the non-standard Λ shape of the experimental plot of the island density as a function of temperature, with a shift in island density variation during the transition between the submonolayer and inserted growth modes. Kejian Wang, Geoffroy Prévot & Jean-Noël Aqua
Since the isolation of graphene flakes thanks to exfoliation, 2D materials have aroused an ever-growing interest as different materials can form 2D layers (e.g.carbon, silicon, transition metal dichalcogenides...) [1][2][3][4][5][6] .They are expected to potentially surpass all previous technologies thanks to their outstanding electronic properties, and their integration in devices has attracted numerous studies.Graphene is probably the most studied material worldwide, due to its unique properties related to the Dirac-cone-shaped energy bands at the Fermi level and high carrier mobility.However, despite significant efforts, there has been no reproducible method to open up its bandgap while preserving high carrier mobility.2D Materials (2DM) based on group IV elements such as Si (silicene) and Ge (germanene) are promising alternatives 7 .They are predicted to be Dirac materials in which electrons behave as relativistic massless particles, and, more importantly, are better suited than graphene in the race for ultimate thickness scaling of nanoelectronic devices.Obtaining 2DM by the common exfoliation technique has intrinsic limitations, notably on the size and quality of the 2D crystals thus obtained, and is not appropriate for silicene or germanene that do not exist in nature in allotropes of Si or Ge.The remaining alternative is the production of 2DM by epitaxy that is a technological lock for any potential industrial application of these systems.A huge experimental effort has been dedicated to the epitaxial growth of silicene but is still ongoing as no reliable, high-quality and large-scale method is available.Some results are controversial, others are not conclusive.Epitaxial growth of silicene was first reported on metallic substrates.The deposition of Si on Ir(111) led to a layer of Si atoms strongly bonded and hybridized with substrate atoms 8,9 .The deposition on Ru(0001) led again to a 2D film but with no Dirac cones due to charge transfert with the substrate atoms 10 .The deposition on Ag(111) and (110) led to a long-standing controversy as linearly dispersing bands at the Fermi level were first reported as being Dirac cones, but turned out to result from the strong coupling between Si and Ag atoms [11][12][13][14][15][16][17][18][19][20] .In any case, the understanding and control of the epitaxial growth of 2D materials is largely insufficient, and today's progress is limited by the lack of wafer-scale uniform growth that requires further investigation of dynamical mechanisms. 
To progress in this direction, we develop numerical simulations of the epitaxial growth by considering a concrete example that has given rise to many experiments, the growth of 2D silicium on Ag(111).Experiments revealed the importance of the film/substrate interactions, but also exotic growth modes in some conditions, and especially in the early stages of growth in the submonolayer regime.Even though not miscible in bulk, Si and Ag were indeed found to significantly intermix at the film/substrate interface and even to lead to Si islands inserted in the substrate in some temperature range.Only a few theoretical works have been devoted to the study of the epitaxial growth of 2DM, 21,22 , and even less on silicene or germanene.First principles-calculations have revealed the stability of structures 17 , but the dynamical description of these systems is sparse due to major difficulties.The challenge is to simulate out-of-equilibrium systems of sufficient size (typically of the order of a hundred nanometers) yet incorporating atomic details, over sufficiently long times (typically of the order of a minute or more) yet describing atomic events (diffusion, incorporation, exchange...).These space and time scales are unreachable with usual ab-initio or molecular dynamics tools, but kinetic Monte-Carlo simulations of lattice models allow us to do the splits between these scales thanks to a probabilistic description of microscopic transitions.We thence develop here both a new modelization of the growth dynamics and extensive simulations in order to rationalize the anomalous growth modes of Si on Ag(111).We show that both intermixing and layer-dependent energetics can rationalize both the growth modes and its statistical properties. Experimental results We briefly recall the highlights of the anomalous growth of Si on Ag(111) described in Ref. 17 .Although Si and Ag are not miscible, the deposition of Si on Ag(111) revealed two temperature-dependent growth modes where the deposited Si atoms can be inserted into the Ag substrate, see Fig. 1.At low temperatures ( T = 200 K), a classical growth occurs where two-dimensional islands nucleate on the substrate surface and then grow during further deposition, as in the homoepitaxial sub-monolayer epitaxial growth.On the other hand, for higher temperatures ( T ≥ 300 K), Si atoms insert themselves into the first layer of the substrate and expel the Ag atoms which dif- fuse to accumulate at the step edges.For increasing Si deposition, this nucleation phase is followed by a growth phase where the islands expand in the first layer of the substrate, by continuously expelling Ag atoms.This island growth regime ends with a coalescence regime for which the island density decreases up to the completion of a complete two-dimensional Si layer on the substrate. The variation of the island density in the island growth regime with temperature is also abnormal, see Fig. 2. 
The density of islands strongly increases between T = 200 K and 300 K, by a factor of 10.Then, the island density decreases with temperature above 300 K, in the abnormal growth regime where the islands are inserted in the first layer of the substrate.This latter decrease is usual, and corresponds to the fact that with increasing temperature, the deposited adadoms can further diffuse before being captured by an already nucleated island, giving rise to larger and less dense islands (for a given amount of deposited material).Yet, the strong increase of the density between 200 and 300 K is abnormal, even if already observed in some heteroepitaxy on metallic substates, e.g.Pb/Cu(110) and Ni/Ag(111) 23,24 .It corresponds to the effects of intermixing and insertion of atoms in the substrate for temperatures high enough to pass the insertion barriers.As a whole, the unsual shape variation of the Arrhenius plot of the island density was attributed to the competition between the insertion of isolated Si atoms and the mobility of these atoms. Kinetic model We consider a lattice model suitable for performing kinetic Monte-Carlo simulations of epitaxial growth [25][26][27][28][29][30][31][32][33][34][35] .We consider a face-centered cubic lattice bonded by a (111) surface and assume a pseudomorphic growth, see Fig. 3. Every site can be either empty, or populated either by a Si or Ag atom.We assume that every atom is bonded only to its nearest neighbors.Yet, the atom binding energies generally depend on the atomic configuration, and may be layer-dependent and species-dependent, thus introducing surface and composition effects. The dynamics is defined on the basis of a Markovian random process characterized by characteristic frequencies.We consider as elementary processes diffusion, attachment, detachment, intermixing and the atomic deposition characterized by the flux F in monolayer (ML)/s, see Fig. 3. 
Thermal fluctuations are embedded in the vibrational frequency ν_0 = k_B T/h. Except for the atomic deposition, the elementary processes are characterized by a rate α ν_0 exp(−E/k_B T), where E is the energy barrier to be overcome for the process to occur 26,29,[36][37][38][39], and α is a prefactor that may depend on the species of the atom under consideration. We consider the energy barriers corresponding to the bond breaking of the nearest-neighbor interactions, thus satisfying detailed balance. Interactions may be (1) anisotropic depending on whether the bond is in-plane or out-of-plane, (2) species-dependent according to the Si or Ag atoms involved in the bond and (3) layer-dependent according to the height of the atom with respect to the surface. Hence, in the general case, we assume that the elementary energy barrier for both diffusion, attachment and detachment of a tested atom of species σ = {Si, Ag} reads

E = \sum_{\tau} \left( n^{\mathrm{in}}_{\tau} E^{\sigma,\mathrm{in}}_{\tau} + n^{\mathrm{out}}_{\tau,\downarrow} E^{\sigma,\mathrm{out}}_{\tau} + n^{\mathrm{out}}_{\tau,\uparrow} E^{\tau,\mathrm{out}}_{\sigma} \right). (1)

The sum runs over the species of the nearest-neighbors, either in-plane (in) or out-of-plane (out), while n^{in}_τ is the number of in-plane nearest-neighbors of species τ, and n^{out}_{τ,↑} (resp. n^{out}_{τ,↓}) is the number of out-of-plane nearest-neighbors of species τ above (resp. below) the tested atom. To avoid ambiguity, the barrier E^{σ,out}_τ always concerns a bond pointing downwards between an atom of species σ lying on top of an atom of species τ. In addition, we also introduce surface (or wetting) effects by considering that the energy barrier per bond E^{σ,in/out}_τ as well as the prefactors α_σ may depend on the height of the tested atom. For simplification, we account only for two sets of energy barriers, one for h ≤ 0 that characterizes atoms in the substrate (either Ag atoms or possibly inserted Si atoms), and one for h ≥ 1 that characterizes atoms on top of the substrate. This restriction is not limiting, since we will only be considering sub-monolayer growth, where in practice the maximum height is essentially h = 1. As regards intermixing between Si and Ag atoms, we consider that it is characterized by an energy barrier E^{abs}_{inter} for the absorption of a Si atom on top of the surface with a Ag atom beneath, and by E^{des}_{inter} for the possible desorption and interchange of an inserted Si atom below a Ag adatom on the surface. Considering all the above-mentioned possible processes for every atom, including deposition, we can build a ladder of rates and randomly select the elementary moves. We use in the following the rejection-free continuous-time Bortz-Kalos-Lebowitz algorithm 40.
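A minimal sketch of how such rates and the rejection-free selection could be implemented is given below; it illustrates the general scheme only, with simplified barrier bookkeeping (the height dependence of the barriers is omitted) and invented parameter values, not the actual code or parameter set of this study.

```python
import math
import random

KB = 8.617e-5   # Boltzmann constant in eV/K
H = 4.136e-15   # Planck constant in eV*s

def rate(barrier_eV, prefactor, T):
    """Arrhenius rate alpha * nu0 * exp(-E / kB T), with nu0 = kB*T/h."""
    nu0 = KB * T / H
    return prefactor * nu0 * math.exp(-barrier_eV / (KB * T))

def bond_barrier(sigma, neighbors, E_in, E_out):
    """Sum of bond-breaking barriers for a tested atom of species sigma.
    neighbors: list of (species, position) with position in {'in', 'up', 'down'}.
    E_in[(a, b)]: in-plane barrier per bond between species a and b.
    E_out[(top, bottom)]: out-of-plane barrier for an atom 'top' sitting on 'bottom'."""
    E = 0.0
    for tau, where in neighbors:
        if where == "in":
            E += E_in[(sigma, tau)]
        elif where == "down":       # neighbour below: sigma sits on top of tau
            E += E_out[(sigma, tau)]
        else:                       # neighbour above: tau sits on top of sigma
            E += E_out[(tau, sigma)]
    return E

def bkl_step(moves, T):
    """Rejection-free (BKL) step: pick move i with probability r_i / R, advance time by dt."""
    rates = [rate(m["barrier"], m["prefactor"], T) for m in moves]
    R = sum(rates)
    x = random.random() * R
    chosen, acc = moves[-1], 0.0
    for m, r in zip(moves, rates):
        acc += r
        if x <= acc:
            chosen = m
            break
    dt = -math.log(random.random()) / R    # exponential waiting time
    return chosen, dt

# Example: a Si adatom sitting in a threefold hollow site of the Ag surface (invented barriers).
E_in = {("Si", "Ag"): 0.25, ("Ag", "Si"): 0.25, ("Si", "Si"): 0.30, ("Ag", "Ag"): 0.35}
E_out = {("Si", "Ag"): 0.40, ("Ag", "Si"): 0.40, ("Si", "Si"): 0.45, ("Ag", "Ag"): 0.50}
barrier = bond_barrier("Si", [("Ag", "down")] * 3, E_in, E_out)
moves = [{"barrier": barrier, "prefactor": 1.0}]
print(bkl_step(moves, 300.0))
```

A full simulation would additionally track the fcc lattice geometry, rebuild the list of possible moves after each event, and include the deposition flux F as an extra channel in the rate ladder.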
Parametrization In order to parameterize this model, we have developed a new approach that allows us to estimate energy barriers with relative precision. Parameterization of this model is complex since it is a priori characterized by 11 parameters for each height h: 2 amplitude coefficients α for Si and Ag, 3 in-plane barriers (Si-Si, Ag-Ag, and Si-Ag or Ag-Si, which are equal), 4 downward-facing out-of-plane barriers (Si on Si, Si on Ag, Ag on Si, Ag on Ag), and 2 intermixing barriers (E^{abs/des}_{inter}). In addition, most of these coefficients have an impact on the morphological and statistical results of the model. In these conditions, reproducing experimental results qualitatively and quantitatively is a tour-de-force that we achieve thanks to the meticulous inspection of the morphologies revealed by atomic microscopy. Indeed, comparison of these images with the model results leads to relatively precise estimates of energy barriers (to within 0.01 eV typically), which form a set of values reflecting the system. We proceeded by intuiting a starting parameter set through the literature (both experimental estimates and ab-initio calculations) and comparison with the morphologies of Si deposits on Ag(111), and then, thanks to a back-and-forth approach, by fine-tuning the set of parameters to reproduce both qualitatively and quantitatively the experimental results. The first set of parameters allows the main qualitative aspects of the system to be reproduced by analyzing various morphological details of the system, see Supplementary Materials: convex vs dendritic vs facetted shapes of 2D islands, decoration of the step edges, 2D vs 3D shapes, voids in the first layer, intermixing in the islands, etc. If this first set of parameters allows us to obtain the qualitative characteristics of the growth of Si on Ag(111) (i.e. island shapes, step edge decoration, island insertion in the first layer only for T ≥ 300 K, etc.), it does not allow us to obtain the quantitative predictions of island densities as a function of temperature. In a second step, we therefore refined the values of the different energy barriers, in order to (1) keep all the qualitative features and (2) reproduce the variation of the island density with temperature. We were able to start from the low temperature T = 200 K, where the intermixing is negligible, in order to essentially refine the parameter α as well as the energy E^{Si,in}_{Si} (h ≥ 1). In particular, we have increased the energy barriers E^{Ag,in}_{Si} (h ≤ 0) and E^{Si,in}_{Si} in order to stabilize the Si atom insertions that form in the first layer of the substrate for T ≥ 300 K, and slightly refined E^{Si,out}_{Ag}. Eventually, we obtained the optimized parameter set given in Table 1. It is worth noting that these energy barriers are the crucial parameters of the out-of-equilibrium statistical physics on which this model is based, and of thermodynamics analysis 26,[36][37][38]41.
These values are here deduced from experiment rather than derived from atomistic approaches, such as calculations within the density-functional approximation. Ab-initio results available in the literature were incorporated as initial constraints in the search for a set of parameters consistent with experiments. Energy barrier calculations require the exploration of all diffusion pathways, not just local energy minima, and their reliability requires careful analysis [42][43][44]. While we do not have a full calculation of all the coefficients of the model, we argue that the process of deducing the parameters from the out-of-equilibrium analysis is necessary in order to find a set of barriers that rationalizes the experiments (Supplementary Information).
Discussion
We plot in Fig. 4 the morphologies resulting from the kinetic Monte-Carlo simulations of the model exposed above with the optimized parameter set. It reproduces qualitatively well the features of the experimental anomalous growth of Si on Ag(111), with: (1) two-dimensional islands of Si that sit on the substrate at low temperature, T = 200 K, with compact and irregular shapes, and a continuous decoration of the step edges; (2) islands that grow in the first layer of the substrate for T = 300 K and above, with a density much higher than that of the islands at low temperature; (3) a density of inserted islands that decreases with temperature; (4) an accumulation of Ag at the step edges above T = 300 K (see Fig. 5), which forms dendrites at 300 K and a continuous stripe at 400 K. In addition, the island densities (either sitting on the substrate for T = 200 K, or inserted for T ≥ 300 K) are shown in Fig. 2 (the results are obtained on a 400×400 system and averaged over 10 realizations of the dynamics with different random-number seeds; as in the experiments, we only counted islands larger than 1 nm × 1 nm).
Table 1. Optimized parameter set that reproduces both the qualitative morphological features of the Si/Ag(111) epitaxy (i.e. island shapes, step-edge decoration, island insertion in the first layer only for T ≥ 300 K, etc.) and the quantitative evolution of the island density as a function of temperature.
They reproduce quantitatively well the experimental densities, with a 10-fold increase in island density between 200 and 300 K, and then a decrease of the inserted island densities at higher temperature as the diffusion length increases and coarsening occurs. The agreement is almost perfect for T between 200 and 440 K, and remains approximate at very high temperature, for T ≥ 474 K. For these very high temperatures, the density of islands is small and finite-size effects enter the simulations. Moreover, the experimental measurement at high temperature carries a higher uncertainty, in particular with regard to its dependence on the step density. Yet, apart from these very high temperatures, the quantitative agreement between the experiments and the model can be considered very good and validates the parameters incorporated in the model. These results also validate the relevance of the effects incorporated in the model. We looked for a minimal model describing the basic and unavoidable mechanisms of epitaxial growth (deposition, diffusion, attachment-detachment), while adding the intermixing observed experimentally and simple surface effects (with two sets of parameters, for h ≤ 0 and h ≥ 1). It is noticeable that surface (or wetting) effects were found to be unavoidable to reproduce the growth modes qualitatively. Altogether, intermixing and wetting appear to be the keys to the anomalous growth patterns of Si on Ag(111). The energy barriers given in Table 1 thus reproduce these results well and are constrained by this statistical analysis and by the experimental results. The number of parameters leaves some latitude in the choice of their precise values, which must therefore be understood with some uncertainty. Nevertheless, as described in the parametrization of the problem, even small variations of 0.01 eV for some barriers (e.g. E^{Ag,in}_Si) or 0.05 eV for others (e.g. E^{Ag,in}_Ag) can lead to results qualitatively incompatible with the experiments, thus limiting the uncertainties. Finally, it is notable (see Fig. 2) that the simulation results show a linear dependence in the Arrhenius plot of the density as a function of inverse temperature at high temperature (T ≥ 400 K), with a typical activation energy of 0.53 ± 0.08 eV. This result cannot be compared with the experimental results because of the smaller number of experimental points.
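The activation energy quoted above is read off from the slope of an Arrhenius plot. The sketch below shows how such an effective activation energy can be extracted with a linear fit of ln(density) versus 1/(k_B T); the density values are placeholders chosen only to illustrate the procedure, not the simulated data.

```python
import numpy as np

K_B = 8.617e-5  # eV/K

# Hypothetical island densities (per lattice site) at the high-temperature points;
# these numbers are illustrative placeholders, not the values from the simulations.
temperatures = np.array([400.0, 440.0, 474.0, 500.0])        # K
densities = np.array([3.0e-4, 7.4e-5, 2.7e-5, 1.4e-5])       # islands per site

x = 1.0 / (K_B * temperatures)   # 1/kT in 1/eV
y = np.log(densities)

# In the standard nucleation-theory scaling the island density behaves as
# n ~ exp(+E_eff / kT), so the slope of ln(n) versus 1/(k_B T) gives E_eff directly.
slope, intercept = np.polyfit(x, y, 1)
print(f"effective activation energy ~ {slope:.2f} eV")
```

With real simulation output the same fit, restricted to the T ≥ 400 K points, yields the effective activation energy and its uncertainty from the scatter of the points about the line.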
Conclusion
The growth of Si on Ag(111) revealed experimentally an anomalous growth pattern in which islands of Si insert into the first layer of the substrate at high enough temperatures, even though Si and Ag are not miscible in the bulk. To rationalize this new growth mode, we derived a kinetic lattice model describing the main effects of epitaxy (deposition, diffusion, attachment-detachment) while incorporating intermixing and wetting effects. The growth dynamics of the model was computed by kinetic Monte-Carlo simulations. The parametrization of the model was derived using an approach combining meticulous analysis of atomic microscopy images and of statistical properties as a function of temperature, and allowed us to reproduce the experimental results both qualitatively and quantitatively, thus validating the main ingredients of the model at the origin of the growth mode. This first study lays the foundations for the study of the growth of thicker deposits, which have revealed possible islands buried under the substrate, as well as of Si or Ge on other metals such as Ag(100) or Au(111), which are the subject of current experimental investigations.
Figure 1. STM investigation of the growth of 0.3 ML of Si on Ag(111) at different temperatures, T = 198 K (a), 300 K (b) and 440 K (c). In (a), the darker zones correspond to the substrate and the lighter zones to the Si islands growing on top of the substrate. In (b) and (c), the darker zones stand for inserted Si atoms, while the lighter zones stand for the Ag substrate.
Figure 2. Island density as a function of temperature for a deposit of 0.1 ML of Si on Ag(111), for a given deposition flux of 0.1 ML/h, for T = 198, 300, 398, 440, 474, 478 K (experimental values of Ref. 17). The black points correspond to experimental values, while the orange points correspond to kinetic Monte-Carlo simulations (with the three extra temperatures 350, 460 and 500 K) with the optimized parameter set described in the following.
Figure 3. Geometry of the system: Si atoms (in blue) are deposited with a constant flux F onto a Ag(111) substrate (light grey atoms) with a step edge (dark grey). Deposited atoms first attach as adatoms (light blue), but may undergo intermixing and thence incorporate into the substrate (dark blue) with an energy barrier E^abs_inter. An inserted Si atom (dark blue) may also exchange with a Ag adatom (light grey), with an energy barrier E^des_inter, and become a Si adatom. The contribution to the energy barrier of a given atom of species σ due to its in- or out-of-plane neighboring atoms of species τ is E^{σ,in/out}_τ (by convention, out-of-plane links are always with the σ atom above the τ atom). The Si and Ag lattices are assumed pseudomorphic and follow the substrate fcc (111) lattice.
Figure 4. Simulations of the epitaxy of 0.1 ML Si on Ag(111) for T = 198 K (a), 300 K (b) and 440 K (c). The deposition flux is 0.1 ML/h. The atoms in grey are the Ag atoms, those in brown/cyan the Si atoms on the substrate, and those in dark blue the Si atoms inserted in the first layer of the substrate. The simulation box is 400×400 in lattice units on a honeycomb lattice. Colored insets show the experimental morphologies found under these conditions in Ref. 17, scaled up to the same size.
Figure 5. Vicinity of a step edge (left) in the simulation of the Si/Ag(111) epitaxy at T = 300 K under the conditions of Fig. 4, and (right) as found in the experiments of Ref. 17 at this temperature.
Ag atoms, expelled from the first layer of the substrate by the insertion of Si atoms into that layer, accumulate at the step edge and form dendrites at this temperature. The Si islands are inserted in the first layer of the substrate. These two characteristics are consistent with the experimental results. The experimental inset represents a large-scale scan in which the light blue dendritic shapes stand for Ag atoms accumulating at a step edge.
2024-01-31T06:17:07.470Z
2024-01-29T00:00:00.000
{ "year": 2024, "sha1": "5f50334bf8a19cf39b7cc537371c92b7f7e342e1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "aaca24ef7eebc4b7cbfb958e9474e39f03550c09", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
12279834
pes2o/s2orc
v3-fos-license
VSL#3 probiotic modifies mucosal microbial composition but does not reduce colitis-associated colorectal cancer
Although probiotics have shown success in preventing the development of experimental colitis-associated colorectal cancer (CRC), beneficial effects of interventional treatment are relatively unknown. Here we show that interventional treatment with VSL#3 probiotic alters the luminal and mucosally-adherent microbiota, but does not protect against inflammation or tumorigenesis in the azoxymethane (AOM)/Il10−/− mouse model of colitis-associated CRC. VSL#3 (10^9 CFU/animal/day) significantly enhanced tumor penetrance, multiplicity, histologic dysplasia scores, and adenocarcinoma invasion relative to VSL#3-untreated mice. Illumina 16S sequencing demonstrated that VSL#3 significantly decreased (16-fold) the abundance of a bacterial taxon assigned to genus Clostridium in the mucosally-adherent microbiota. Mediation analysis by linear models suggested that this taxon was a contributing factor to increased tumorigenesis in VSL#3-fed mice. We conclude that VSL#3 interventional therapy can alter microbial community composition and enhance tumorigenesis in the AOM/Il10−/− model. VSL#3 is a commercially available probiotic cocktail (Sigma-Tau Pharmaceuticals, Inc.) of eight strains of lactic acid-producing bacteria: Lactobacillus plantarum, Lactobacillus delbrueckii subsp. bulgaricus, Lactobacillus paracasei, Lactobacillus acidophilus, Bifidobacterium breve, Bifidobacterium longum, Bifidobacterium infantis, and Streptococcus salivarius subsp. thermophilus. Orally administered VSL#3 has shown success in ameliorating symptoms and reducing inflammation in human pouchitis 20,21 and ulcerative colitis 22,23. Preventive VSL#3 administration can also attenuate colitis in Il10−/− mice 24 and ileitis in SAMP1/YitFc mice 25. Nonetheless, several reports indicate no beneficial effects of VSL#3 on murine dextran sulfate sodium (DSS) colitis 26, established ileitis in TNFΔARE/+ mice 27, or human pouchitis 28. The mechanisms underlying the beneficial functions of VSL#3 are not well understood, but likely involve direct effects of probiotic activity on the mammalian host as well as indirect effects by altering the population dynamics of the gut microbiota as a whole. Indeed, pre-treatment with VSL#3 altered community composition, prevented DSS colitis, and diminished adenoma and adenocarcinoma formation in the AOM/DSS model 29,30. In the rat TNBS model of inflammation-associated CRC, early administration of VSL#3 altered microbial diversity and delayed the transition from inflammation to dysplasia 31,32. Thus it is clear that VSL#3 probiotic has the capability to prevent inflammation and carcinogenesis when used as a preventative strategy. However, the therapeutic potential of VSL#3 probiotic administered after the onset of inflammation remains unknown. The goal of the present study was to assess the ability of probiotic cocktail VSL#3 to alter the colonic microbiota and decrease inflammation-associated cancer when administered as interventional therapy. We report that therapeutic administration of VSL#3 probiotic did not protect against the development of colitis-associated CRC or inflammation in AOM/Il10−/− mice. In fact, we observed increased tumorigenesis in one of two cohorts of VSL#3-fed mice, relative to VSL#3-untreated mice.
Illumina sequencing of this microbiota revealed that the microbial community composition of the stool and mucosally-adherent compartments was significantly altered by VSL#3 administration. Depletion of a bacterial taxon assigned to genus Clostridium from the mucosally-adherent microbiota was identified as a mediator of VSL#3-enhanced tumorigenesis in this cohort. Together these findings demonstrate that therapeutic administration of VSL#3 does not protect against tumorigenesis in AOM/Il10−/− mice and may promote the loss of potentially protective microorganisms from the mucosally-adherent microbiota.
Results
VSL#3 does not protect against tumorigenesis in the AOM/Il10−/− model of colitis-associated CRC. To investigate the impact of probiotic intervention on development of colitis-associated CRC, we conventionalized germ-free (GF) Il10−/− mice with specific pathogen free (SPF) microbiota for seven weeks to establish intestinal inflammation. Conventionalizing mice born and raised in GF conditions negates the confounding factor of familial transmission of the microbiota. Tumorigenesis was then initiated with 6 weekly i.p. injections of AOM (10 mg/kg), at which point daily oral administration of VSL#3 (10^9 CFU/animal/day) was begun (Fig. 1). Based on previous reports showing that preventive administration of VSL#3 can reduce dysplasia and tumorigenesis in experimental models of inflammation-associated CRC 30,32, we hypothesized that interventional treatment would reduce tumorigenesis in the AOM/Il10−/− model. However, macroscopic assessment of tumor penetrance revealed no significant effect of VSL#3 on tumorigenesis, assessed seventeen weeks after the initial AOM injection (Fig. 2a). Microscopic scoring of neoplasia revealed high-grade dysplasia in 18/19 mice, with no significant difference in dysplasia score between VSL#3-treated and VSL#3-untreated animals (Fig. 2b). There was a potential trend towards greater depth of tumor invasion in VSL#3-treated animals that did not reach significance (Fig. 2c, chi-squared test p = 0.095). To gain a better insight into VSL#3 effects on colitis-associated CRC with respect to microbial composition, we repeated the experiment using a second cohort of Il10−/− mice. Seventeen weeks after the initiation of tumorigenesis with AOM, miniature endoscopy of live anesthetized mice revealed large macroscopic lesions in VSL#3-treated animals that were rarely seen in VSL#3-untreated animals (Fig. 3a). Macroscopic examination of harvested colon tissue revealed that VSL#3 treatment induced visible tumors in 91% of mice, a significantly greater proportion than the 38% of VSL#3-untreated mice that developed macroscopic lesions (Fig. 3b, c). Tumor multiplicity was also significantly enhanced by VSL#3, with an average of 1.64 macroscopic tumors per mouse vs. 0.33 tumors per mouse in VSL#3-untreated animals (Fig. 3d). Tumor size, however, remained unaffected by VSL#3 treatment (Fig. 3e).
Figure 1. Interventional VSL#3 administration. Germ-free Il10−/− mice were transferred to SPF conditions where they were colonized with SPF commensal bacteria for seven weeks. Tumorigenesis was then initiated with six weekly injections of AOM (10 mg/kg), and VSL#3 was orally administered (10^9 CFU/mouse/day) to mice daily throughout the remainder of the experiment. After twenty-four weeks, mice were sacrificed and tissues were harvested for assessment of microbiota, inflammation, and tumorigenesis.
Histologic analysis of colonic sections revealed a significantly enhanced dysplasia score in VSL#3-treated animals (Fig. 3f). As observed in the first cohort, the cancer in VSL#3-fed animals was highly penetrant, with nearly all animals exhibiting advanced neoplasia (Fig. 3g). The majority of the VSL#3-fed mice harbored invasive adenocarcinomas (79% of mice), in contrast to only 13% of VSL#3-untreated mice (Fig. 3h, i). These results indicate that interventional treatment with VSL#3 does not protect against, and has the ability to enhance, tumorigenesis and tumor invasion in the AOM/Il10−/− model of colitis-associated CRC.
VSL#3 treatment does not impact colonic inflammation in the AOM/Il10−/− model. Chronic inflammation has been recognized as an important risk factor for CRC. We next assessed histologic inflammation in the distal and proximal colon of our second cohort of AOM/Il10−/− mice treated with or without VSL#3. We observed no significant difference in inflammation scores (Fig. 4a, b), demonstrating that intervention with VSL#3 does not alter colonic inflammation. To gain greater insight into the inflammatory environment of the colon, we measured inflammatory cytokine mRNA in colon tissue biopsies using an Inflammation PCR Array. There were no significant differences in the expression of common IBD- and CRC-associated cytokines including Ifng, Il18, Tnf, Il1b, Il23a, and Il6 (Fig. 4c). Together these results suggest that VSL#3 enhances tumorigenesis without augmenting host inflammatory status.
VSL#3 alters community composition of both the luminal/stool and mucosally-adherent microbiota. Previous studies indicate that IBD and CRC patients have a dysbiotic microbiota 6,9,10,12,13. In addition, we have previously demonstrated a bloom of Enterobacteriaceae in AOM/Il10−/− mice 8. To determine the extent to which VSL#3 alters the microbiota, we utilized Illumina HiSeq2000 high-throughput sequencing. We extracted DNA from stool samples to assess the luminal microbiota and from colon tissue biopsies to assess the mucosally-adherent microbiota, and sequenced the hypervariable V6 region of the 16S rRNA gene 8. We sequenced the luminal microbiota of all animals, and one representative animal per cage for the mucosally-adherent microbiota. We found that VSL#3 treatment altered both the luminal (ANOSIM R = 0.381, P = 0.032) and mucosally-adherent (ANOSIM R = 0.406, P = 0.024) microbiota (Figure 5a). As expected, the composition of the luminal compartment differed from that of the mucosally-adherent microbiota (ANOSIM R = 0.466, P < 0.001). Diversity of the luminal and mucosally-adherent microbiota (Figure 5b and c) was not significantly altered by VSL#3 treatment, although there was a trend toward lower diversity in the luminal microbiota of VSL#3-fed animals.
VSL#3 alters the abundance of specific bacterial taxa of the luminal/stool and mucosally-adherent microbiota. To gain greater insight into the microbial changes elicited by VSL#3 administration, we used BLASTn 36 to map the operational taxonomic unit (OTU) consensus sequences to the Silva database (http://www.arb-silva.de/) and classified the reference sequences using the RDP classifier 37. In both the luminal/stool and mucosally-adherent compartments, VSL#3 administration altered the phylum-level distribution of the microbiota, with differences achieving significance at a false discovery rate (FDR) < 10% (Figure 6).
VSL#3-treated animals exhibited a significant contraction of Verrucomicrobia and expansion of Proteobacteria in the mucosally-adherent microbiota. Within the luminal/stool microbiota, the abundance of Bacteroidetes was significantly decreased by VSL#3 treatment while Proteobacteria remained unaffected. To determine which specific microbial groups may be altered by VSL#3 treatment, we assessed the family-level distribution of the luminal/stool and mucosally-adherent microbiota. We were able to classify ~99% of the reference sequences mapped to our OTUs to the family level. In both the luminal/stool and mucosally-adherent compartments, VSL#3 treatment induced a contraction of Porphyromonadaceae, and in the adherent compartment also a contraction of Verrucomicrobiaceae (Figure 7). These changes were significant at P < 0.05 but not at < 10% FDR, likely due to the large number of microbial families surveyed. Depletion, rather than expansion, of specific microbial groups in response to VSL#3 treatment suggests that a loss of protective microbes contributed to the significantly enhanced penetrance and severity of tumorigenesis in this model.
Depletion of a Clostridium bacterial group is associated with VSL#3 administration and tumorigenesis. To determine if microbial alterations in the mucosally-adherent microbiota mediate the effects of VSL#3 on tumorigenesis, we performed mediation analysis by linear models 40 (Figure 8a and Methods section). In this analysis, we first tested the effect of VSL#3 on dysplasia (model 1). In agreement with earlier results presented in Figure 2, we found that VSL#3 was significantly associated with dysplasia score (P < 0.0008). We next applied model 2 to test the association of the relative abundance of all OTUs of the mucosally-adherent microbiota with dysplasia. We found that only a single taxon of the mucosal niche, OTU#199, had a significant association with dysplasia score (Benjamini and Hochberg FDR corrected p = 0.0518). The consensus sequence of OTU#199 has an exact match to 38 distinct Clostridium taxa in the Silva database, and all of these full-length sequences were classified as Clostridiaceae I by RDP with a confidence score of 99% and Clostridium sensu stricto with a confidence score of 52%. We next applied model 3, which indicated that VSL#3 treatment was associated with a significant (~16-fold) decrease in the relative abundance of OTU#199 (FDR corrected P = 0.0525; Figure 8b). Although the relative abundance of one other OTU of the mucosally-adherent microbiota was reduced in VSL#3-treated mice (FDR corrected p = 0.00073), model 2 analysis revealed that this OTU (best classified as Blautia sp., accession GQ493997.1.1354) was not associated with dysplasia (FDR corrected p > 0.10). From these combined results, we concluded that depletion of a mucosally-adherent Clostridium group was a potential mediator of the effects of VSL#3 on dysplasia/tumorigenesis. To further evaluate this possibility, we applied model 4. We tested the effect of VSL#3 on dysplasia score without the influence of OTU#199, and found a high P-value (0.9399), which indicates that VSL#3 treatment is not directly associated with dysplasia. In contrast, when we tested the effect of OTU#199 on dysplasia score without the influence of VSL#3, we found a low P-value (FDR corrected P = 0.0692), supporting that the abundance of OTU#199 is associated with dysplasia.
Taken together, the combined results of this mediation analysis by linear models indicated that depletion of OTU#199, a bacterial group of the genus Clostridium, is a potential mediator of the effects of VSL#3 on dysplasia/tumorigenesis in our experiment. These results demonstrate that VSL#3 may enhance dysplasia/tumorigenesis through depletion of Clostridium bacteria (OTU#199) at the mucosal niche.
Discussion
Intake of probiotics as a means to maintain health and alleviate disease symptoms, especially gastrointestinal disorders, has gained tremendous popularity worldwide. Although some of the beneficial impacts of probiotics have been scientifically documented, the mechanisms of action and the impact on established disease remain unclear. Introduction of billions of bacteria into a confined ecosystem such as the intestine is likely to impact the microbial community. Earlier work employing molecular fingerprinting analyses, including terminal restriction fragment length polymorphism (TRFLP) and denaturing gradient gel electrophoresis (DGGE), has revealed that administration of VSL#3 probiotic can alter the microbial community composition of mice and rats 23,29,43,44. To better understand the impact of interventional VSL#3 administration on the microbial community during development of colitis-associated CRC, we utilized high-throughput Illumina 16S sequencing of the intestinal microbiota of AOM/Il10−/− mice. Interventional administration of VSL#3 altered the composition of the luminal and mucosally-adherent microbiota, with phylum-level changes in the luminal microbiota (decrease in Bacteroidetes) and in the mucosally-adherent microbiota (decrease in Verrucomicrobia and increase in Proteobacteria). Depletion of specific bacterial groups at the family level, with Verrucomicrobiaceae in the adherent compartment and Porphyromonadaceae in the luminal and adherent compartments, was observed in VSL#3-exposed mice. Although we detected a non-significant increase in the abundance of the VSL#3 bacterial groups Firmicutes and Lactobacillaceae, the major differences were detected in non-VSL#3 bacterial groups, indicating that administration of VSL#3 can exert pressure to induce broad changes in the microbial population of the intestine. These findings clearly demonstrate that VSL#3 administration markedly changed the composition of both the luminal and mucosally-associated microbiota. An unexpected consequence of interventional administration of VSL#3 probiotic was the lack of a protective effect on tumorigenesis and the propensity to enhance tumor invasion in AOM/Il10−/− mice. We have previously demonstrated that inflammation in Il10−/− mice alters the composition of the intestinal microbiota with expansion of Proteobacteria, and that this impacts the development of CRC 8. In the current study, VSL#3 was therapeutically administered after the onset of inflammation and in the context of a dysbiotic microbiota. In contrast to other studies where prophylactic VSL#3 administration ameliorated colitis and colitis-associated CRC 24,29,30,32,45, our results demonstrate that administration of VSL#3 beginning after the onset of inflammation and dysbiosis can enhance tumorigenesis. This was not associated with changes in inflammation or inflammatory cytokine profiles. Instead, enhanced tumorigenesis was associated with shifts in the composition of the intestinal microbiota in both the luminal/stool and mucosal niches.
The initial composition of the microbial community likely influences probiotic effects on the host, as we also observed that prophylactic administration of VSL#3 beginning 3 days after transfer of Il10−/− mice from germ-free to SPF housing did not ameliorate inflammation or early dysplasia (data not shown). In fact, we observed unexpected mortality in this treatment group, and terminated the experiment early at fourteen weeks post-transfer to SPF, six weeks after beginning AOM treatment (SPF vs. prophylactic VSL#3, log rank test p = 0.040). Altogether, these results suggest that probiotics may be ineffective and even detrimental to host health in the context of an immature or dysbiotic microbiota. It also appeared that administration of VSL#3 predominantly induced depletion of bacterial groups, suggesting that tumorigenesis was enhanced through the loss of protective commensal bacteria. To determine if specific bacterial groups (classified into OTUs of 97% similarity) of the mucosally-adherent microbiota mediated the effects of VSL#3 administration on tumorigenesis, we utilized a statistical approach termed mediation by linear models 40. This approach first revealed that, at an FDR-corrected cutoff of P < 0.1, only one OTU of the mucosally-adherent microbiota, belonging to genus Clostridium, was a potential mediator of VSL#3 effects on tumorigenesis in our model. Clostridium sensu stricto, the sub-genus to which RDP classified this OTU with 52% confidence, is the largest sub-genus of Clostridium and comprises approximately 73 species [46][47][48]. Several of these species matched the consensus sequence of OTU#199 at 100% identity in the Silva database. Clostridium species have recently been shown to provide protection by inducing IL-10-producing intestinal T-regulatory cells; however, this trait is mainly attributed to Clostridium clusters IV and XIVa 49. A major mechanism by which T-regulatory cells maintain homeostasis in the intestine is through the production of IL-10 50,51. This protective mechanism is not intact in Il10−/− mice, thus it is unlikely that depletion of IL-10-producing T-regulatory cells is responsible for enhanced tumorigenesis in VSL#3-treated AOM/Il10−/− mice. An IL-10-independent mechanism of protection in the intestine is consistent with early work in which colitis was prevented in Il10−/− mice by prophylactic administration of VSL#3, which was associated with an enhancement of epithelial barrier function 24. At this time, the impact of VSL#3 and of the depletion of Clostridium species on epithelial barrier function in our colitis-associated CRC model is unknown. In addition, future studies will be necessary to determine if VSL#3 induces the depletion of Clostridium species in other CRC models and in the absence of disease in healthy WT mice. Many Clostridium species can ferment dietary fiber and host mucins to produce short chain fatty acids such as butyrate. Butyrate serves as a primary energy source for healthy colonocytes and regulates gene expression, inflammation, differentiation, and apoptosis in the intestine [52][53][54]. Butyrate and butyrate-producing bacteria have anti-inflammatory and anti-carcinogenic activities [55][56][57][58], and butyrate-producing bacteria are depleted in human IBD and CRC 6,59,60. Thus depletion of Clostridium OTU#199 from the mucosally-adherent microbiota may reduce the production of butyrate and its anti-carcinogenic effects, resulting in enhanced tumorigenesis in AOM/Il10−/− mice.
Butyrate and other short chain fatty acids are notoriously difficult to measure in vivo 61; however, if this hypothesis is correct, restoring the availability of butyrate in the intestine via prebiotic and/or novel probiotic supplementation with Clostridium could reveal a novel means by which to protect against IBD and CRC. Future studies will be necessary to determine if supplementation with any of the numerous bacteria represented by OTU#199 can promote protection or prevent VSL#3-induced enhancement of tumorigenesis in the AOM/Il10−/− model. Together, our findings highlight several important concepts. First, probiotic bacteria may fail to protect against tumorigenesis in an environment with established dysbiosis. Earlier studies have revealed that VSL#3 probiotic can provide protection against colitis and colitis-associated CRC when VSL#3 is administered prior to the onset of inflammation 24,25,29,30,32,43,62. VSL#3 bacteria can mediate protection in part through induction of IL-10 63, which is absent in our system and also in a subset of human IBD patients [64][65][66]. Thus it is possible that IL-10-independent effects of VSL#3 administration are not sufficient to overcome the pro-inflammatory and pro-carcinogenic effects of established dysbiosis in the AOM/Il10−/− model. Second, probiotic bacteria can alter the composition of the luminal and mucosally-adherent microbiota and induce the depletion of endogenous beneficial microbes at the mucosal niche. In the current study, we observed a detrimental effect that was potentially mediated by the loss of Clostridium bacteria. Given the complexity of the microbiota, it is not surprising to find evidence that certain bacteria may protect against inflammation while others protect against the progression of neoplasia. Undoubtedly, greater accessibility of new 'omics technologies will allow for a better understanding of the tremendous metabolic capacity of the microbiota and how microbial activities can be modulated to promote health and protect against disease.
Methods
AOM/Il10−/− model and probiotic administration. Il10-deficient 129/SvEv mice were born and raised in germ-free (GF) isolators and then transferred directly to our specific pathogen free (SPF) facility (age 7-12 weeks, housed 2-4 mice per cage) for the immediate initiation of our colitis/colorectal cancer experiment (Fig. 1). In this model, mice naturally acquire the microbiota from their cage/room microenvironment. This method of conventionalization negates any confounding familial or maternal transmission of the microbiota. We have recently characterized the microbiota of this model 38. After 7 weeks of colonization with SPF bacteria, mice received 6 weekly i.p. injections of azoxymethane (AOM, 10 mg/kg). On the day of the first AOM injection, one group of mice was orally administered VSL#3 probiotic (Sigma-Tau Pharmaceuticals, Inc.), 10^9 CFU per animal each day during the week, with two days off on weekends, until the end of the experiment. The second group was not administered VSL#3 probiotic. After 17 weeks (24 weeks post-transfer from GF housing), miniature endoscopy was performed to visualize tumor development in live mice 31. Mice were sacrificed, stool and tissue were collected, and colons were examined macroscopically for tumors, then swiss-rolled and fixed in formalin for paraffin embedding and histology 31. Histology was scored for inflammation 33 and for dysplasia by an expert animal histopathologist.
Dysplasia scoring was as follows: 0 = no dysplasia; 1 = mild dysplasia characterized by aberrant crypt foci (ACF), +0.5 for multiples; 2 = gastrointestinal neoplasia (GIN), +0.5 for multiples; 3 = adenoma, non-invasive severe or high grade dysplasia restricted to the mucosa; 3.5 = adenocarcinoma, invasive through the muscularis mucosa; 4 = adenocarcinoma, fully invasive through the submucosa and into or through the muscularis propria. Two independent experiments were conducted and data from both experiments are presented and discussed. All animal protocols were approved by the Institutional Animal Care and Use Committee of the University of North Carolina at Chapel Hill.
DNA extraction. Stool samples and mucosal biopsies were collected to assess the luminal and mucosally-adherent microbiota, respectively. Colon biopsies (2 × 10 mm) were collected after flushing the colon with PBS. Samples were immediately stored at −80°C. DNA was extracted from 50-200 mg of stool or 100 mg of colon tissue as described 8.
Illumina V6 16S library construction. The V6 hypervariable region of the 16S rRNA gene was amplified using a two-step PCR strategy 8. A UCHIME 35 search using the Gold reference database was used to detect the presence of chimeras in our OTUs. UCHIME reported 6 (all), 3 (LS) and 2 (MA) chimeras, representing a negligible fraction of the total consensus sequences that were incorporated into OTUs. All reported chimeras were excluded from further analyses. To facilitate taxonomic classification and to compensate for the short read length of the generated OTUs, we used BLASTn (v. 2.2.26+ 36) with an expectation value threshold of e-5 to map the OTU sequences to the Silva database (release 108, http://www.arb-silva.de/). Next, we utilized the standalone version of the RDP classifier (v. 2.5 37) to classify the full-length Silva sequences with the best BLASTn match to the OTU sequence, requiring an RDP confidence score of ≥ 80%. Given the short read length of our Illumina V6 amplicon (~75 basepairs), we were unable to rigorously classify all OTUs, including #199, to species. In the Silva database, there were 38 full-length 16S sequences with an exact match to the consensus sequence of OTU #199. All were classified to genus Clostridium. RDP classified the consensus sequence for OTU#199 as Clostridiaceae I with 99% confidence and the sub-genus Clostridium sensu stricto with 52% confidence. Longer amplicons will be required to narrow this taxonomic assignment in future work. For microbiota analyses, OTU abundances were standardized to relative abundance, log-transformed, and analyzed with a Bray-Curtis similarity matrix using PRIMER v. 6 (PRIMERe, Inc). Analysis of Similarity (ANOSIM) was used to test for dissimilarity in community composition, and ordination plots were generated by multidimensional scaling (MDS). ANOSIM was nested on cage for the luminal microbiota; cage was not a factor for the mucosally-adherent microbiota as we assessed just one representative animal per cage. Before performing ANOSIM, we removed OTUs represented in < 25% of samples. Diversity was assessed by calculating the Shannon diversity index, Margalef richness index and Pielou evenness index. For calculation of P-values for diversity in the luminal microbiota, the mean from all mice within a cage was utilized as an observational replicate.
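For readers who want to reproduce the community-level measures in spirit, the sketch below computes relative abundances, Bray-Curtis dissimilarity, and the Shannon diversity index with NumPy. It is a simplified illustration with made-up counts, not the PRIMER v. 6 workflow used in the study (which also log-transforms abundances before building the similarity matrix).

```python
import numpy as np

# Hypothetical OTU count table: rows = samples, columns = OTUs (placeholder values).
counts = np.array([
    [120, 30, 0,  5],   # e.g. a VSL#3-treated sample
    [100, 10, 2, 40],   # e.g. an untreated sample
], dtype=float)

# Standardize to relative abundance per sample.
rel = counts / counts.sum(axis=1, keepdims=True)

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two relative-abundance vectors."""
    return np.abs(u - v).sum() / (u + v).sum()

def shannon(p):
    """Shannon diversity index H' = -sum p_i ln p_i (zero entries ignored)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print("Bray-Curtis:", round(bray_curtis(rel[0], rel[1]), 3))
print("Shannon (sample 1):", round(shannon(rel[0]), 3))
print("Shannon (sample 2):", round(shannon(rel[1]), 3))
```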
To compare the abundance of individual taxa between groups, we performed pairwise comparisons within the mucosally-adherent microbiota by t test, and in the luminal/stool microbiota by fitting a mixed-effect linear model of the form T = i + c1 V + c2 G(V) + e, where T represents the log normalized count; V represents treatment (VSL#3 or non-VSL#3); G represents the cage; i is the intercept; c1 and c2 are coefficients for the independent variables; and e is the residual. This model takes into account the fact that multiple cages are involved and could contribute to the variation detected between samples 38. VSL#3 was treated as a fixed effect, while cage nested within VSL#3, the term G(V) in the model, was considered a random effect. All P-values were corrected for multiple testing using the Benjamini and Hochberg procedure 39 ((n*p)/R, where n = number of taxa, p = test P-value, R = rank by raw P-value) at a false discovery rate (FDR) threshold of 10%. Before performing pairwise comparisons, we excluded low-abundance taxa that contributed < 1%. Baron & Kenny's four-step approach [40][41][42], the most widely used method to assess causal mediation, was used to evaluate each OTU in the mucosally-adherent microbiota for its potential mediation effect between VSL#3 and dysplasia. This analysis was performed through four linear models using SAS v. 9.2. Model 1 tests that VSL#3 is significantly associated with dysplasia score. Model 2 tests that the relative abundance of the OTU is significantly associated with dysplasia. Model 3 tests the VSL#3 effect on the relative abundance of the OTU (for OTUs found to have a significant effect on dysplasia in model 2). Model 4 tests whether the bacterial taxon (i.e. OTU#199) affects dysplasia without the influence of VSL#3, and whether VSL#3 affects dysplasia without the influence of OTU#199. Equations: Model 1, D = i1 + c1 V + e1; Model 2, D = i2 + c2 T + e2; Model 3, T = i3 + c3 V + e3; Model 4, D = i4 + c4 T + c5 V + e4. D represents dysplasia score; V represents treatment (VSL#3 or non-VSL#3); T represents the log-normalized relative abundance of a particular taxon (OTU); i1, i2, i3, i4 are intercepts; c1, c2, c3, c4, c5 are coefficients for the independent variables; e1, e2, e3, e4 are residuals. OTUs were considered as potential mediators if they met the three criteria proposed by Baron & Kenny: (i) the P-values from models 1-3 are significant, (ii) the P-value for the VSL#3 effect in model 4 is less significant than in model 1, and (iii) the P-value for the OTU relative abundance effect in model 4 remains significant. The Benjamini and Hochberg procedure was applied for multiple comparisons, and FDR-corrected P < 0.10 was considered significant. As all four models belong to one analysis, a single P < 0.10 cutoff was applied. Each taxon was tested separately for its mediation effect.
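As a rough illustration of the four-step mediation logic described above, the sketch below fits the four linear models with statsmodels on a small synthetic data frame. The variable names and the toy data are hypothetical, and this is not the SAS v. 9.2 code used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 24

# Toy data mirroring the notation in the text:
#   vsl       -> V  (treatment: 1 = VSL#3, 0 = untreated)
#   otu       -> T  (log-normalized relative abundance of one OTU)
#   dysplasia -> D  (dysplasia score)
vsl = rng.integers(0, 2, size=n)
otu = 2.0 - 1.5 * vsl + rng.normal(scale=0.5, size=n)        # treatment depletes the taxon
dysplasia = 3.0 - 0.6 * otu + rng.normal(scale=0.4, size=n)  # lower abundance, higher score
df = pd.DataFrame({"vsl": vsl, "otu": otu, "dysplasia": dysplasia})

m1 = smf.ols("dysplasia ~ vsl", data=df).fit()        # Model 1: V -> D
m2 = smf.ols("dysplasia ~ otu", data=df).fit()        # Model 2: T -> D
m3 = smf.ols("otu ~ vsl", data=df).fit()              # Model 3: V -> T
m4 = smf.ols("dysplasia ~ otu + vsl", data=df).fit()  # Model 4: T and V together

for name, model, term in [("1", m1, "vsl"), ("2", m2, "otu"), ("3", m3, "vsl"),
                          ("4 (otu)", m4, "otu"), ("4 (vsl)", m4, "vsl")]:
    print(f"model {name}: p = {model.pvalues[term]:.4g}")
# Baron & Kenny: mediation is suggested when models 1-3 are significant, the treatment
# effect weakens in model 4, and the taxon effect in model 4 remains significant.
```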
Inflammation PCR arrays. Inflammatory gene expression was evaluated using a SABiosciences array as described by us 8. Briefly, RNA was extracted from distal colon biopsies (4 per group) using phenol:chloroform, DNase treatment (Qiagen) and the RNeasy kit (Qiagen). 1 mg of RNA was transcribed to cDNA using the RT2 First Strand kit (Qiagen/SABiosciences). Expression of inflammatory mediators was assessed using the inflammation array PAMM-077 (Qiagen/SABiosciences). Data analysis was performed with the SABiosciences RT2 Profiler PCR Array Data Analysis software, version 3.4 (Qiagen/SABiosciences: http://pcrdataanalysis.sabiosciences.com/pcr/arrayanalysis.ph), using the ΔΔCt method after normalizing to five housekeeping genes. Data are presented as fold changes for four individual VSL#3-treated mice, relative to the mean of four SPF mice.
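To make the ΔΔCt normalization concrete, here is a small, self-contained sketch of the calculation. The gene and Ct values are invented for illustration; this is not the RT2 Profiler software output.

```python
import numpy as np

# Hypothetical raw Ct values for one target gene and five housekeeping genes,
# in one VSL#3-treated sample and the mean control (SPF) sample.
target_ct = {"treated": 24.1, "control": 25.0}
housekeeping_ct = {
    "treated": np.array([18.2, 19.0, 17.8, 18.5, 18.9]),
    "control": np.array([18.0, 19.1, 17.9, 18.4, 18.8]),
}

def delta_ct(group):
    """dCt = Ct(target) - mean Ct(housekeeping genes) for one group."""
    return target_ct[group] - housekeeping_ct[group].mean()

ddct = delta_ct("treated") - delta_ct("control")   # ddCt relative to control
fold_change = 2.0 ** (-ddct)                        # fold change = 2^(-ddCt)
print(f"ddCt = {ddct:.2f}, fold change = {fold_change:.2f}")
```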
2016-05-04T20:20:58.661Z
2013-10-08T00:00:00.000
{ "year": 2013, "sha1": "222ee603b868aea2ca00f2cdaeb1b0611e3af947", "oa_license": "CCBYNCND", "oa_url": "https://www.nature.com/articles/srep02868.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "222ee603b868aea2ca00f2cdaeb1b0611e3af947", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
18771864
pes2o/s2orc
v3-fos-license
Non-perturbative methods
The current theoretical understanding of processes involving many weakly interacting bosons in the Standard Model and in model theories is discussed. In particular, such processes are associated with the baryon and lepton number violation in the Standard Model. The most interesting domain, where the multiplicity of bosons is larger than the inverse of the small coupling constant, is beyond the scope of perturbation theory and requires a non-perturbative analysis.
Introduction
The vastness of the field of non-perturbative methods in high-energy physics inevitably compels me to focus on a specific topic among those which attract considerable interest and where a non-trivial development is likely in the near future. One such topic, which has also dominated the parallel session on non-perturbative methods, is related to multiboson phenomena in electroweak physics and, more generally, in models with weak coupling. These phenomena are interesting for two basic reasons. One is that it is with multiboson processes that the violation of the sum of the baryon (B) and the lepton (L) numbers in the Standard Model is associated [1]. Therefore such processes determine the evolution of (B+L) at high temperature in the early universe [2]. Also, as initially envisioned in early works [3,4] and indicated by specific calculations [5,6], the processes with (B+L) violation and production of many electroweak bosons might in principle be observable in high energy collisions in the multi-TeV energy range. The other, theoretical, reason is related to the old-standing problem of the factorial divergence of perturbation theory series, which dates back to the work of Dyson [7]. This problem looks to be a matter of purely theoretical concern as long as the quantities under discussion are such that they appear at low orders, like the anomalous magnetic moment. For such quantities the inability in principle to find the exact result by the perturbative expansion, though disappointing, does not prevent one from calculating the first few orders with an accuracy required by practical measurements or greater. However, the problem of the inherent divergence of perturbative series becomes quite acute as soon as one considers processes at energies such that a large number of interacting particles can be produced, i.e. processes which arise only starting from a high order of the perturbation theory, where the expansion becomes unreliable. At present there is a general understanding that multiparticle electroweak processes with many bosons both in the initial and in the final state (many → many scattering), including those with (B+L) violation, are not suppressed at high temperature and thus they indeed determine the (B+L) history of the universe. On the other hand, the understanding of the processes in which many bosons are produced by two or few initial particles with high energy (few → many scattering) is far from complete, and there are arguments both pro and con the idea that at sufficiently high multiplicity of final particles such processes can have an observable cross section. The contribution of the many → many scattering at high temperature is described within the WKB technique by expansion around non-trivial classical solutions of the field equations: sphalerons [8,9] and, more generally, periodic instantons [10]. It is also believed that the few → many scattering can be fully described by applying a WKB technique using special classical configurations of the field.
However, this is still a conjecture, and a specific method of a full WKB analysis of the latter scattering has yet to be developed.
(B+L) violating electroweak processes
As is known [1], the electroweak interaction in the Standard Model does not conserve the sum of the baryon and the lepton numbers as a result of the triangle anomaly. The amount of the (B+L) violation in a process is determined by the change of the winding number N_CS of the electroweak gauge fields:

Δ(B + L) = 6 ΔN_CS .    (1)

However, changing the winding number by one or several units requires the presence (at least in the intermediate state) of W and Z field configurations with energy of order m_W/α_W. This is usually illustrated by the sketch of the dependence of the minimal energy of the field with a given N_CS as a function of N_CS shown in figure 1; the field configuration corresponding to the top of the barrier is the so-called sphaleron [8], with energy E_Sp of about 10 TeV [9]. If the energy E available in a process is much less than E_Sp, then the only way in which (B+L) can be violated is through quantum tunneling, which at E = 0 is described by the instanton [11] solution to the Euclidean field equations, whose action is S_i = 2π/α_W. The amplitude of such a process then contains the WKB tunneling factor exp(−2π/α_W) ∼ 10^−80, which thus makes the process unobservable by any practical measure. The plot in figure 1, however, invites the suggestion that once the available energy is close to or larger than E_Sp the suppression of (B+L) violating processes should weaken or disappear altogether. The two relevant situations where large energy is available in individual processes are high temperatures and high-energy particle collisions.
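As a quick numerical check of the quoted suppression, the snippet below evaluates the vacuum tunneling factor exp(−2π/α_W) and, for comparison, a naive thermal Boltzmann factor exp(−E_Sp/T) at a fixed illustrative sphaleron energy. The value of α_W and the sample energies are assumptions made only for this estimate; in particular, the temperature dependence of E_Sp(T) discussed below is ignored here.

```python
import math

alpha_w = 1.0 / 30.0          # assumed weak coupling constant, alpha_W ~ 1/30
tunneling = math.exp(-2.0 * math.pi / alpha_w)
print(f"exp(-2*pi/alpha_W) ~ 10^{math.log10(tunneling):.0f}")   # of order 10^-80

# Thermal factor exp(-E_Sp/T) for an illustrative fixed sphaleron energy.
e_sp = 9.0e3  # GeV, illustrative value of order 10 TeV
for temperature in (100.0, 300.0, 1000.0):   # GeV
    exponent = -e_sp / temperature / math.log(10)
    print(f"T = {temperature:6.0f} GeV: exp(-E_Sp/T) ~ 10^{exponent:.0f}")
```

The point of the comparison is that the zero-temperature tunneling factor is hopelessly small, while the thermal factor becomes of order one only once E_Sp(T)/T is small, which is what happens near and above the electroweak phase transition.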
(B+L) violation at high temperature
As first realized by Kuzmin, Rubakov and Shaposhnikov [2], the rate with which the system traverses the sphaleron barrier in thermal equilibrium at a temperature T < E_Sp is determined by the Boltzmann factor exp(−E_Sp(T)/T) and may become unsuppressed at sufficiently high temperatures. The dependence of E_Sp on the temperature arises through the temperature dependence of the vacuum expectation value of the Higgs field, which sets the electroweak energy scale. In particular, the v.e.v. vanishes at the phase transition temperature T_c, above which the electroweak symmetry is restored. Thus at T > T_c the sphaleron barrier is absent and the (B+L) violating processes may go without an exponential suppression. As a result [10, 12-14] the rate of change of the (B+L) density is given by

d n_{B+L}/dt ∝ C_1 T^4 exp(−E_Sp(T)/T)  for T < T_c ,        d n_{B+L}/dt ∝ C_2 (α_W T)^4  for T > T_c ,    (2)

where C_1 and C_2 are constants. In particular, the prefactor C_1 for the 'low' temperature rate is determined by fluctuations of the fields near the sphaleron configuration. The initial calculations [15-17], which considered only the contribution of bosonic fluctuations, were most recently reanalyzed and extended [18] to include also the fermionic determinant. The contribution of the heavy top quark is found to significantly suppress the prefactor C_1, which makes it possible to somewhat relax the upper bound on the mass of the Higgs boson in the minimal Standard Model, following from the requirement [10,14,19] that the (B+L) violating processes in the early universe immediately after the electroweak phase transition (i.e. just below T_c) do not wash out completely the baryon asymmetry, independently of the mechanism by which it was created before or during the phase transition. Using their result for C_1 and m_t = 174 GeV, Diakonov et al. [18] find this upper bound to be m_H < 66 GeV, which is only slightly higher than the lower bound m_H > 58.4 GeV [20] from a direct search at LEP. Therefore an improvement in the experimental search can either find the Higgs boson or close the gap of compatibility of the minimal model with the observed baryon asymmetry of the universe. The shape of the sphaleron barrier in the presence of a heavy top and the evolution of the energy levels of a heavy fermion were considered in detail in the contributed papers [21] and [22] respectively. Multi-sphaleron configurations are considered in [23], and electroweak strings, viewed as "stretched sphalerons", in [24]. However, the role of the latter configurations in thermal equilibrium is yet to be clarified.
(B+L) violation in high-energy collisions
The sphaleron energy scale E_Sp is within (hopefully) reachable energies at prospective colliders. Therefore a most intriguing question arises as to whether the exponential suppression of the (B+L) violating processes vanishes at an energy of order E_Sp in a collision of two leptons or quarks. The difference between the high-temperature (B+L) violation and the processes induced by just two or few energetic particles is that in the former case the dominant contribution to the rate comes [13] from processes in the thermal bath, in which many soft particles with total energy E ≳ E_Sp scatter into a final state of also soft particles with different (B+L), while in the latter case a coupling between the hard initial particles and soft modes of the field with a non-trivial topology is required. Following the conjecture [13] that an enhancement of the cross section of (B+L) violating scattering may be associated with multiparticle final states, Ringwald [5] and Espinosa [6] pursued a calculation of a generic instanton-induced process of the type

f + f → 10 f̄ + n_W W + n_H H ,    (3)

where n_W (n_H) is the multiplicity of produced gauge (Higgs) bosons and f stands for a quark or a lepton. (The presence in the instanton-induced scattering of twelve fermions: nine quarks and three leptons, one from each electroweak doublet, is mandated by the anomaly condition in eq. (1), i.e. by the number of fermionic zero modes of an instanton.) The amplitude of the scattering (3) was found to depend factorially on the multiplicity of bosons, A ∼ n_W! n_H! exp(−S_i), which led to the argument [25] that the factorial enhancement may beat the exponential suppression at n_W,H > O(1/α_W), i.e. at energy larger than O(E_Sp). In fact the growth with energy of the total cross section for (B+L) violating processes, observed in the early calculations [5,6,25], suggested [25] that this cross section may become strong, i.e. reach its unitarity limit, at energies in the multi-TeV range. By quite general scaling arguments [26,27] the total cross section of instanton-induced scattering should obey the scaling behavior

σ_tot ∼ exp[ −(4π/α_W) F(ε) ] ,    ε = E/E_0 ,    (4)

where E_0 ∼ E_Sp ∼ m_W/α_W. The function F(ε) is often termed the "holy grail" function. At ε = 0 one has F(0) = 1, while the initial enhancement [5,6] of the cross section due to the opening of multi-boson channels corresponds [28,29] to the first non-trivial term in the expansion in ε:

F(ε) = 1 − (9/8) ε^{4/3} + O(ε^2) ,    E_0 = √6 π m_W/α_W ≈ 15 TeV .    (5)

The expansion in fact goes in powers of ε^{2/3}, and by now the two next terms, of relative order ε^2 and ε^{8/3}, are known [30-34]; these two terms are determined by the interaction between soft final particles. Starting from the term of order ε^{10/3}, the "holy grail" function also receives contributions from interactions between hard initial and soft final particles and from the interaction between the initial hard particles [35-37].
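The practical import of the scaling form (4) is easiest to see numerically. The snippet below evaluates the cross-section exponent using only the leading correction to F(ε) as written in eq. (5), which is trustworthy only for ε well below one; the values of α_W and E_0 are assumptions, and the output is an order-of-magnitude illustration rather than a prediction.

```python
import math

alpha_w = 1.0 / 30.0      # assumed weak coupling
e0_tev = 15.0             # assumed scale E_0 ~ sqrt(6) pi m_W / alpha_W

def holy_grail_leading(eps):
    """Leading-order expansion F(eps) ~ 1 - (9/8) eps^(4/3); valid only for small eps."""
    return 1.0 - 9.0 / 8.0 * eps ** (4.0 / 3.0)

for energy_tev in (1.0, 5.0, 10.0):
    eps = energy_tev / e0_tev
    f = holy_grail_leading(eps)
    exponent = -(4.0 * math.pi / alpha_w) * f      # log of sigma, up to prefactors
    print(f"E = {energy_tev:4.1f} TeV: eps = {eps:.2f}, F ~ {f:.2f}, "
          f"sigma ~ 10^{exponent / math.log(10):.0f}")
```

Even with the leading enhancement the suppression remains enormous at accessible energies; the interesting question, addressed next, is what happens when ε is no longer small and the truncated expansion cannot be trusted.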
Unfortunately, any finite number of terms in the expansion of F(ε) does not allow one to determine the behavior of the function at finite ε ∼ O(1). Therefore it is not yet known whether the function F(ε): i) goes to zero at finite ε (finite energy), ii) goes to zero as ε → ∞, or iii) is bounded from below by a positive value. Certainly, the most interesting possibility phenomenologically is the first one, since then the cross section with (B+L) violation and multiboson production becomes observably large at a finite energy, while the most discouraging would be the last case, since then the cross section would stay exponentially suppressed at all energies. Possibility iii was advocated [38-40] in terms of the so-called "premature unitarization" [40]. The argument is based on considering the interplay, in the s-channel unitarity, of the processes few → many and many → many. The former processes are argued to be still weak (exponentially suppressed) when the processes many → many are at the unitarity limit, which effectively shuts off the further growth of the few → many cross section. A somewhat simplified picture of this behavior is shown in figure 2, where the total cross section is represented as the imaginary part of a 2 → 2 forward scattering amplitude through an instanton (I) - antiinstanton (Ī) configuration. According to the model of "premature unitarization" [40], the total amplitude is given by a summation over instanton-antiinstanton chains iterated in the s-channel, i.e. chains in which all of the total energy flows through the additional (anti)instantons. Each additional I - Ī pair brings in the factor e^{−2S_i} B(E), where the "bond function" B(E) is the multi-boson enhancement of the one-instanton-induced cross section observed in [5,6,25]. The summation over the I - Ī chains (figure 2) gives a cross section of the form

σ(E) ∼ e^{−2S_i} B(E) / [ 1 + η e^{−S_i} B(E) ]^2 ,    (6)

where η = O(1) is a rescattering factor. If given by eq. (6), the cross section reaches its maximum when B(E) e^{−S_i} = O(1), and its value at the maximum is of order e^{−S_i}, which corresponds to the lower bound of 1/2 for the function F(ε). The presented reasoning is however oversimplified: it assumes that all the (anti)instantons in the chains have the same fixed size. Relaxing this assumption leads [40] to a lower bound for F(ε) which is generally different from 1/2. At still higher energies the formula (6) gives a falling cross section. However, this regime is unphysical: an initial particle can shake off energy by emitting one or a few hard bosons, so that the energy in the collision gets back to the one corresponding to the maximum. (Emission of hard bosons suppresses the cross section by a few powers of the coupling constant, while the gain in the non-perturbative amplitude is exponential.) If indeed the "holy grail" function has a minimum at some energy, this would imply that above that energy the process cannot be described by semiclassical methods, since the emission of hard quanta becomes essential. It turns out, however, that the "premature unitarization", and thus the simple picture of the s-channel iteration of instanton-antiinstanton correlations, is not mandatory and apparently depends on the specifics of the theory. The known examples of simplified models where the "holy grail" function is indeed bounded from below by 1/2 are the quantum mechanical problem with a double well potential [41,42] and the soft contribution to the scattering through a bounce [43,44] in a (1+1) dimensional model of one real field with a metastable vacuum [45].
(It has been pointed out [46,47] that in the latter model there is also a hard contribution to the bounce-induced scattering, for which the "holy grail" function goes to zero at the analog of the sphaleron energy.) Another example, where the "holy grail" function is bounded by a value smaller than 1/2, namely 0.160, is the problem of catalysis of false vacuum decay in (3+1) dimensions by a collision of two (or few) particles [48]. In this problem the semiclassical probability reaches its maximum at the top of the energy barrier.
Rubakov-Tinyakov approach. The main difficulty in developing a semiclassical approach to the few → many scattering is the presence of hard quanta in the initial state, which is thus not a semiclassical state. It has been suggested [49,50] to circumvent this difficulty by considering a scattering in which the finite small number of particles in the initial state is replaced by n_i(nitial) = ν/g^2, where g is the coupling constant in the theory and ν is a parameter. For a finite ν the initial state of this kind can be treated semiclassically, and in the end the limit of the probability at ν → 0, or ν → const/g^2, is to be considered in order to relate to the process few → many. Within such a setting the "holy grail" function depends on ν, F(ε, ν), and it is conjectured that its limit at ν → 0 is smooth, a conjecture supported by high-order perturbative calculations around the instanton [51].
Figure 3: Contour in the complex time plane and boundary conditions for a classical solution describing the scattering of a semiclassical initial state into multiparticle final states [52].
The central point of this approach is that the function F(ε, ν) is determined from a solution to a well-defined boundary value problem [52] for classical field equations, although in essentially complex time. The classical solution that describes the path of largest probability in a model with one real field φ evolves along the contour in the complex time plane shown in figure 3. At Re t → +∞ the solution is required to be real, so that its momentum components have the form φ_k(t) = g_k e^{−iω_k t} + g*_{−k} e^{iω_k t}, while at Re t → −∞ the positive frequency part is rescaled by the parameter e^θ: φ_k(t) = f_k e^{−iω_k t} + e^θ f*_{−k} e^{iω_k t}. The parameter θ in the boundary condition and the parameter T of the contour (cf. figure 3) are the Legendre conjugates of, respectively, the multiplicity n_i and the total energy E of the initial particles. Namely, if iS is the classical action on the whole contour (S being defined in such a way that it is real in Euclidean space), the energy E and the multiplicity n_i are obtained [52] as derivatives of S with respect to the contour parameter T and the boundary parameter θ. Furthermore, the "holy grail" function, entering the WKB estimate of the total cross section as σ_tot ∼ exp(−g^{−2} F(ε, ν)), is given by the Legendre transform of the action with respect to T and θ. Quite naturally, the formulated classical boundary value problem is not easily solvable, and a sufficiently good approximation to the solution is known only in a few models [53-55]. In particular the model considered in [54] describes one real scalar field in (1+1) dimensions with a potential consisting of a quadratic well supplemented by an exponential interaction term, controlled by two dimensionless constants v and λ, which are both assumed to be large. The parameter 1/v is the small coupling constant of the perturbation theory in the vacuum φ = 0.
Figure 4: The "holy grail" function normalized to F(0, ν) = 1 in the (1+1) dimensional model [54] with exponential interaction.
The negative sign of the interaction term implies that the energy is unbounded from below at large φ, thus the vacuum φ = 0 is metastable, and is separated from the decreasing part of the potential by a barrier located at φ ≈ v, provided that λ ≫ 1. Beyond the maximum the potential rapidly goes down, so that the potential essentially is a quadratic well with a "cliff" [54]. The metastability of the perturbative vacuum at φ = 0 does not show up in calculations of the scattering amplitudes to any finite order of the perturbation theory, and it only arises through a non-perturbative effect: unitary "shadow" from the false vacuum decay, which makes this contribution analogous to instanton-induced scattering amplitudes in a Yang-Mills theory [45,56]. The analog of the sphaleron energy is the height of the barrier separating two phases: E Sp = const·mv 2 . At large λ the potential (9) contains a sharp matching of the quadratic part (free field) and a steep exponential "cliff", which enables [54] to solve the boundary value problem in the leading order in 1/λ and also to clarify the contribution of multi-instanton (multibounce) configurations. It has been found [54] that the multi-instanton configurations in this model are still not important when the one-instanton contribution becomes large. As a result the "holy grail" function, as shown in figure 4, reaches zero at finite energy, which energy increases when the semiclassical parameter of the initial state multiplicity ν = n i v 2 decreases. In figure 4 is also shown the behavior corresponding to the periodic instanton, which maximizes over n i the rate of tunneling through the barrier in the processes n i → n f at given energy E [10]. Prospects for QCD hard processes. It has been argued [57 -60] that a manifestation of instanton-induced scattering in a weak coupling regime can be observed in hard processes in QCD. The suggestion is to search for final states in hadron collisions, which contain a large number of minijets, each with a typical invariant mass µ, such that α s (µ) is sufficiently small, e.g. µ ≈ 4 GeV, so that α s (µ) ≈ 0.25. An instanton-induced process should involve production of typically n j ≈ 4π/α s (µ) ≈ 50 such jets, which requires energy in a parton -parton collision of somewhat higher than n j µ ≈ 200 GeV. The prospects of observing the instanton induced hard processes in QCD are certainly more phenomenologically attractive, since, unlike the electroweak case, the energy range is hopefully within the reach of LHC and also the cross section can be of a more encouraging magnitude, even if it is suppressed by an exponential factor, like exp(−2π/α s (µ)) ∼ 10 −11 as suggested by the "premature unitarization" models. However the reality of observing these possible non-perturbative hard processes in QCD is still under discussion. 3 Multi-particle production in topologically trivial sector The growth of the rate of the instanton-induced processes is associated with production of multiboson final states until at high multiplicity n f ∼ 1/g 2 the final state becomes not tractable perturbatively. A similar problem in fact arises [61,62] at those high multiplicities in processes, which do not require contribution of field configurations with non-trivial topology, and thus are allowed in perturbation theory. This is related to the well known factorial growth of coefficients in the perturbation theory series [7]. 
Namely, in perturbation theory the total cross section for the production of n bosons interacting with a weak coupling g is given, modulo the phase-space suppression at finite energy, by
σ_n ∝ n! (g²)^n.  (10)
At small n, naturally, the cross section decreases with multiplicity. However, at n ∼ 1/g² the growth of n! becomes faster than the decrease of (g²)^n, and the behavior (10) would imply that the cross section starts to grow with multiplicity. Therefore the question "If there is enough energy to produce ≫ 1/α_W W, Z, H bosons, will they actually be produced with non-negligible cross section?" does not seem entirely paradoxical or idle in view of eq. (10). The difficulty in answering this question is that the lowest-order formula (10) becomes inapplicable already at n ∼ 1/g, since the loop corrections to σ_n are governed by the parameter n²g². The latter can be seen from the number of rescatterings between the final particles: O(n²), each of strength g². In what follows we will discuss several steps that have been attempted toward answering the above question. It may also well be that a solution of the multiboson problem without the topological complications will provide insight into the problem of (B+L) violation in high-energy collisions. One of the simplest models in which the development of non-perturbative dynamics in multiboson amplitudes can be studied is that of one real scalar field with the λφ⁴ interaction, whose potential is given by
V(φ) = (1/2) m² φ² + (1/4) λ φ⁴.  (11)
If m² is positive the field has one vacuum state with ⟨0|φ|0⟩ = 0 and the symmetry under sign reflection φ → −φ is unbroken, while at negative m² there are two degenerate vacua ⟨0|φ|0⟩ = ±v with v = |m|/√λ, a situation which describes spontaneous symmetry breaking (SSB). Multiboson amplitudes at zero energy and momentum. The simplest problem concerning multiboson amplitudes is, perhaps, that of calculating the connected n-boson off-shell scattering amplitudes A_n in which all the external particles have zero energy and momentum [63,64]. The amplitude A_n can be written in terms of the connected part of the Euclidean-space correlator of n fields σ(x) (eqs. 12-14), where σ(x) is the deviation of the field φ from its vacuum mean value, σ(x) = φ(x) − ⟨0|φ|0⟩; the large-n behavior of A_n is then controlled by the singularity, in the plane of a constant source j coupled to the field, of the corresponding vacuum energy E(j). In a theory with unbroken symmetry the quantum loops modify E(j) according to the Coleman-Weinberg potential [65], thus shifting and modifying the singularity in the j plane. However, these corrections neither eliminate the singularity nor bring it to j = 0. The shift of its position can be absorbed into the normalization of λ and m, while the modification of the type of the singularity only affects factors sub-leading in n, so that the leading behavior in eq. (14) is not modified by quantum corrections in a theory with unbroken symmetry. The situation with quantum effects in a theory with SSB is drastically different: non-perturbatively, the point j = 0 is in fact a branch point of the vacuum energy E(j) for either of the vacua. Indeed, if, for definiteness, one chooses to consider the amplitudes A_n in the 'left' vacuum ⟨0|φ|0⟩ = −v with v = |m|/√λ, and follows the dependence of its energy on j, one finds that this state is stable for real j < 0 and is metastable at arbitrarily small positive j. Thus at j > 0 the energy E(j) acquires an imaginary part given by the decay rate of the metastable vacuum. In this situation the Taylor expansion of E(j) is asymptotic, and the coefficients are determined by the decay rate in the presence of an infinitesimal positive source term.
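As an aside to the power counting of eq. (10), the short sketch below (illustrative only; the value of g² is an arbitrary assumption, not a number from the text) evaluates the lowest-order estimate and locates the multiplicity at which the factorial growth overtakes the coupling suppression; as expected, it sits near n ≈ 1/g².

```python
# Naive lowest-order estimate from eq. (10): sigma_n ~ n! * (g^2)^n, ignoring all
# phase-space factors. Purely illustrative of the turnover near n ~ 1/g^2.
import math

g2 = 0.1  # representative weak coupling g^2 (arbitrary illustrative value)

def log_sigma(n: int) -> float:
    # work with logarithms to avoid overflow: ln(n!) + n * ln(g^2)
    return math.lgamma(n + 1) + n * math.log(g2)

# multiplicity at which sigma_n stops decreasing and starts growing
n_turn = min(range(1, 200), key=log_sigma)
print(f"1/g^2 = {1 / g2:.0f}, turnover at n = {n_turn}")
```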
In this situation of a metastable vacuum, the calculation [66] of the false vacuum decay rate in the thin-wall approximation is applicable exactly. Thus one can readily find the exact non-perturbative asymptotic behavior of the amplitudes A_n at large n in a theory in d space-time dimensions [64] (eq. 15). The factorial behavior in eq. (15), if interpreted in terms of loop graphs in perturbation theory, corresponds to the contribution of graphs with n/(d − 1) loops. The off-shell amplitudes A_n considered here are not physical. However, one can draw from the described exercise at least two, possibly important, theoretical conclusions about multiboson amplitudes:
• the n! behavior suggested by the tree-level analysis is not necessarily eliminated, and may even be enhanced, in the exact result, and
• the large-n behavior of multiboson amplitudes does not have to be universal and may in fact be very sensitive to the details of the theory.
Production of on-shell multiparticle states at and above threshold. Tree graphs. More physical than the previously discussed off-shell amplitudes are the amplitudes of processes in which n on-shell bosons are produced by a highly virtual field φ (the 1 → n process): a_n = ⟨n|φ(0)|0⟩. (These can be related, e.g., to the reaction e⁺e⁻ → nH.) As will be explained, it turns out that one can explicitly find the sum of all tree graphs and all one-loop graphs for these amplitudes at any n, provided that the final bosons are exactly at rest in the c.m. system. The summation of two- and higher-loop graphs is also in principle possible for this kinematical arrangement; however, a calculation with a finite number of loops is inevitably plagued by the breakdown of perturbation theory at large n. Thus far three methods have been used in the calculation of the threshold amplitudes of the 1 → n processes: the Landau WKB method, recursion equations, and the functional technique. The Landau WKB method [67,68] is used in Quantum Mechanics for calculating transition matrix elements between strongly different levels. (For a field-theory derivation of this technique see [69].) In the tree graphs for the threshold 1 → n amplitudes all the external and internal lines carry no spatial momentum. Thus the problem is reduced to the dynamics of only one mode of the field, that with spatial momentum p = 0, i.e. to a quantum-mechanical problem. This approach yields the result [70] for the sum of the tree graphs at the threshold with accuracy up to terms O(1/n²) at large n. (Application of the Landau WKB technique to the problem of multiboson amplitudes is also discussed in [71,41,72,42].) Recursion equations [64] for the amplitudes a(n) arise from inspecting the construction of the Feynman graphs. For the simplest case of tree graphs in the λφ⁴ theory the algebraic form of the equations is given by eq. (16), where the sum runs over odd n₁ and n₂, with n odd as well, since due to the unbroken sign-reflection symmetry the parity of the number of particles is conserved. The mass m in eq. (16) is set to one, since it can be restored in the final result by dimensional counting. The solution to the equations (16) reads [64]
a(n) = n! (λ/8m²)^{(n−1)/2},  (17)
which can be found by applying the standard method of generating functions [73]. For the theory with SSB the recursion equations are modified by the presence of cubic vertices. The result for the amplitudes in the theory with SSB is [73]
a(n) = −n! (2v)^{1−n}.  (18)
The recursion method can be extended to other theories [74], as well as to loop graphs [75,76] and to an analysis of higher loops [77].
However, a more convenient method for further analysis is the one suggested by Brown [78], which is based on a functional technique. Before proceeding to discuss this method and its further applications, we report on estimates of the tree amplitudes above the threshold, and thus of the total probability of the processes 1 → n at a high energy E. Lower bound for the cross section at the tree level. The tree graphs for the processes 1 → n in a λφ⁴ theory all have the same sign [61,62]. The decrease of the amplitude above the threshold is thus determined by the increasing virtuality of the propagators in those graphs, which depends on the kinematics of the final state. One can therefore find a lower bound on the tree amplitudes above the threshold in a restricted part of the final-particle phase space [79-81], which gives a lower bound on the total probability of the process. In particular, if the kinematical restriction is chosen [79,80] as the condition that the c.m. energy of each individual particle in the final state does not exceed ω, then in this region of the phase space the tree amplitude A(1 → n) is larger than the threshold amplitude a(n) in which the physical mass M of the scalar boson is replaced by (9/8)ω in the theory without SSB (in which case M = m) and by (4/3)ω in the theory with SSB (where M = √2|m|). The cut-off energy ω is then optimized for each value of the total energy E and multiplicity n in order to find the largest lower bound on the total probability, which is given by the integral over the n-particle phase space τ_n. As a result, the lower bound on σ_n is found [80] in a scaling form (eq. 20), where ν = nM/E is the ratio of the multiplicity n to its maximal possible value E/M, ε = E/E₀ with E₀ an analog of the 'sphaleron' energy, E₀ = 4π²cM/λ, and the constant c in these formulas is c = 9 (c = 8/3) for the theory without (with) SSB. The calculated [80] behavior of the function f(ε, ν) is shown in figure 5, which thus illustrates and quantifies the interplay between the n! and the power of the small coupling constant discussed in connection with eq. (10). Figure 5: The function f(ε, ν) at three representative energies: no secondary maximum (ε = 1); the secondary maximum just developed (ε = 10); the secondary maximum becomes global and is just above the unitarity limit (ε = 15.5) [80]. The function f(ε, ν) displays a normal perturbative maximum at zero multiplicity. However, as the energy grows and the production of high-multiplicity states becomes kinematically unsuppressed, this function develops a second maximum, which at larger energies eventually crosses zero with an apparent violation of unitarity. It is interesting to notice that the kinematical suppression of multiparticle final states is quite essential and shifts the energy at which the tree graphs violate unitarity significantly higher than one would guess from a simple estimate E_crit ≈ 4πM/λ. If applied to multi-Higgs production in the Standard Model, the lower bound (20) breaks unitarity at E_crit ≈ 15.5 (32π²M_H/λ) ≈ 1000 GeV (200 GeV/M_H). It should also be mentioned that it is quite likely that the scaling behavior (20) at a given large multiplicity n also holds for the actual cross section, a point recently strongly emphasized in [82]. The function f(ε, ν) can thus be regarded as the "holy grail" function differential in n. Tree level.
A more convenient and conceptually more transparent technique for dealing with tree-level threshold multiboson amplitudes was suggested by Brown [78] and was later extended to the calculation of one-loop [83,84] and higher [85,86] quantum effects in these amplitudes. The technique is based on the standard reduction-formula representation of the amplitude (eq. 21) through the response of the system to an external source ρ(x), which enters through a term ρφ added to the Lagrangian; the tree-level amplitude is generated by the response in the classical approximation, i.e. by the classical solution φ₀(x) of the field equations in the presence of the source. For all the spatial momenta of the final particles equal to zero it is sufficient to consider the response to a spatially uniform time-dependent source ρ(t) = ρ₀(ω) e^{iωt} and take the on-mass-shell limit in eq. (21) by letting ω tend to m. The spatial integrals in eq. (21) then give the usual factors with the normalization spatial volume, which as usual is set to one, while the time dependence on one common frequency ω implies that the propagator factors and the functional derivatives enter in the combination z(t) = ρ₀(ω) e^{iωt}/(m² − ω²), which coincides with the response of the field to the external source in the absence of the interaction, i.e. at λ = 0. For a finite amplitude ρ₀ of the source the response z(t) is singular in the limit ω → m. The crucial observation of Brown [78] is that, since according to eq. (22) we need the response of the interacting field φ only as a function of z(t), one can take the limit ρ₀(ω) → 0 simultaneously with ω → m in such a way that z(t) remains finite. Furthermore, to find the classical solution φ₀(x) in this limit one does not have to go through this limiting procedure, but can instead consider directly the on-shell limit with vanishing source. The field equation with zero source in the λφ⁴ theory without SSB, for a spatially uniform field, is
d²φ/dt² + m²φ + λφ³ = 0.
For the purpose of calculating the matrix element in eq. (21) at the threshold one looks for a solution of this equation which depends only on time and contains only the positive-frequency part, with all harmonics being multiples of e^{imt}; this condition is equivalent to requiring that φ(t) → 0 as Im t → +∞. The solution satisfying these conditions reads [78]
φ₀(t) = z(t) / (1 − λ z(t)²/(8m²)),   z(t) = e^{imt}.  (25)
According to equations (22) and (21), the n-th derivative of this solution with respect to z gives the matrix element ⟨n|φ(0)|0⟩ at the threshold in the tree approximation, ⟨n|φ(0)|0⟩ = (dⁿφ₀/dzⁿ)|_{z=0}, which reproduces the result in eq. (17). For the theory with SSB the solution reads
φ₀(t) = −v (1 + z(t)/(2v)) / (1 − z(t)/(2v)),  (27)
which reproduces the tree amplitudes in eq. (18). In this case z(t) = e^{iMt}, where M = √2|m| is the mass of the physical scalar boson. One-loop level. To advance the calculation to the one-loop level one has to calculate the first quantum correction φ₁(t) to the classical background field φ₀. This amounts [83] to evaluating the tadpole graph of figure 6, where both the Green's function and the vertex are calculated in the external background field φ₀. The Green's function G(x; x′) satisfies an equation (eq. 28) in which the differential operator in Minkowski time explicitly contains the complex field φ₀ (cf. eq. (25) or eq. (27)). A straightforward rotation to Euclidean time, it → τ, is problematic, since the background field then develops a pole at a real τ. The acceptable solution is achieved by simultaneously rotating and shifting the time axis in eq. (28) in such a way that −λz(t)²/(8m²) → exp(2mτ) for the theory without SSB, and −z(t)/(2v) → exp(Mτ) for the theory with SSB.
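A minimal consistency check of the tree-level generating solutions (25) and (27), assuming the conventions quoted above (m = 1, SSB vacuum at −v): the sympy sketch below expands both solutions in powers of z and confirms that n! times the n-th Taylor coefficient reproduces the threshold amplitudes of eqs. (17) and (18). This confirms the tree-level inputs used in the one-loop analysis that follows.

```python
# Check that Taylor expansion of the generating solutions reproduces the threshold
# tree amplitudes a(n) of eqs. (17) and (18).
import sympy as sp

z, lam, v = sp.symbols('z lambda v', positive=True)

phi_unbroken = z / (1 - lam * z**2 / 8)                # eq. (25) with m = 1
phi_ssb = -v * (1 + z / (2 * v)) / (1 - z / (2 * v))   # eq. (27), vacuum at -v

for n in (1, 3, 5, 7):  # only odd n occur in the unbroken theory
    c_n = phi_unbroken.series(z, 0, n + 1).removeO().coeff(z, n)
    assert sp.simplify(sp.factorial(n) * c_n
                       - sp.factorial(n) * (lam / 8)**sp.Rational(n - 1, 2)) == 0

for n in (1, 2, 3, 4):
    c_n = phi_ssb.series(z, 0, n + 1).removeO().coeff(z, n)
    assert sp.simplify(sp.factorial(n) * c_n
                       + sp.factorial(n) * (2 * v)**(1 - n)) == 0

print("eqs. (17) and (18) reproduced")
```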
In terms of thus defined τ the background field has the form φ 0 (τ ) = i 2/λ m/ cosh(mτ ) (no SSB) and φ 0 (τ ) = v tanh(Mτ /2) (with SSB). In both cases the term φ 0 (t) 2 in equation (28) is real and non-singular. After applying the standard decomposition of the Green's function over the conserved in the background φ 0 (t) spatial momentum k: one arrives for the case of no SSB at the well-known in Quantum Mechanics equation with ω = √ k 2 + 1 and the mass m set to one. (For the theory with SSB one gets the same equation with a rescaled ω [84].) Thus the problem of finding the first quantum correction φ 1 (t) to the background field is completely solvable on the τ axis, and the solution can then be extended to the whole complex plane of t by analytical continuation. For the theory with no SSB the result [83] for the amplitudes a(n) at the one-loop level reads as where The analog of this result for the SSB case is [84] a(n) 0+1 = a(n) 0 1 + n(n − 1) Nullification of threshold amplitudes. The equations (31) and (33) display a remarkable feature: in spite of the presence of an intermediate state with two bosons in one-loop graphs, their contribution to the amplitudes in the case of SSB is real, while the factor F in eq.(32) is an easily recognizable threshold factor for the 2 → 4 process with no indication of presence of other thresholds. Using the unitarity relation for the imaginary part of the loop graphs, one immediately concludes that this can only be if the tree amplitudes of the on-shell processes 2 → n are all zero at the threshold for n > 4 in the theory without SSB [83,87] and for n > 2 in the theory with SSB [84]. This can be traced to the special properties of the reflectionless potential −6/(cosh τ ) 2 in equation (30) and generalized [88,76] to other theories, where the problem of the 2 → n scattering is reduced to finding the Green's function in the reflectionless potential −N(N + 1)/(cosh τ ) 2 with integer N. The known additional cases are the following: • Linear σ model (N = 1): the tree-level amplitudes of the scattering π π → n σ are all zero at the corresponding thresholds for n > 1 . • Fermions with Higgs-generated mass: if 2m f /m H = N (integer), then all tree-level amplitudes of f f → n H are zero at threshold for all n ≥ N. • Vector bosons with Higgs-generated mass: if 4m 2 V /m 2 H = N(N + 1), then all treelevel amplitudes with transversal vector bosons of V T V T → n H are zero at threshold for all n > N. For longitudinal vectors the same amplitudes of V L V L → n H are zero for n = N [89] and for n > N + 1 [76]. All these cases, except the one with longitudinal vectors, stem [88] from the generic interaction of two fields φ and χ of the form ξ 2 φ 2 χ 2 and the self-interaction of the field φ as described by the potential (11). Then if the ratio of the coupling constants satisfies the relation 2ξ/λ = N(N + 1) with N integer, the tree-level threshold amplitudes of the processes 2χ → n φ are all zero for n > N in a theory with SSB and for n > 2N in a theory with unbroken symmetry. This behavior somewhat resembles the nullification of inelastic amplitudes in the Sine-Gordon theory, where it is a consequence of a symmetry and is a deep property of the theory. In the theories considered here this is a much weaker property, which holds only at threshold and, generally, only at the tree level [90]. However the nullification in this case can be a consequence of a hidden symmetry, which holds at the classical level and/or has a more restricted scope. 
Thus far such symmetry has been revealed [91] only for the case of N = 1, where it can be traced to the symmetry of a system of two anharmonic oscillators, described by the potential If the frequencies ω 1 and ω 2 were equal, the model would have an O(2) symmetry, corresponding to conservation of the angular momentum Q =ẋy − xẏ. However even for ω 1 = ω 2 the symmetry persists [91] corresponding to conservation of the invariant . It should be also noted that if the ratio 2ξ/λ does not satisfy the above mentioned condition, the threshold amplitudes of the processes 2χ → n φ display [92] a 'normal' factorial growth with n. Non-perturbative analysis. The n 2 λ behavior of the loop corrections in the equations (31) and (33) convinces us that the perturbation theory is of little help in finding the amplitudes at large n and a true non-perturbative analysis is required. It turns out that to a certain extent such analysis can be performed for the λφ 4 theory with SSB. In terms of the variable τ the problem of calculating the threshold amplitudes a(n) = n|φ(0)|0 reduces [85] to a well defined Euclidean-space problem of calculating the quantum average Φ(τ ) of the field with the kink boundary conditions, i.e. φ → ±v as τ → ±∞. The average field then expands at τ → −∞ in the series c n e nM τ (36) and the threshold amplitudes are given by a(n) = n! c n /c n 1 , where the coefficient c 1 describes the one-particle state normalization: c 1 = 1|φ(0)|0 . Due to the fact that the classical kink solution provides the absolute minimum for the action under specified boundary conditions, the path integrals in eq.(35) are well defined (no negative modes) and thus the average field Φ(τ ) is real in any finite order of the perturbation theory. Thus the coefficients c n of the expansion (36) are real too. Therefore the amplitudes a(n) are real to any finite order in λ. As we have seen at the one loop level this implies the nullification of the tree amplitudes of 2 → n for n > 2. In higher loops this implies a relation between the amplitudes of the processes k → n with different k. The only exception is the particular case of n = 3, for which the imaginary part of the a(3) can be contributed only by the two-boson intermediate state. Since the imaginary part is vanishing, one concludes [85] that the 2 → 3 amplitude is vanishing at the threshold in all orders in λ. The function Φ(τ ) given by the expansion (36) is manifestly periodic: Φ(τ +2iπ/m) = Φ(τ ). Using this property and the boundary conditions at τ → ±∞ one finds [85] that the exact function Φ(τ ) necessarily has a singularity at a finite τ . Thus the expansion (36) has a finite radius of convergence, and thus the exact threshold amplitudes a(n) grow at least as fast as n!. In other words, the quantum effects do not eliminate the factorial growth of a(n). As is indicated by the n 2 λ parameter of the perturbation theory for the coefficients c n , the saddle point (SP) for the action S[φ], given by the x-independent 'domain wall' with the kink profile is not the correct SP for calculating the coefficients c n at large n. It has been argued [86] that the correct SP configuration is given by a spatially inhomogeneous field configurations in which the 'domain wall' is deflected towards negative τ by a maximal amount h 0 . Then at a large negative τ the coefficients c n are given by where A is the area of the domain wall with deflection, A 0 is the same for the undistorted flat wall, and µ = |m| 3 /3λ is the surface tension of the wall. 
Finding the extremum of the 'effective action' in the exponent in eq.(37) exactly corresponds to finding the equilibrium configuration of a (d − 1) dimensional membrane with a force equal to nM applied at the point of maximum deflection. In general this problem has no real solution (the film gets punched). However a solution exists, where a part of the trajectory of the domain wall resides in the Euclidean space, and a part is in the Minkowski space [86]. The Minkowski-space part of the trajectory corresponds to evolution of a bubble made of a domain wall and having energy E = n M. The amplitudes a(n) are then found as a sum of resonant contributions of the quantized levels of the bubble: where f (d) is a positive coefficient depending on the space-time dimension, and I(E) = p dr is the action of the bubble over one period of oscillation. Clearly the amplitudes a(n) in eq. (38) have poles at the values of E satisfying the Bohr-Sommerfeld condition I(E) = 2πN. The result in eq. (38) can be interpreted as that the growth of a(n) is due to a strong coupling of the bubble states to the multi-boson states with all particles being exactly at rest. However, it is known from the numerical studies of mid-70s [93 -95] that the lifetime of the bubbles is of order one in units of their period. Thus one should conclude [86] that there arises a non-perturbative form factor, which cuts off the integral over the phase space of the final bosons and makes the total probability of a moderate value, inspite of the extremely large value of the coupling to exactly static bosons. The total probability of the process 1 → n in this picture is given by the probability of creating a bubble with energy E by a virtual field φ. This can be estimated [86] by the Landau WKB method and is found to be exponentially small: where f (d) is the same as in eq. (38). Thus one concludes that in the theory with SSB the total cross section of non-perturbative multiparticle production is extremely likely to be exponentially small at high energy. However because of the usage of special properties of the theory with SSB it is not clear, whether this conclusion can be generalized to other theories, in particular, to the multi-boson production in the Standard Model. Conclusions. Problems The problem of multi-particle processes in theories with weak interaction is one of most challenging in the quantum field theory. In solving this problem we are most likely to find new methods of non-perturbative analysis of the field dynamics. As it stands now, there are mostly problems facing us, some of which are: • It is not clear, to what extent the exponential suppression of the (B+L) violation in particle collisions is lifted at high energy: by a factor 1/2 in the exponent, by a different factor, or completely. All these types of behavior are observed in simplified models. • The n! behavior of the amplitudes for production of n bosons survives the quantum effects, at least in some models. However this does not necessarily imply a catastrophic growth of the cross section. • Peculiar zeros are observed in threshold amplitudes of multi-boson production. However it is not clear, whether they signal some deep properties or this is a mere coincidence. • The classical field configurations give rise to multiparticle amplitudes. However their rôle in high-energy collisions is yet to be understood. This work and the author's participation in the conference are supported, in part, by the DOE grant DE-AC02-83ER40105. V. 
Kuvshinov, Minsk: It seems we have here new mechanisms for multiparticle production. For example, it can give contributions to the intermittency phenomenon. Is multiplicity important for B + L violation? Or is only n! important? M. Voloshin: The multiplicity is important through the n!, or, possibly, a stronger factor.
Relationships between electron density , height and subpeak ionospheric thickness in the night equatorial ionosphere The development and decay of the southern equatorial anomaly night-time peak in electron density as seen at a number of ionosonde reflection points extending from New Guinea and Indonesia into northern Australia was examined in terms of the characteristic rise and fall in height associated with the sunset ionisation-drift vortex at the magnetic equator. The observations relate to measurements made in November 1997. Following sunset, the ionospheric profile was observed to narrow as the maximum electron density increased during a fall in height that took the peak of the layer at Vanimo and Sumedang down to some 240 km. The fall was followed by a strong rise in which the electron density sub-peak profile expanded from a slab width (as given by POLAN) of 20 km to over 100 km with no corresponding change in peak electron density. The post-sunset equatorial fall in height and associated changes in profile density and thickness continued to be seen with diminishing amplitude and increasing local time delay in moving from the anomaly peak at Vanimo to the southernmost site of observation at Townsville. Secondary events on a lesser scale sometimes occurred later in the night and may provide evidence of the multiple vortices suggested by Kudeki and Bhattacharyya (1999). Doppler measurements of vertical velocity as seen at Sumedang in Java are compared with the observed changes in electron density profile in the post-sunset period. The normal post-sunset variation in ionospheric parameters was disrupted on the night of 7 November, the night before a negative ionospheric storm was observed. Introduction An upward vertical drift of ionisation characteristically occurs at equatorial latitudes around sunset at F2 heights in the ionosphere.A post-sunset reversal from upward to downward vertical drift follows later in the night (Fejer, 1991;Fejer et al., 1995).There is an accompanying lift and fall in F2 layer height associated with the changing drift pattern.At Southeast Asian longitudes, the phenomenon is usually most evident near the equinoxes.More recently, incoherent scatter radar measurements at Jicamarca (Kudeki and Bhattacharyya, 1999) and satellite measurements (Eccles et al., 1999) have produced a more detailed picture of changing plasma drifts with height through the sunset period and into the night.These measurements show that the upward and downward drifts at F2 heights pre-midnight are part of a larger vortex pattern centered at some 250 km in height and around 20 LT in time.As originally suggested by Tsunoda, 1981, the base of the vortex is formed by a westward drift in the E region which corresponds to an eastward drift at F2 heights.The existence of a vortex in E×B drift implies a divergent field structure with E field vectors pointing to a negatively charged core region (Kudeki and Bhattacharyya, 1999). 
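Since the drifts discussed above are E×B drifts, their magnitude follows directly from the ambient electric and magnetic fields. The sketch below is purely illustrative; both field values are assumed "typical" equatorial numbers rather than measurements reported in this study.

```python
# Illustrative E x B drift estimate for the equatorial F region. Both field values
# are assumed typical numbers, not measurements from this paper.
E_FIELD = 0.5e-3   # zonal electric field in V/m (assumed ~0.5 mV/m)
B_FIELD = 3.0e-5   # geomagnetic field near the magnetic equator in T (~30,000 nT)

# For E perpendicular to B the drift speed is |v| = E/B, independent of charge sign.
drift_speed = E_FIELD / B_FIELD
print(f"vertical E x B drift ~ {drift_speed:.1f} m/s")
# ~17 m/s, the same order as the post-sunset vertical drifts discussed in the text
```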
While the vertical component of drift associated with sunset has long been known to be the result of E×B forces, a full theoretical explanation has been more difficult to determine.The upward drift which produces the daytime equatorial anomaly is produced by eastward electric fields generated in the E region.The resurgence of the equatorial anomaly at night associated with an increase in vertical drift around sunset and its subsequent reversal is a more complex phenomenon involving both E and F region dynamos.Theories to explain the observed vortex motion are discussed by Eccles (1998).An empirical global equatorial model of F region vertical drift has been prepared by Scherliess and Fejer (1999) based on radar and satellite measurements.The dependence of post-sunset vertical drift magnitude on solar flux is discussed by Whalen (2004). In daytime, ionisation lifted by E×B forces at the magnetic equator diffuses down the field lines where it accumulates forming the equatorial anomaly which consists of twin peaks of electron density in the F2 region symmetrically disposed around the magnetic equator.Although the magnitude of the night-time equatorial anomaly still depends on the strength of the post-sunset vertical drift (Whalen, 2004), the build-up of the anomaly itself occurs mainly during the post-sunset reversal to downward drift associated with the eastern side of the sunset vortex.Balan and Bailey, 1995 refer to this as the reverse plasma fountain and illustrate the subsequent development of the night-time anomaly as given by the Sheffield University plasmasphere-ionosphere model. The post-sunset period is also marked by the development of equatorial bubbles produced by the Rayleigh-Taylor instability.These bubbles are often seen as Equatorial Spread F (EQSF).The occurrence of such bubbles reaches a maximum when the sunset terminator aligns with the Earth's magnetic field lines (Tsunoda, 1985).This alignment occurs near the equinoxes where the Earth's magnetic field declination is small as at Southeast Asian longitudes.Outstanding problems in the relationship between the post-sunset height rise and fall and the occurrence of equatorial spread F are discussed by Abdu (2001). This paper examines the relationships between foF2, virtual and true height, sub-peak slab thickness and Doppler shift in reflected radio signals in the post sunset ionosphere as seen by ionosondes near the peak of the night equatorial anomaly.The ionosondes were situated at various sites in and to the north of Australia.These latitudes cover an area where variations in observed ionospheric characteristics at any given time are largely associated with the development and magnitude of the day and night equatorial anomalies which in turn are driven by changes in ionisation drift at the magnetic equator. Observing sites Sites and paths of vertical and oblique ionosondes discussed in this paper are shown in Fig. 1.The vertical ionosondes at Vanimo and Townsville were operated by the Australian Ionospheric Prediction Service (IPS).Oblique ionosondes were operated by the Australian Defence Science and Technology Organisation (DSTO) between a transmitter at Vanimo and receivers at Darwin and Townsville.All other oblique path ionograms were obtained with DSTO equipment.A KEL IPS71 ionosonde at Sumedang (also called Tanjungsari) in Java was operated by the Indonesian National Institute for Aeronautics and Space (LAPAN).This ionosonde has the additional unique feature of being able to record high resolution Doppler ionograms. 
Latitudinal variation in foF2 In early November, 1997, unusually large surges in postsunset values of foF2 were observed at Vanimo.Data from other sites were sought in order to obtain some idea of the latitudinal and longitudinal extent of these extreme variations.Ionograms over the oblique paths were converted to equivalent verticals from which foF2 could be obtained. As seen in Fig. 2, post-sunset surges in foF2 were strongly present at Vanimo and over the oblique path Vanimo-Darwin.The peak values in foF2 at such times greatly exceeded foF2 values observed at any other time of day.Such surges, with diminishing magnitude, continued to be seen at reflection points further south.The surge on the night of 5 November is particularly notable for being clearly visible at all sites.While post-sunset surges in foF2 were also present at Sumedang during this time, variations in the magnitude of the post-sunset surges at Sumedang were uncorrelated with those at the eastern sites. It should be pointed out that while the post-sunset surges in foF2 seen at Vanimo and discussed here represent extreme examples, surges of lesser magnitude are normally present at The program POLAN (Titheridge, 1988) was used to convert virtual height to true height.Virtual and true height electron density profiles (expressed as a plasma frequency) are shown in Fig. 4 as obtained at Vanimo on the night of 4 November.Here a sharpening of the ionospheric profile is seen to be associated with an increase in maximum electron density as the ionosphere falls.An abrupt recovery to a broader profile with an increase in peak height occurs at the end of the fall in height.This post-reversal recovery in peak height and profile thickness was not associated with any particular change in foF2. There is a new fall in ionospheric height with an associated sharpening of the profile and subsequent recovery in the period 01-03 LT which will be discussed further in Sect.7.While much weaker than the previous surge, this lesser event exhibits the same characteristics in all parameters and is typical of such events occasionally observed to occur in the postmidnight period. A selection of maximum ionospheric electron density values (measured as a critical frequency) and base (h f) or true height (hmF2) values is given in Figs.5a-e as a function of time.Figures 5a, d and, e show the calculated results of POLAN for both the magnitude and variation in true height of the maximum electron density value as well as the equivalent slab thickness W of the sub-peak ionosphere.Here the slab thickness W uses the definition given by POLAN as the integrated sub-peak electron density divided by the peak electron density.As such it is a measure of the profile width between the base of the F2 layer and the layer peak. As seen at Vanimo in Fig. 
5a (4 November 1997), the fall in ionospheric peak height (hmF2) begins at 19:00 LT and continues over a 3.8-h period at a rate increasing from some 7 m/s to 14 m/s (12 m/s average). An abrupt recovery to a broader profile with an increase in peak height commences at 23:30 LT. This recovery lasted some 25 min with an average rise velocity of 67 m/s. The fall in height of the peak F2 electron density from 400 km to 240 km is accompanied by a reduction in sub-peak slab thickness W by a factor of three. A similar variation was seen at Sumedang on 1 November 1997, although here the maximum increase in electron density is seen to be less. Both Vanimo and Sumedang lie near the peak of the southern equatorial night-time anomaly. The fall in peak layer height at both Vanimo and Sumedang ended at a height around 240 km. At this time, the difference between base and peak F2 layer height at Vanimo was about 50 km. The maximum in foF2 values occurred as the layer peak fell below some 250-270 km. The subsequent reduction in foF2 can be attributed to the rapid increase in recombination rate at these low heights. Thus the exact time at which the foF2 values peak may depend on how rapidly the height falls. There was a lesser though similar variation of all three parameters, with a shift in occurrence to later in the night, on 4 November 1997 at the southernmost site of Townsville (Fig. 5d). True height profiles for the oblique path from Vanimo to Townsville and at Darwin were not available, but variations in maximum frequency and minimum time delay (Fig. 5b, 4 November 1997) and base F2 virtual height h′F (Fig. 5c, 5 November 1997) are included to show that there was a similar relationship between falling height and surging maximum frequency at these ionospheric reflection points. Latitudinal time variation A time shift is evident between the Vanimo and Townsville surge events of 4 November 1997 shown in Fig. 5. Verification of this feature as a consistent result was obtained by superimposing a number of base ionospheric height variations (h′F) measured at Vanimo, Darwin and Townsville over the same nights, with results shown in Fig. 6. Peak ionospheric base height at Vanimo is seen to occur within two hours of ground sunset. The subsequent fall in height ends just before local midnight. At Darwin, the peak height is reached some three to four hours after local ground sunset and the fall in height finishes just after local midnight. At the southernmost site of Townsville, the peak height is reached slightly later than at Darwin and the subsequent fall ends between one and two hours after local midnight. Doppler observations at Sumedang The KEL IPS71 ionosonde at Sumedang recorded Doppler ionograms at a 15-min rate. At each sweep frequency a Doppler spectrum was taken and the Doppler frequency with the maximum amplitude was recorded. These Doppler values were converted to equivalent virtual vertical velocity (v) values using the usual formula v = cν/(2f), where c is the speed of light, ν is the observed Doppler frequency shift from the transmitted frequency and f is the operating frequency. Plots of foF2 and virtual vertical velocity are given in Fig. 7 as measured on 1 November 1997. Vertical motion in this diagram is defined as positive for upward velocities and negative for downward. The colour code for all vertical velocities may be read from the bottom panel of Fig. 7. Each vertical line in all three panels represents the contents of one ionogram.
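To make the conversion concrete, the sketch below applies the v = cν/(2f) relation quoted above to an arbitrary illustrative Doppler shift and sounding frequency (neither taken from the paper), and also checks the quoted average fall rate of the F2 peak at Vanimo.

```python
# Illustrative conversion of an ionosonde Doppler shift to the equivalent virtual
# vertical speed, |v| = c*|nu|/(2f), plus a check of the quoted hmF2 fall rate.
C = 3.0e8  # speed of light in m/s

def vertical_speed(doppler_shift_hz: float, operating_freq_hz: float) -> float:
    # returns the speed (magnitude); the upward/downward sign convention of the
    # text is applied separately from the sign of the Doppler shift
    return C * abs(doppler_shift_hz) / (2.0 * operating_freq_hz)

print(f"0.4 Hz shift at 5 MHz -> {vertical_speed(0.4, 5.0e6):.1f} m/s")  # 12 m/s

# Average fall of the F2 peak at Vanimo: 400 km to 240 km over 3.8 h
fall_rate = (400e3 - 240e3) / (3.8 * 3600.0)
print(f"average hmF2 fall rate ~ {fall_rate:.1f} m/s")  # ~11.7 m/s, the quoted ~12 m/s
```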
The fall in peak layer height at Sumedang on the night of 1 November 1997 commenced at 20:00 LT and, as measured from Fig. 5e, averaged 12 m/s over a period of 3.5 h.The downward drift then reversed to give an upward velocity of 39 m/s averaged over the following 30 min.These values are consistent with the velocities shown in Fig. 7 as calculated from the concurrent Doppler frequency shift measurements. If the F2 layer moved uniformly as a whole, a constant velocity would be observed independent of frequency throughout the ionosonde sweep.However this rarely happens as is indicated in Fig. 7 by the spread in virtual velocity within each ionogram, seen most clearly in the bottom panel.The observed Doppler shift at each sounding frequency is an integrated function of ionisation change up to the reflection height (see discussion by Prabhakaran Nayar and Sreehari, 2004).In the present case, the variations in Doppler frequency and thus velocity appear to be dominated by changes in layer height.In Fig. 7, positive velocity values are associated with the sunset pre-reversal height rise and the recovery from the pre-midnight fall occurring after midnight.This pattern repeated on successive nights whenever the postsunset surge occurred.In the upper panel of Fig. 7, the frequency at which the peak velocity occurs is seen to move down from the critical frequency to the lowest observed frequency during the period of post-sunset height rise between 18 to 20 LT.During the subsequent fall in height over the period 21-22 LT, the greatest downward (negative) velocity was present at the highest frequency (peak of the layer). The lower panel of Fig. 7 shows a slight decrease in the downward velocity between 22-24 LT.This corresponds to a temporary reduction in the rate of fall of ionospheric base height at this time as seen in the middle panel of Fig. 7.The Doppler observations of Fig. 7 are consistent with those of the more limited Doppler measurement of Prabhakaran Nayar and Sreehari (2004) made on the magnetic equator at Trivandrum.The velocities observed during the post-sunset rise and subsequent fall and recovery in height were the highest values of velocity observed over the 24-h period except for those associated with sunrise. Post-midnight events In recent years much theoretical work has gone into explaining the post-sunset height rise and the following reversal.Until recently it has been taken for granted that these changes are necessarily and uniquely associated with sunset conditions at the magnetic equator.However Fig. 5a provides an example of a Vanimo event commencing and finishing after midnight which is identical except in scale to the changes associated with sunset on this day.For this secondary event, the average velocity of the fall in peak height over a 1.4-h period from 01:30 LT was some 15.5 m/s with a subsequent average rise velocity over 30 min of 64 m/s.are essentially the same as those seen in the preceding sunset associated rise and fall.Although the increase in foF2 accompanying the fall in ionospheric height and in ionospheric thickness is small in absolute terms with respect to the sunset event at Vanimo, it is still large in relative terms (about a 20% increase in foF2) because of the lower values of electron density at this time.Apart from its lesser magnitude, this event is identical to the much stronger event associated with sunset which in turn represents the normally expected build-up and decay of the southern peak of the equatorial anomaly. 
The post-midnight event on this night at Vanimo is also visible at Townsville (Fig. 5d) and, less clearly, on the path from Vanimo to Townsville (Fig. 5b).The time delay in the commencement of this event increased with latitude with the Townsville event occurring some 45 min after the event at Vanimo.The Vanimo-Townsville event occurred with an intermediate time delay consistent with its location.Such a strong correlation between all three measured parameters at all three sites over a distance of 2000 km in latitude would seem to require a major rise and fall in vertical ionisation drift at the southern peak of the equatorial anomaly with resultant delayed changes in distributed ionisation from the anomaly peak to its southernmost edge.Clearly such a possible correlation requires further investigation over a larger number of events. Examination of a large number of Range-Time plots of h F taken from low latitude ionosonde sites over a number of years indicate that such secondary phenomena are far from unusual.An example of true height data given by POLAN is shown in Fig. 8 as observed at Vanimo on the night of 24 February 1999.The post-sunset event follows the usual decrease in height, decrease in slabwidth, increase in foF2 and subsequent recovery over the period 20:00-24:00 LT, as described above.But this event is immediately followed by another which is identical in form though diminished in scale over the period 00:00-02:00 LT. Ionospheric storm effects 6-7 November 1997 The patterns of foF2 and virtual height variation shown in Fig. 4 were consistent up to the night of 7 November.This was the night preceding a major low latitude negative ionospheric storm observed during daylight hours on the following day of 8 November.This storm was one of several discussed by Lynn et al. (2004).The negative storms there described are attributed to chemical changes produced by auroral up-welling reaching the equatorial region as driven by equatorward winds. Observations of the sunset period over three days centred on the 7 November, 1997 at Vanimo are shown in Fig. 9.The normal post-sunset variations in h f and foF2 occurred at Vanimo on the 6th and 8th.However no increase and subsequent decrease in post-sunset virtual height occurred on the evening of 7 November.Instead, following sunset, a slow rise in height occurred over a period of several hours during which the values of foF2 declined steadily with no sign of post-sunset surging.This may well be the diurnal variation when the night anomaly is absent and the usual zonal drift has stopped or reversed as a precursor to the negative ionospheric storm effect.On the following negative storm day, the day-time anomaly also failed to develop indicating a failure of the normal electrojet and fountain effect.However the ionosphere showed a partial recovery to its normal post-sunset variation on the following night. Discussion This paper shows the relationship between the four parameters foF2, hmF2, sub-peak ionospheric slab thickness W and Doppler velocity during the build-up and decay of the night-time equatorial anomaly.Moreover the same relations in Fig. 
6 are seen to be exhibited with diminishing intensity and increasing time delay as the latitude of the ionosonde increases from the night equatorial anomaly peak (Vanimo and Sumedang) to higher latitudes (Darwin and Townsville).The post-sunset surges in peak ionospheric critical frequency and the associated fall in ionospheric height continue to play their part in modifying the night ionosphere at and near the equinoxes as far south as Northern Australia.These latitudes are near the southern limits of the area dominated by a strong night-time equatorial anomaly.The low-latitude vertical drift model of Scherliess and Fejer (1999) indicates that the post-sunset time variation in vertical drift velocity at the magnetic equator has little or no longitudinal dependence.Comparing their results with the observations of Fig. 3 shows that the fall in height at the lowest latitude site of Vanimo, commencing in the period 19:00-20:00 LT, corresponds well with a change to downward drift at the magnetic equator at this time as given by Scherliess and Fejer (1999). Arecibo in Puerto Rico is comparable to Townsville in terms of magnetic latitude and dip.Many papers from Arecibo refer to the "midnight collapse" of the ionosphere at this location.An example is given by Crary and Forbes (1986, their Fig. 2) in which the fall in peak height, the narrowing of the electron density profile and subsequent height rise is evident.The late occurrence of the fall in height is consistent with the time variation shown here for Townsville suggesting that there is a consistent delay with increasing latitude from the time the post-reversal fall in height occurs at the magnetic equator and that this delay is present at all longitudes. This paper shows post midnight events which mimic the height, fof2 and ionospheric thickness variations associated with the sunset ionospheric vortex.The existence of this vortex now seems well-established.Kudeki and Bhattacharyya (1999) provide direct examples of the vortex drift in ionisation associated with sunset at Jicamarca.They also provide some evidence for the subsequent development of secondary vortices and discuss possible causes.In contrast, Nicolls et al. (2004) provide an example of a night at Arecibo for which large oscillations in F2 layer height were recorded.Each fall in height associated with the first two oscillations is accompanied by a surge in foF2 similar to that described here.The relationship of these oscillations with the normal "midnight collapse" is not directly commented upon.The second and larger rise and fall seems to have occurred at the expected time of such a collapse.Nicolls et al. (2004) attribute these oscillations to large-scale travelling ionospheric disturbances propagating equatorward from the auroral zone.However as noted in Sect.7, the post midnight event seen on 4 November 1997 showed an increasing universal time delay with latitude and thus cannot be attributed to an equatorward TID unless it crossed the equator from the northern hemisphere.. Cécile et al. (1996) show an example of a large scale postmidnight variation in ionospheric height at an equatorial site in Africa.It may be that there are a number of possible sources of such events and that irrespective of the cause of a rise and fall in ionospheric height at these latitudes, the observed results can be the same in terms of the parameters discussed here. 
The relationship between falling height and a temporary rise in foF2 is a common feature in day-time as well as at night and is probably a generic feature of large-scale TIDs.As pointed out by Nicolls et al. (2004) in relation to such TIDs, if the F2 layer falls (poleward wind), the peak density increases due to downward flux along dipping field lines and the layer also narrows due to recombination at the lower heights.The same result is produced by the sunset vortex although the latter is driven by electric fields.Difficulties of interpretation can thus arise when more than one phenomenon can produce a similar effect.In the light of these observations, more attention should be directed to the causation of large scale height variations in the post-midnight period at equatorial and low latitudes. Some confusion also arises as to whether the pre-reversal height rise occurs after or before sunset depending on the definition of sunset used.If ground sunset is taken as the criterion, the pre-reversal height rise occurs mainly after sunset.However sunset at F2 peak heights occurs some hours later and at Vanimo often corresponds to the time of maximum height rise.In this respect, the pre-reversal height rise occurs before sunset at the peak height of the F2 layer at Vanimo. The strong Equatorial Spread F (EQSF) associated with the rise of post-sunset bubbles produced by the Rayleigh-Taylor instability were not obvious on any of the nights here described.Such bubbles typically develop at or shortly after the peak height of the post-sunset ionosphere is reached.Major EQSF usually associated with bubbles was observed to occur at Vanimo closer to the equinoxes than the nights examined here although the magnitude and duration of the preand post-reversal height variations were similar.This suggests that the occurrence of bubbles is more closely tied to the period in which the sunset terminator parallels the field lines.The occurrence of the pre-and post-reversal of sunset height variations seems to be less sensitive to this requirement. This paper raises a number of questions for further experimental investigation at low latitudes and Southeast Asian longitudes.These include 1. whether the abrupt recovery of the electron density profile in slab thickness and maximum height following the post-sunset reversal is present at the magnetic equator as well as at the southern anomaly peak and beyond 2. the seasonal variation of the post-sunset height rise and fall and associated anomaly peak electron density 3. the sunspot cycle dependence. 4. the occurrence and ionospheric behaviour associated with major height rises and falls occurring later in the night 5. the relationship between ionosonde and satellite TEC measurements of the night-time equatorial anomaly These matters and others are under current investigation.Theoretical questions remain relating to the latitudinal and time-dependence of the observations discussed here. Fig. 4 .Fig. 4 . Fig.4.Ionograms (upper) and the derived true height profiles given by POLAN (lower) measured on the night of 4 November 1997 and plotted as a function of time. Fig. 5 . Fig.5.Fc, hmF2 and sub-peak slab width W measured at Vanimo, Sumedang and Townsville.Fmax and time delay observed over the oblique path Vanimo-Townsville and FoF2 and h'F observed at Darwin. Fig. 5 . Fig. 5. Fc, hmF2 and sub-peak slab width W measured at Vanimo, Sumedang and Townsville.F max and time delay observed over the oblique path Vanimo-Townsville and foF2 and h F observed at Darwin. Fig. 6 . Fig. 
6. Superimposed values of h′F (base ionospheric F2 virtual height) as a function of time, showing the delay in the post-sunset fall in height with increasing distance from the equator as observed over a number of days. Median values are shown in red.
Fig. 7. Doppler-derived vertical velocity plotted as a function of time (bottom), ionogram frequency (top) and virtual height (middle). The colour code for velocity used throughout can be read from the bottom panel in relation to the time dependence of velocity. Data from Sumedang as observed on 1 November 1997.
Fig. 8. Ionospheric critical frequency Fc, peak height hmF2 and sub-peak slab width W observed at Vanimo on 24 February 1999, showing multiple similar events.
Fig. 9. Variations in base ionospheric height h′F and foF2 at Vanimo over the period 6-8 November 1997. Normal sunset vortex behaviour on 6 November was stopped on the night of 7 November, preceding a negative ionospheric storm day. A partial recovery occurred on the following night.
Structure of Fumarate Hydratase from Rickettsia prowazekii, the Agent of Typhus and Suspected Relative of the Mitochondria. Rickettsiae are obligate intracellular parasites of eukaryotic cells that are the causative agents responsible for spotted fever and typhus. Their small genome (about 800 protein-coding genes) is highly conserved across species and has been postulated as the ancestor of the mitochondria. No genes that are required for glycolysis are found in the Rickettsia prowazekii or mitochondrial genomes, but a complete set of genes encoding components of the tricarboxylic acid cycle and the respiratory-chain complex is found in both. A 2.4 Å resolution crystal structure of R. prowazekii fumarate hydratase, an enzyme catalyzing the third step of the tricarboxylic acid cycle pathway that ultimately converts phosphoenolpyruvate into succinyl-CoA, has been solved. A structure alignment with human mitochondrial fumarate hydratase highlights the close similarity between the R. prowazekii and mitochondrial enzymes. Introduction Typhus epidemics have been recurrent in human history; the pattern of infection was such that the bacterium Rickettsia prowazekii, the agent of typhus, could arguably determine the outcome of war, with outbreaks after World War I resulting in around three million deaths (Raoult et al., 2004). Although hecatombs of this scale remain exceptional, typhus continues to ravage populations in areas of conflict, with mortality rates among infected patients as high as 20% without antibiotics (Center for Biosecurity of UPMC; http://upmc-biosecurity.org). Despite its biological characteristics (environmental stability, small size, aerosol transmission, persistence in infected hosts, low infectious dose, high morbidity and substantial mortality), R. prowazekii may not be a primary bioweapon candidate because of its dependency on its eukaryotic host for propagation (Azad, 2007), although this view remains disputed (Walker, 2009). Nonetheless, the Centers for Disease Control and Prevention (CDC) ranks R. prowazekii as a Category B biological agent, and the Department of Health and Human Services (DHHS) classifies it as a top priority for the development of medical countermeasures, thus further encouraging efforts to understand the mechanism of action of this pathogen. The complete genome of R. prowazekii contains only 834 protein-coding genes, a very small number compared with the 5000 genes found in the model bacterium Escherichia coli, highlighting similarities between R. prowazekii and mitochondrial genes as well as the absence of the genes required for anaerobic glycolysis. It has been suggested that ATP production in Rickettsia is the same as that in mitochondria (Andersson et al., 1998). Despite the difference in size between the Rickettsia genome (over 1 000 000 bp) and that of human mitochondrial DNA (16 000 bp), the results of phylogenetic studies are consistent with an α-proteobacterial ancestry of the mitochondrial genome (Gray et al., 2001). However, comparisons at the protein level reveal a far more complex picture, since 90% of the mitochondrial proteins are encoded in the nucleus (Gray et al., 2004). One such example is fumarate hydratase, a mitochondrial enzyme of the citric acid cycle, which is encoded on nuclear chromosome 1 in humans (Craig et al., 1976).
The tricarboxylic acid cycle (TCA; also known as the Krebs cycle and the citric acid cycle) is a pathway that Tyler described in 1992 as 'so crucial to the metabolism of living cells that any significant defect is incompatible with life' (Tyler, 1992). The cycle is constituted by a series of biochemical reactions that lead to the progressive oxidative decarboxylation of acetyl-CoA (see Fig. 1). The step that converts fumarate to L-malate has recently been the target of studies of tumorigenesis in humans (King et al., 2006) and pathogenicity in bacteria (van Ooij, 2010). Two classes of enzymes, class I and class II fumarate hydratases (fumarases), reversibly convert fumarate to L-malate and have no detectable sequence similarity to each other (Woods et al., 1988). Class I fumarases (FumA and FumB enzymes) are homodimeric, thermolabile, iron-sulfur-containing enzymes of approximately 120 kDa. Class II fumarases (FumC enzymes) are homotetrameric, thermostable, iron-independent enzymes with a molecular mass of 200 kDa. The amino-acid sequences of mitochondrial class II FumCs are highly conserved in eukaryotes and are most closely related to the α-proteobacterial homologues (Schnarrenberger & Martin, 2002). Defects in human FumC are the cause of fumarase deficiency, a disease characterized by progressive encephalopathy, developmental delay, hypotonia, cerebral atrophy and lactic and pyruvic acidemia (Coughlin et al., 1998). Heterozygous germline mutations of FumC were found in patients with multiple cutaneous and uterine leiomyomas (MCUL). A further set of mutations is the cause of hereditary leiomyomatosis and renal cell cancer (HLRCC). Research to elucidate the mechanisms that lead to enhanced glycolysis in tumours has shown that FumC and succinate dehydrogenase (SDH) are tumour suppressors, demonstrating for the first time how mitochondrial enzymes and their dysfunction are associated with tumorigenesis (King et al., 2006). A dedicated online database of FumC gene mutations lists all reported FumC sequence variants (Bayley et al., 2008). Besides its involvement in human tumorigenesis, the TCA cycle has been targeted for its role in pathogenicity. In particular, FumC was found to be one of nine in vivo-induced virulence factors in Listeria (Wilson et al., 2001) and to bind PdhS, an essential cytoplasmic histidine kinase involved in differentiation, in Brucella (Mignolet et al., 2010). A recent paper further shows that the TCA cycle signals the switch between a pathogenic state and a mutualistic state when the Photorhabdus bacterium changes hosts (Lango & Clarke, 2010). To this day, the SSGCID project is the sole depositor of Rickettsia structures in the Protein Data Bank. Here, we present the high-resolution structure of R. prowazekii FumC and compare it with that of its human mitochondrial homolog.

Figure 1. Chemical reaction pathway of the tricarboxylic acid cycle (TCA; also known as the Krebs cycle and the citric acid cycle); catalytic enzymes are indicated in pink boxes, with fumarase, the subject of this study, highlighted in red. This figure was prepared with CellDesigner (Funahashi et al., 2003).

The gene encoding FumC was cloned into the ligation-independent cloning (LIC; Aslanidis & de Jong, 1990) expression vector pAVA0421, which encodes an N-terminal hexahistidine affinity tag followed by the human rhinovirus 3C protease cleavage sequence (MAHHHHHHMGTLEAQTQGPGS-ORF).

Protein expression and purification

The construct encoding the gene for FumC was transformed into chemically competent E. coli BL21 (DE3) Rosetta cells.
An overnight culture was grown in LB broth at 310 K and was used to inoculate 2 l of ZYP-5052 auto-induction medium, which was prepared as described by Studier (2005). FumC was expressed in a LEX bioreactor in the presence of antibiotics. After 24 h at 298 K, the temperature was reduced to 288 K for a further 60 h. The sample was centrifuged at 4000g for 20 min at 277 K and the cell paste was flash-frozen in liquid nitrogen and stored at 193 K. During the purification process, the frozen cell pellet was thawed and completely resuspended in lysis buffer (20 mM HEPES pH 7.4, 300 mM NaCl, 5% glycerol, 30 mM imidazole, 0.5% CHAPS, 10 mM MgCl2, 3 mM β-mercaptoethanol, 1.3 mg ml−1 protease-inhibitor cocktail and 0.05 mg ml−1 lysozyme). The resuspended cell pellet was then disrupted on ice for 15 min with a Branson Digital 450D Sonifier (70% amplitude, with alternating cycles of 5 s pulse-on and 10 s pulse-off). The cell debris was incubated with 20 µl Benzonase nuclease at room temperature for 40 min. The lysate was clarified by centrifugation with a Sorvall RC5 at 10,000 rev min−1 for 60 min at 277 K in an F14S rotor (Thermo Fisher). The clarified solution was syringe-filtered through a 0.45 µm cellulose acetate filter (Corning Life Sciences, Lowell, Massachusetts, USA). The lysate was purified by IMAC using a HisTrap FF 5 ml column (GE Biosciences, Piscataway, New Jersey, USA) equilibrated with binding buffer (25 mM HEPES pH 7.0, 300 mM NaCl, 5% glycerol, 30 mM imidazole, 1 mM TCEP) and eluted with 500 mM imidazole in the same buffer. The eluted FumC was concentrated and further resolved by size-exclusion chromatography (SEC) using a Superdex 75 26/60 column (GE Biosciences) equilibrated in SEC buffer (20 mM HEPES pH 7.0, 300 mM NaCl, 5% glycerol and 1 mM TCEP) attached to an ÄKTA FPLC system (GE Biosciences). Peak fractions were collected and pooled based on purity-profile assessment by SDS-PAGE. Concentrated pure protein was flash-frozen in liquid nitrogen and stored at 193 K. The final concentration (39.5 mg ml−1) was determined by UV spectrophotometry at 280 nm using a molar extinction coefficient of 33,015 M−1 cm−1, and the final purity (>97%) was assayed by SDS-PAGE.

Crystallization

Crystallization trials were set up according to a rational crystallization approach (Newman et al., 2005) using the JCSG+ and PACT sparse-matrix screens from Emerald BioSystems and Molecular Dimensions. Protein (39.5 mg ml−1, 0.4 µl) in SEC buffer (20 mM HEPES pH 7.0, 300 mM NaCl, 5% glycerol and 1 mM TCEP) was mixed with an equal volume of precipitant and equilibrated against an 80 µl reservoir in sitting-drop vapor-diffusion format in 96-well Compact Jr plates from Emerald BioSystems at 289 K. Within six weeks, crystals grew in the presence of 2.4 M sodium malonate (JCSG+ condition F9). A gradient optimization screen was designed based on this condition and crystals grew from this screen after about six weeks in 1.4 M sodium malonate pH 6.0.

Data collection and structure determination

A crystal was harvested, cryoprotected with a solution consisting of the precipitant supplemented with 20% glycerol and vitrified in liquid nitrogen. A 2.4 Å resolution data set was collected at the Advanced Light Source, and the model was completed through cycles of refinement and manual rebuilding in Coot (Emsley & Cowtan, 2004).
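As a side note on the concentration determination described above: the 39.5 mg ml−1 figure follows from the Beer-Lambert law applied to the A280 reading and the quoted extinction coefficient. The minimal sketch below illustrates that arithmetic; the absorbance value and the ~50 kDa subunit mass (consistent with a ~200 kDa class II homotetramer) are assumed for the example and are not stated in the text.

```python
def concentration_mg_per_ml(a280: float, epsilon_m: float, mw_g_per_mol: float,
                            path_length_cm: float = 1.0, dilution: float = 1.0) -> float:
    """Beer-Lambert estimate of protein concentration.

    a280         : measured absorbance at 280 nm
    epsilon_m    : molar extinction coefficient in M^-1 cm^-1
    mw_g_per_mol : molecular weight of the monomer in g/mol
    """
    molar = a280 * dilution / (epsilon_m * path_length_cm)  # mol/L
    return molar * mw_g_per_mol                             # g/L, i.e. mg/mL

# With the quoted extinction coefficient (33,015 M^-1 cm^-1) and an assumed
# ~50 kDa subunit, an A280 of about 26 for the stock corresponds to ~39.5 mg/mL.
print(concentration_mg_per_ml(a280=26.1, epsilon_m=33015, mw_g_per_mol=50000))
```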
The final model consisted of residues Asn3-Glu457 with no internal gaps for protomer A, residues Asn3-Pro316 and Met321-Leu406 for protomer B, 198 water molecules, two malonate molecules (one bound to each protomer) and a sodium ion assigned on the basis of the crystallization conditions (sodium malonate), the B factors and coordination distances of approximately 2.5 Å (Zheng et al., 2008). The structure was assessed and corrected for geometry and fitness using MolProbity (Chen et al., 2010). In the refinement statistics, the crystallographic R factor is defined as R = Σ_hkl ||F_obs| − |F_calc|| / Σ_hkl |F_obs|, and the free R factor was calculated using 5% of the reflections omitted from the refinement.

Discussion

The R. prowazekii FumC structure was determined in complex with the product analog malonate. Like the first reported FumC structure from E. coli (Weaver et al., 1995), R. prowazekii FumC crystallized as a homodimer containing two subunits of the normally tetrameric enzyme (see Fig. 2), in which each chain forms an elongated central four-helix bundle capped by two compact domains at the N- and C-termini. Fig. 3 shows the tetrameric assembly predicted by the PISA quaternary-structure tool (Krissinel & Henrick, 2007), including the malonate ligand in the active site.

Figure 3. Two views of the tetrameric assembly predicted by PISA from the 3gtd coordinates (http://www.ebi.ac.uk/msd-srv/prot_int/pistart.html), showing the schematic backbone trace of the four subunits modelled for dimer 1 chains A (blue) and B (yellow) and dimer 2 chains A (magenta) and B (green). (a) Side view of each chain bound to the ligand malonate, shown in CPK. (b) The two sodium ions at the interface of each dimer can be seen near the central axis of symmetry.

Table 3 lists the best pairwise backbone Cα r.m.s.d. between the FumC structure from Rickettsia and those from human, E. coli and S. cerevisiae (yeast), calculated using MultiProt (Shatsky et al., 2004).

Structure alignment of the R. prowazekii FumC monomer with the human enzyme using MultiProt (Shatsky et al., 2004) shows that the two structures superimpose closely (see Table 3 for pairwise r.m.s.d.s). The largest deviation is found in the C-terminal region; otherwise the backbone structure is remarkably conserved, including the active site (see Fig. 4). The residues located within 6 Å of the ligand in the Rickettsia structure, Thr96, Ser98, Ser139, Ser140, Asn141, Ala231 and Leu358, are 100% conserved in the three other species and adopt almost identical conformations, even in the unbound structures: the r.m.s.d. for all atoms over those residues is 0.83 Å from the human structure, 1.09 Å from that from E. coli and 1.15 Å from that from S. cerevisiae. The only visible difference between the human and Rickettsia pockets is the tilting of the Ser140 hydroxyl group away from the active site in the human structure (see Fig. 5). FumC displays some essential features of a good drug target: it is clearly involved in a crucial biological pathway, is functionally well characterized and possesses a druggable binding site. However, the structural evidence obtained in the present study strongly indicates that this enzyme is an unsuitable target for therapeutic intervention against Rickettsia owing to the very high degree of conservation between the human and R. prowazekii structures in terms of both the global fold and the binding site.

Figure 4. Superposition of the backbone Cα traces of the FumC monomers from Rickettsia (PDB entry 3gtd; red), human (PDB entry 3e04; green; Structural Genomics Consortium, unpublished work), E. coli (PDB entry 1kq7; blue; Estévez et al., 2002) and S. cerevisiae (PDB entry 1yfm; yellow; Weaver et al., 1998), showing the conserved overall fold and the deviations at the C-terminus in the bottom left region of the structure. The ligands for the Rickettsia and E. coli structures, malonate (red) and citric acid (blue), respectively, are represented in CPK colors.

Figure 5. Schematic representation of the ligand environment in the Rickettsia FumC monomer complexed with malonate, superimposed on the corresponding residues in the human structure (PDB entry 3e04). The backbone (gray for Rickettsia, magenta for human) and side chains of residues located within 6 Å of the ligand are shown. Hydrogen bonds of less than 3 Å are shown as dashed lines. Residues are numbered according to the Rickettsia FumC numbering system, and residues from the second protomer that comprise the active site are identified with a prime (Thr187′ and His188′). The 2|Fo| − |Fc| electron-density map is shown as light blue mesh contoured at 1.0σ. Note that the human structure is apo and thus Lys371 (equivalent to Lys324 in Rickettsia) appears disordered. This figure was generated using PyMOL (DeLano, 2002).
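The pairwise backbone Cα r.m.s.d. values referred to above are obtained by superposing matched Cα atoms from two structures and measuring the residual deviation. Below is a minimal sketch of that calculation in NumPy using the Kabsch superposition; extracting and matching the coordinates from the PDB entries named above is not shown here.

```python
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """Backbone r.m.s.d. between two (N, 3) arrays of matched C-alpha coordinates
    after optimal rigid-body superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                       # centre both coordinate sets
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                  # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation matrix
    diff = P @ R.T - Q                           # apply rotation and take residuals
    return float(np.sqrt((diff ** 2).sum() / len(P)))
```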
Large-Scale Preparation of Carboxylated Cellulose Nanocrystals and Their Application for Stabilizing Pickering Emulsions

Cellulose nanocrystals (CNCs) with varied unique properties have been widely used in emulsions, nanocomposites, and membranes. However, conventional CNCs for industrial use are usually prepared through acid hydrolysis or heat-controlled methods with sulfuric acid. This most commonly used acid method generally suffers from low yields, poor thermal stability, and potential environmental pollution. Herein, we developed a high-efficiency and large-scale preparation strategy to produce carboxylated cellulose nanocrystals (Car-CNCs) via carboxymethylation-enhanced ammonium persulfate (APS) oxidation. After carboxymethylation, the wood fibers could form unique "balloon-like" structures with abundant exposed hydroxy groups, which facilitated exfoliating fibril bundles into individual nanocrystals during the APS oxidation process. The production process was optimized with respect to temperature, reaction time, and APS concentration, and the resultant Car-CNCs exhibited a typical structure with narrow diameter distributions. In particular, the final Car-CNCs exhibited excellent thermal stability (≈346.6 °C) and reached a maximum yield of 60.6%, superior to that of sulfated cellulose nanocrystals (Sul-CNCs) prepared by conventional acid hydrolysis. More importantly, compared to common APS oxidation, our two-step collaborative process shortened the oxidation time from more than 16 h to only 30 min. Therefore, our high-efficiency method may pave the way for the up-scaled production of carboxylated nanocrystals. Moreover, Car-CNCs show potential for stabilizing Pickering emulsions that can withstand changeable environments, including heating, storage, and centrifugation, better than conventional Sul-CNC-based emulsions.

INTRODUCTION

Cellulose, the most abundant natural biopolymer on earth, is extracted from various sources including wood fibers (hard or soft wood), nonwood fibers (seed, bast, cane, leaf, straw, or fruit), bacteria, algae, and even some tunicates. 1 Within the family of nanoscale cellulose derivatives, cellulose nanocrystals (CNCs) are especially attractive owing to their unique features, such as renewability, biocompatibility, biodegradability, and ease of chemical modification. 2−6 Therefore, CNCs have been widely used in industrial and research areas such as Pickering emulsions, papermaking, food packaging, and biomedical engineering. 7−12 In order to extract CNCs, the intra- and intermolecular hydrogen bonds between the cellulose chains, which consist of linear chains of β-D-glucopyranosyl units, must be overcome by chemical reactions, mechanical defibrillation, enzymatic hydrolysis, ionic liquids, deep eutectic solvents, or a combination of these techniques. 13−16 Among them, sulfuric acid hydrolysis is a method with low energy and time consumption that readily destroys the hydrogen bonds in the amorphous portions of natural cellulose, producing highly stereo-regular and crystalline sulfated cellulose nanocrystals (Sul-CNCs). 17 Moreover, sulfuric acid hydrolysis can introduce abundant negatively charged sulfate half-ester groups on the surface of the cellulose chains, which promote dispersion of the CNC suspension and prevent aggregation.
However, some shortcomings also need to be noted, such as the requirement for large amounts of H2SO4 (9 kg of H2SO4 per kilogram of CNCs), low yields of less than 30%, potential environmental pollution, and poor thermal stability of the CNCs. 9,11,14,18−21 Alternatively, the ammonium persulfate (APS) oxidation method has been considered a potential route to improve on acid hydrolysis. APS, as a green chemical reagent with low cost, low long-term toxicity, and high solubility, is able to confer excellent thermal stability on CNCs. 9 Leung et al. first attempted the isolation of carboxylated CNCs by one-step APS oxidation from hemp, flax, and triticale. 22 During the oxidation process, once heated, hydrogen sulfate ions (HSO4−), sulfate free radicals (SO4•−), and hydrogen peroxide (H2O2) are generated immediately to disintegrate the fibers into CNCs and eliminate non-cellulosic components. Another advantage of this method is that the as-prepared CNCs exhibited a more homogeneous structure, with a diameter of 5 nm and a length of 150 nm. 23 Since 2011, a number of articles about the preparation of carboxylated CNCs by one-step APS oxidation from various lignocellulosic sources, such as bleached northern softwood kraft pulp, 24 bleached birch kraft pulp, 25 kapok fibers, 26 lemon (Citrus limon) seeds, 27 cotton linters, 11,28 cotton pulp, 29 tunicate, 30 jute fiber, 18 sugarcane bagasse, 31 and lyocell fibers, 32 have been published. However, the highly time-consuming oxidation procedures (16−24 h) have reduced the production efficiency and hindered large-scale applications. 9,18,29 Therefore, it is still a challenge to develop an effective and scalable APS approach to produce high-performance CNCs. As is known, both 2,2,6,6-tetramethylpiperidine-1-oxyl radical (TEMPO)-mediated oxidation and carboxymethylation pretreatment have been used to improve the cellulose nanofibrillation efficiency by breaking the inter/intramolecular hydrogen bonds between cellulose chains and converting the alcohol hydroxy groups to sodium carboxylate groups. 33,34 The isolation of nanocellulose by TEMPO-oxidation pretreatment is energy-saving, with a lower energy consumption demand, but the oxidation capacity is still limited since it can only position-selectively oxidize the C6 primary alcohol groups of cellulose while the hydroxyl groups at C2 and C3 remain intact. 35 Moreover, TEMPO is a toxic catalyst that leaves a negative footprint on the environment. Compared to TEMPO-mediated oxidation, carboxymethylation pretreatment causes relatively less pollution and has a higher hydroxyl conversion capacity. During carboxymethylation, the C6, C2, and C3 positions on the cellulose fiber undergo nonselective reactions, thus leading to a high carboxyl content, reducing adhesion between the fibrils, and facilitating microfibril separation. 36−38 So far, to the best of our knowledge, no study has investigated the isolation of CNCs by combining carboxymethylation pretreatment and the APS oxidation method. In this study, the goal was to compensate for the disadvantages of one-step APS oxidation and further improve the preparation efficiency of CNCs. First, we introduced a scalable carboxymethylation pretreatment step, which could facilitate the swelling of the wood cell walls and expose more C6 hydroxy groups. Then, APS oxidation was used to exfoliate fibril bundles into individual nanocrystals.
The effects of APS oxidation conditions, such as time, temperature, and concentration on the yield were systematically studied to optimize the preparation process and product quality. A series of characterizations were conducted to investigate surface morphology, crystal structures, and thermal stability. Besides, compared to the conventional Sul-CNCs, our resultant carboxylated cellulose nanocrystals (Car-CNCs) exhibited high preparation efficiency and excellent performance, demonstrating great potential for scalable production. Another appealing feature of our Car-CNCs is the higher zeta potential than conventional Sul-CNCs, which can stabilize the Pickering emulsions without adding additional metal salts to adjust the electrostatic repulsion effect. The emulsions stabilized by Car-CNCs can withstand changeable environments, including heating, storage, and centrifugation, superior to the Sul-CNC-based emulsions. Thus, the Car-CNCs were explored as promising candidates for Sul-CNCs for preparing stable Pickering emulsions. Preparation of Cellulose Nanocrystals. 2.2.1. Preparation of Carboxymethylated Cellulose Fibers. Carboxymethylated cellulose (CMC) fibers with sodium carboxylate groups (CMC-COONa) were fabricated according to our previously described methods. 39 Briefly, 25 g of bleached softwood pulp (raw fibers) was mixed with 384 mL of ethanol in a high concentration-stirred reactor for 20 min and then NaOH solution (12.5 g) dissolved in 317 g of ethanol was added into the mixture. After that, the mixture was heated to 65°C and followed by adding 30 g of MCA solution (w MCA / w water = 1:1). Subsequently, the above dispersion with ≈4 wt % of pulp consistency was stirred for 40 min at 65°C. Finally, the CMC-COONa slurry was obtained by washing it with water in a 600-mesh filter. The carboxylate content and zeta potential value of the CMC-COONa slurry were 1.5 mmol/g and −37.6 mV, respectively. Besides, the carboxymethylation process in this study can be scalable and has been applied in the factory, as shown in Figure S1. Preparation of Car-CNCs. Car-CNCs were extracted from the CMC-COONa slurry with APS. In brief, 10 g of dried CMC-COONa slurry was added to 500 mL of APS aqueous solution (0.25, 0.5, 0.75, and 1.0 mol L −1 ) and reacted at the setting temperature (T = 50−90°C) for different oxidation periods (15,30,60,90, and 120 min). Subsequently, the suspension was centrifuged for 10 min at 10,000 rpm for several cycles until the supernatant turned turbid. Then, the obtained cellulose nanocrystal suspension was poured into dialysis membrane tubes (molecular weight cut off 10,000) and dialyzed for 1 week. Finally, the suspension was collected and denoted Car-CNCs-X, where X represents the different oxidation parameters, such as concentrations of APS, reaction temperatures, and oxidation periods. The detailed information is shown in Table S1. Fabrication of Sul-CNCs by Acid Hydrolysis. For comparison, the nanocrystal suspension with sulfate half ester groups (Sul-CNCs) was prepared as previously reported methods. 40,41 Bleached softwood pulp was added to the 64% w/w sulfuric acid (the ratio of w fibers /V acid = 1:10 g/mL) under vigorous agitation and reacted at 45°C for 70 min. Then, the mixture was diluted and centrifuged. For further purification, the obtained Sul-CNC suspension was also dialyzed for 1 week. After dialysis, the Sul-CNC suspension was collected and stored at 4°C for further use. Preparation Strategy and Mechanism of the Car-CNCs. 
In our study, we combined a carboxymethylation pretreatment and the APS oxidation process to improve the preparation efficiency of CNCs. As shown in Figures 1a,b and S1, our first step was to introduce a scalable carboxymethylation swelling pretreatment, which can expose more hydroxy groups and provide more oxidation sites for further APS oxidation process. We measured the optical microscopy images and corresponding size distributions of softwood pulp and carboxymethylated cellulose fibers (CMC-COONa), respectively. Figures 1d and S2a,b show that the raw fibers exhibit a spindly hollow structure with an average length of 2501 μm and a width of 29.4 μm. After carboxymethylation treatment, the CMC-COONa exhibited balloon-like and exfoliated structures (Figures 1e and S3). Meanwhile, the length of CMC-COONa reduced to 1415 μm, while the width expanded to 45.1 μm ( Figure S2c,d). Most disintegrated fibrils (stretched out from the S1 and S2 layers) are contained inside the balloons and tend to form parallel stripes across the balloons. This change of alternating microfibril arrangements caused by full swelling behavior could facilitate the subsequent extraction of CNCs effectively. Then, APS oxidation was used to exfoliate fibril bundles into individual nanocrystals (Figure 1f). The formation mechanism and dimensional change of the balloon structures could be explained in Figure 1g. As the carboxylic anion accumulated on the fiber surface, the electrostatic repulsive force become stronger and led to the ballooning or breakage of the fiber cell walls. As a result, the disordered and partially ordered domains of native cellulose were both disrupted to a varying extent. Compared with the crystalline region, the amorphous region of cellulose is looser and has a larger molecular distance; thereby, it can be readily accessible by the reactants and destroyed effectively. 38,42 Subsequently, when the carboxymethylated slurry was exposed to the APS system, balloon-like structures would play a crucial role to improve production efficiency. As shown in Figure 1h, persulfate (HSO 4 − ), peroxide free radicals (SO 4 ·− ), and hydrogen peroxide (H 2 O 2 ) were generated during the APS oxidation process. SO 4 ·− free radicals and H 2 O 2 could penetrate balloon-like structures readily, oxidate the rest C 6 hydroxyl groups to the carboxyl groups, and form stronger electrostatic repulsion between fibrils, resulting in the rapid formation of Car-CNCs under mild conditions. Therefore, owing to the swelling of carboxymethylation, the oxidation time of APS reduced significantly. 3.2. Morphology, Dimensions, Yields, and Zeta Potentials of Cellulose Nanocrystals. After carboxymethylation pretreatment, we further studied the effects of APS oxidation time, reaction temperature, and APS concentration on the morphology, dimensions, yields, and zeta potentials of Car-CNCs. For comparison, the nanocrystal suspension with sulfate half-ester groups (Sul-CNCs) was also investigated. The detailed data were summarized in Table S1 and the inset of AFM images. As shown in Figure 2, the obtained Car-CNCs and Sul-CNCs exhibited typical structures with a nano-scale length and diameter, suggesting we have successfully prepared the individual CNCs. AFM images were analyzed using ImageJ V. 1.8.0 software to investigate the morphological features and dimensions of cellulose nanocrystals ( Figures S4−5). Under 45°C and 70 min, the average length and width of obtained Sul-CNCs were 118.2 ± 55.5 and 17.9 ± 3.0 nm (Table S1), respectively. 
In contrast, the morphology and dimensions of Car-CNCs were strongly affected by the extraction conditions. The average width of Car-CNCs ranged from 11.4 to 18.4 nm, and the corresponding average lengths ranged upward from approximately 74 nm (Table S1 and the insets of the AFM images). The successive decrease in the length of Car-CNCs was mainly due to the constant breakdown of disordered or even crystalline domains through hydrolysis of the 1,4-β-glycosidic bonds of the cellulose chains during APS oxidation. The yields of Car-CNCs and Sul-CNCs were calculated by eq S1, and the results are summarized in the insets of the AFM images and Table S1. Interestingly, the as-prepared Sul-CNCs exhibited the lowest yield, suggesting that a large number of cellulose chains were degraded under the harsh sulfuric acid conditions. 43 On the contrary, the yields of the Car-CNCs prepared by our milder APS oxidation were all higher than that of the Sul-CNCs. Moreover, with increasing reaction time, the yield first increased and then decreased, reaching its maximum at a reaction time of 30 min. At short reaction times (less than 30 min), CNCs were not fully extracted and large unreacted fibers remained in the reaction system; when the reaction time was prolonged beyond 30 min, the reaction intensity was excessive and the isolated CNCs were degraded by continued oxidation. Similar trends were also observed with increasing reaction temperature or APS concentration. It is worth noting that when the oxidation time was 30 min, the APS concentration was 0.5 mol L−1, and the reaction temperature was 70 °C, the Car-CNCs-30 min sample gave the highest yield of 60.6%. Obviously, the carboxymethylation swelling pretreatment not only shortened the APS oxidation time but also maintained a high yield. Besides, these results also demonstrated that carboxymethylation combined with APS oxidation was superior to the previously reported conventional APS oxidation approaches (Table S2). In addition, the zeta potential was also investigated to evaluate the dispersion of the Car-CNC and Sul-CNC suspensions. Generally, a higher absolute zeta potential value indicates a more stable dispersion due to electrostatic repulsion. 27 As displayed in the insets of the AFM images and Table S1, the Sul-CNCs showed the lowest zeta potential value (−51.3 mV), which is associated with their abundant sulfate groups. As for the Car-CNCs, after carboxymethylation pretreatment the native cellulose fibers were converted to CMC-COONa, with the zeta potential decreasing from −15.6 to −37.6 mV, indicating that the −OH groups at the C2, C3, and C6 positions on the cellulose surface were partly converted to −COONa groups. Furthermore, when the APS oxidation temperature was lower than 80 °C or the reaction time was less than 90 min, the zeta potential of the Car-CNCs showed a higher value than that of CMC-COONa. This phenomenon could be related to the different ionization behavior of the −COOH and −COONa groups. In suspension, −COONa groups are completely ionized to −COO− and hydrated sodium ions, which increases the electrostatic repulsive force between the cellulose chains, whereas −COOH groups are only partially ionized to −COO− and H+, which restricts the electrostatic balance among the cellulose chains. 44 On the contrary, as the temperature rose above 80 °C or the reaction time was prolonged to 120 min, the zeta potential of the Car-CNCs decreased significantly and became lower than that of CMC-COONa. That is because excessive oxidation increases the amount of −COOH groups and provides a stronger electrostatic repulsion force.
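Equation S1 for the yield is in the supporting information and is not reproduced in this excerpt; CNC yields of this kind are, however, commonly determined gravimetrically as the oven-dry mass of recovered nanocrystals relative to the oven-dry mass of the starting fibre. The sketch below follows that common definition and should be read as an assumption rather than the authors' exact formula.

```python
def cnc_yield_percent(dry_cnc_mass_g: float, dry_fiber_mass_g: float) -> float:
    """Gravimetric CNC yield: recovered dry nanocrystal mass over starting dry fibre mass."""
    return 100.0 * dry_cnc_mass_g / dry_fiber_mass_g

# For example, recovering 6.06 g of dry Car-CNCs from the 10 g of dried
# CMC-COONa slurry used per batch would reproduce the reported 60.6% figure.
print(cnc_yield_percent(6.06, 10.0))  # 60.6
```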
3.3. Chemical Structure Analysis. The chemical structures of the raw fibers, CMC-COONa, Car-CNCs, and Sul-CNCs were confirmed by FT-IR spectra. As shown in Figure 3a,b, all samples exhibited similar characteristic peaks at ≈3400, 2900, and 1060 cm−1, which correspond to the O−H stretching, C−H vibration, and C−O−C pyranose ring stretching vibration of cellulose, respectively. These results indicated that the basic chemical backbone of CMC-COONa, Car-CNCs, and Sul-CNCs was retained after carboxymethylation, APS oxidation, and sulfuric acid hydrolysis, respectively. Compared with the original fibers, a new peak at ≈1600 cm−1 was found in the spectrum of the carboxymethylated cellulose slurry (Figure 3a), which corresponds to the C=O stretching of sodium carboxylate groups. The results indicated that part of the hydroxyl groups at the C6, C2, and C3 positions were successfully converted to sodium carboxylate groups. However, after APS oxidation, the C=O absorption peak at 1600 cm−1 disappeared and shifted to a peak at ≈1737 cm−1 (Figures 3b and S6a,b), which can be attributed to the conversion of the sodium carboxylate groups into free carboxyl groups and to the selective oxidation of part of the remaining unreacted primary alcohol groups at the C6 position to carboxyl groups. 9 In the case of Sul-CNCs, a new band at 1202 cm−1 could also be observed in Figure 3a, suggesting the introduction of sulfate groups. Thus, these characteristic peaks demonstrated the successful isolation of cellulose nanocrystals from the original bleached softwood fibers.

3.4. Crystallinity Analysis. XRD measurements were performed to evaluate the crystal structure and crystallinity of the raw fibers, CMC-COONa, Car-CNCs, and Sul-CNCs. As shown in Figures 3c,d and S6c,d, all samples exhibited similar diffraction peaks at around 2θ = 14.9, 16.5, 22.7, and 34.6°, corresponding to the 1−10, 110, 200, and 004 crystal planes of cellulose Iβ, respectively, suggesting that the crystal structure of cellulose did not change after carboxymethylation, APS oxidation, or sulfuric acid hydrolysis. 4 The crystallinity index (CrI) is a key factor that affects the mechanical and thermal properties of the cellulose samples and can be calculated by eq S2. As shown in Table S3, the CrI of the original native fibers is 73.4%. After treating the native fibers by carboxymethylation, we observed a significant decrease in the crystallinity of CMC-COONa (61.1%). These results indicated that carboxymethylation and the mechanical shear stress in the swelling process were nonselective and might destroy both amorphous and crystalline regions of cellulose. For Car-CNCs, however, all the CrI values (65.8−74.9%) were higher than that of CMC-COONa (61.1%) (Table S3), which is ascribed to the further decomposition of the amorphous regions of the carboxymethylated cellulose fibers during APS oxidation. In addition, we also observed an increase in CrI for Sul-CNCs (82.5%) compared to the original fibers (73.4%), mainly attributed to the fact that hydronium ions can penetrate the more accessible amorphous regions and promote the hydrolytic cleavage of glycosidic bonds, thereby liberating the individual crystallites. 45 With increasing oxidation time from 15 to 120 min, the CrI of Car-CNCs gradually increased from ≈70 to 74.9% and then decreased to 70.2% (Figure 3d and Table S3).
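Equation S2 for the crystallinity index is likewise in the supporting information; CrI values of cellulose are very commonly obtained with the Segal height method, using the 200 reflection (2θ ≈ 22.7° for cellulose Iβ) and the amorphous intensity minimum near 2θ ≈ 18°. The sketch below assumes that method rather than the authors' exact equation.

```python
import numpy as np

def segal_crystallinity_index(two_theta: np.ndarray, intensity: np.ndarray) -> float:
    """Segal CrI (%) from an XRD scan of cellulose I.

    I200: maximum intensity near 2-theta = 22.7 deg (crystalline 200 plane).
    Iam : intensity minimum near 2-theta = 18 deg (amorphous contribution).
    """
    i200 = intensity[(two_theta > 21.5) & (two_theta < 23.5)].max()
    iam = intensity[(two_theta > 17.0) & (two_theta < 19.0)].min()
    return 100.0 * (i200 - iam) / i200
```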
That's because the disordered domains of cellulose mainly conducted sufficient decomposition within 90 min, while the crystalline regions might be partially destroyed as the reaction time was further prolonged (120 min). 9,18 When the APS concentration increased from 0.25 to 1.0 mol L −1 , the CrI of Car-CNCs increased from 69.3 to 72.4% (Table S3 and Figure S6c), attributed to the large decomposition of amorphous regions of cellulose. Moreover, the effect of oxidation temperature on crystallinity is similar to that of the oxidation time, where the CrI of Car-CNCs increased from 65.8 to 73.2% and then decreased to 71.7% with the increasing temperatures from 50 to 90°C (Table S3 and Figure S6d). These results demonstrated that the CrI was not only affected by the isolation approaches but also the isolation conditions. 3.5. Thermal Stability. Figures 3e,f and S6e,f show TG curves of the raw fibers, CMC-COONa, Car-CNCs prepared under different oxidation conditions, and Sul-CNCs obtained from acid hydrolysis. With the temperature ranging from 40 to 120°C, we observed an initial weight loss (3−9%). The weight loss at this stage was owing to the vaporization of the moisture that existed in the cellulose fibrils; the curve differences of samples were caused by different initial moisture contents. 46 The main weight loss occurred in temperatures from 200 to 400°C, which was owing to the thermal degradation of cellulose. 4 The thermal properties (including the onset and maximum degradation temperatures (T onset and T max ) and residual mass) are also listed in Table S3. Compared with original fibers (T onset = 330.9°C), carboxymethylated cellulose fibers degraded at a lower initial temperature of T onset = 287.3°C , which was owing to the introduction of −COONa groups. 39 In addition, T onset (319.5°C, 291.7°C) and T max (346.6°C, 330°C) of Car-CNCs-50°C and Car-CNCs-60°C, respectively, were higher than that of CMC-COONa (T onset = 287.3°C, T max = 307.1°C) (Table S3 and Figure 3f). This result could be attributed to the fact that −COONa groups have completely converted to −COOH groups under mild conditions. In addition, the cellulose nanocrystals with −COOH groups were more stable than those with −COONa + groups. 23 However, as the APS concentration, oxidation time, and reaction temperature further increased, the thermal stability of Car-CNCs decreased and was lower than CMC-COONa. One reason could be owing to the smaller fiber dimensions (length and diameter) as compared to the macroscopic carboxymethylated slurry. Another reason was that the fiber dimensions became smaller with the increasing intensity of APS oxidation, which would lead to higher surface areas exposed to heat. 21 Fortunately, our Car-CNCs all retained high thermal stability until 250°C, which can satisfy the general processing temperature (above 200°C) in thermoplastic applications. 11 Nevertheless, Sul-CNCs using sulfuric acid hydrolysis tended to decompose when the temperature was only ≈191.2°C, 47 which were easier degraded than our samples at a high temperature. More importantly, when at the oxidation condition (at 0.5 mol L −1 , 50°C, and 30 min), the Car-CNCs exhibited the highest thermal stability. Its T onset and T max could be as high as 319.5 and 346.6°C, respectively, which was better than other reported CNCs that were prepared directly using APS oxidation. 9,18 3.6. Pickering Emulsion Stability. 
3.6. Pickering Emulsion Stability. Recently, oil-in-water (O/W) emulsions have attracted increasing attention in many fields (such as food science, cosmetics, and energy storage) owing to their immiscible systems of oil droplets dispersed in a water phase. Pickering emulsions stabilized by cellulose-based particles have been widely reported, and CNCs in particular have proven very efficient in stabilizing oil/water interfaces. Compared with other conventional emulsifiers, CNCs are biocompatible, biodegradable, sustainable, nontoxic, rigid, and hydrophilic. 48,49 Although Pickering emulsions stabilized by CNCs are very effective, they can destabilize when subjected to a changeable environment. The stability of the emulsions depends strongly on the shapes, sizes, and surface charges of the CNCs and on the emulsion viscosity, among other factors. 50−53 In this study, to investigate the CNCs' emulsifying abilities, Car-CNCs (Car-CNCs-30 min was selected) and Sul-CNCs were used as stabilizers, and liquid paraffin (LP) was chosen as the oil phase to prepare oil-in-water emulsions. The emulsion stability (ES) was evaluated from three aspects: physical stability, thermal stability, and storage stability. As shown in Figure S7a,e, the Pickering emulsions stabilized by Car-CNCs and Sul-CNCs showed almost similar droplet sizes; their surface-weighted mean diameters (D(3,2)) were ≈0.8 and ≈0.9 μm, respectively (Figure 4a and Table S4). This could be associated with the almost identical CNC sizes (Car-CNCs: L = 118.1 ± 43.4 nm, D = 14.2 ± 3.1 nm, aspect ratio 9.2 ± 2.1; Sul-CNCs: L = 118.2 ± 55.5 nm, D = 17.9 ± 3.0 nm, aspect ratio 8.2 ± 1.9) 27,54 (insets of Figure S8a,b) and the identical CNC loadings. 53 Moreover, the surface coverage of the droplets was calculated according to eq S4 and was 81.8% for the Car-CNC-based emulsions and 48.7% for the Sul-CNC-based emulsions, respectively. The higher surface coverage of the Car-CNC-based emulsions indicated that the Car-CNCs had better emulsifying properties. This may be due to the lower zeta potential of Sul-CNCs (−51.3 mV) compared with Car-CNCs (−35.6 mV), which results in stronger electrostatic repulsion and prevents the CNCs from partitioning to the oil−water interface (Figure S8c). 55−57 Additionally, we also measured the molar surface charges of Sul-CNCs and Car-CNCs by conductometric titration according to previously reported methods, 58 as shown in Figure S9. The volume of NaOH solution consumed for Sul-CNCs was about 2.6 mL, higher than that for Car-CNCs (≈1.5 mL). The results showed that the molar surface charge of Sul-CNCs was much higher than that of Car-CNCs, which could create stronger electrostatic repulsion between Sul-CNC particles and adversely affect the ES. Thus, the conclusion drawn from the conductometric titration test was consistent with the zeta potentials measured above. Emulsion physical stability. We used centrifugation to assess the physical stability of the emulsions. Centrifugation can speed up the creaming process, concentrate the droplets by excluding excess water from the emulsions, and create a close-packed emulsion. 59 After the centrifugation process, the ES values of the emulsions were calculated by eq S5 and were 100% for the Car-CNC-based emulsions and 21.1% for the Sul-CNC-based emulsions, respectively (Table S4). These results demonstrated that the emulsions stabilized by Car-CNCs were more stable than the Sul-CNC-based emulsions.
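The surface-weighted mean diameter D(3,2) quoted above is the ratio of the third to the second moment of the droplet-size distribution, and the emulsion stability index of eq S5 (not reproduced in this excerpt) is commonly taken as the fraction of the initial emulsion volume that remains emulsified after a given treatment. The sketch below follows those common definitions, which are assumptions here rather than the authors' exact equations.

```python
import numpy as np

def sauter_mean_diameter(diameters_um) -> float:
    """Surface-weighted mean droplet diameter D(3,2) = sum(d^3) / sum(d^2)."""
    d = np.asarray(diameters_um, dtype=float)
    return float((d ** 3).sum() / (d ** 2).sum())

def emulsion_stability_percent(emulsified_volume_ml: float, initial_volume_ml: float) -> float:
    """Fraction of the original emulsion volume that survives centrifugation or storage."""
    return 100.0 * emulsified_volume_ml / initial_volume_ml
```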
As shown in Figures 4b,f, S7a,b,e,f, and Table S4, there was no emulsion coalescence/separation, and few changes of droplet size and size distribution were found in the Car-CNC-based emulsions before and after centrifugation. However, Sul-CNC-based emulsions showed an increase in emulsion size, a large variation in droplet size distribution, and large-scale phase separation after centrifugation. These phenomena may be associated with low surface coverage of droplets. Sul-CNCs with a lower zeta potential (that is, a higher charge density) could induce detrimental electrostatic repulsions and prevent CNCs from being absorbed in oil/water interfaces, resulting in poor surface coverage. 60 The droplets with a low and insufficient surface coverage could be more prone to coalesce and destabilize when subjected to centrifugation. 61−64 Emulsion thermal stability. As shown in Figures 4, S7, and Table S4, there were almost no changes in droplet size and size distribution found in all as-prepared emulsions. The ES values of Car-CNC-based and Sul-CNC-based emulsions were both 100%. These results suggested that all as-prepared emulsions could be well resistant to thermal processing. It was due to the temperature might not have a major influence on the strong steric or electrostatic repulsion between the emulsion particles. 65 Emulsion storage stability. After 14 days of storage, the ES value, droplet size, and size distribution of emulsions were measured. The corresponding data are presented in Table S4, Figures 4, and S7. The Car-CNC-based emulsions showed almost no variation in emulsion size, size distribution, or morphology, although the oil creaming phase and aqueous phase stratified. In general, an emulsion can be considered stable as long as no coalescence occurs; that means the size and size distribution should not change. 59 The stratification was mainly due to the effect of gravity and the different densities between LP and water during long-term storage 66,67 (0.835 g/ cm 3 for LP and 1.0 g/cm 3 for the water). Besides, a similar stratification phenomenon was also observed in the Sul-CNC stabilized emulsions. However, Sul-CNC-based emulsions showed an increase in droplet size distribution and particle size after 14 days of storage. Medium viscosity has also been found one-factor affecting ES. 68 We further measured the viscosity of the emulsions prepared by Sul-CNCs and Car-CNCs. As shown in Figure S10, the freshly prepared emulsions based on Sul-CNC and Car-CNC displayed almost similar viscosities, primarily due to the similar morphologies of CNCs, identical quantities of CNCs, and the same amounts of LP in emulsions. 53,66 Interestingly, our above experimental results demonstrate that Sul-CNC-based emulsions possessed weaker stability than Car-CNC-based emulsions. It was obvious that viscosity had little effect on the ES in our system. However, the two emulsions exhibited different trends in viscosity change after storage for 5 days. Car-CNC-based emulsions retained almost the same viscosity as the original emulsions, while Sul-CNC-based emulsions decreased greatly compared with freshly prepared Sul-CNC-based emulsions. A decrease in viscosity after long-term storage might be caused by lamination and demulsification due to insufficient surface coverage. The results showed that Sul-CNCs and Car-CNCs did not affect the viscosity of fresh emulsions in our system, but their different zeta potentials affected their long-term stability. 
Above all, the Car-CNC-based emulsions show excellent stability and can withstand centrifugation, thermal processing, and long-term storage, superior to the Sul-CNC-based emulsions. Therefore, the Pickering emulsions stabilized by our Car-CNCs could offer promising potential in practical applications such as thermal energy storage, food science, and the cosmetic industry.

CONCLUSIONS

In this study, we developed a highly efficient and scalable method for preparing Car-CNCs by carboxymethylation pretreatment-assisted APS oxidation. As expected, the carboxymethylation pretreatment facilitates the swelling of wood cell walls and exposes more C6 hydroxy groups, thus improving the APS oxidation efficiency and shortening the preparation time from 16−24 h to only ≈30 min. Moreover, the yield, micromorphology, crystallinity, and thermal stability of the Car-CNCs can be adjusted by controlling the APS oxidation conditions. Among them, the Car-CNCs-30 min sample exhibited a maximum yield of 60.6%, with an average length of 118.1 ± 43.4 nm and a diameter of 14.2 ± 3.1 nm. More importantly, the yield, thermal stability, and preparation efficiency of our Car-CNCs were superior to those of conventional CNCs prepared by sulfuric acid hydrolysis and by conventional APS oxidation. Therefore, our two-step collaborative process offers a promising and economical strategy for preparing CNCs on a large scale. Additionally, our Car-CNCs also demonstrated excellent stabilization in Pickering emulsion preparation, which holds promising potential in numerous fields.
Translating Subtitles of Becoming Jane Film: A Pragmatic Approach DOI: http://dx.doi.org/10.31332/lkw.v6i1.1766

Subtitling is an effective way to provide dialogue or narration for a movie. The benefit is that people can enjoy a film even though its language is different from their native language; they enjoy movies from different countries and styles around the world through the translated dialogues. This research aims to identify the strategies of responding to arguing in the subtitles of the film titled Becoming Jane, to find out the translation techniques used by the translator, and to assess the translation quality in terms of accuracy, acceptability, and readability. The research method is a descriptive qualitative method with the "Becoming Jane" movie and its subtitles as the data. Data were collected from document analysis and focus group discussions, with scores given for accuracy, acceptability, and readability. The results indicate that the characters used the strategies of agreeing, persisting, and complying, while six translation techniques were found, namely literal, modulation, established equivalence, borrowing, adaptation, and deletion. The quality of the translation of the strategies of responding to speech acts is categorized as less accurate, less acceptable, and moderate in terms of readability.

Introduction

Amid the rise of advanced technology, Indonesian films have grown, raising expectations for the quality of subtitle translation in movies. Behrens & Parker (2010) emphasize that subtitled films have become ordinary in the entertainment business; Western dubbed films are prevalent, and subtitle translations are essential to the movie industry and its audience. Bai (2018) emphasizes that movie titles express the main idea to the audience in order to draw their attention; this requires the translation of movie titles to be precise and to reflect their economic value. However, Bai (2018) reminds us that translating a film from the source language to the target language is a complicated task because of differences in language systems, conversational and writing styles, and sometimes also gaps in gender roles. It is not surprising that translations often confuse the audience of the target language. Consequently, it is argued that there are no standardized strategies that can serve as an applicable guide for every kind of film translation project (Xinya, 2016). Bai (2018) has defined characteristics of movie titles that attract the audience for two reasons. First, a beautiful title adds the finishing touch, appealing to the audience and giving viewers the soul of the film. Second, movie titles indicate the main idea of the movie and attract the audience with a concise and unfamiliar form. Bai (2018) suggests that the first thing viewers see about a movie is its title, so the translation of English film titles is crucial; the central part of the movie can be seen from a perfect translation of the title, which demonstrates the main thought of the movie and appeals to the audience's desires. This study explores two main concerns of investigation: the techniques with which a translator finds the corresponding meaning in the dialogues supplied in the subtitles of the film Becoming Jane (henceforth, BJ) from English into Indonesian, and the speech acts used in the dialogues. Drawing on translation perspectives, this study emphasizes the translation techniques elaborated in the audiovisual translation approach. Specifically, in exploring the translation process, the author adapted the translation techniques of intra- and inter-lingual translation.
Meanwhile, entertainment-oriented rewriting in film subtitles has been a focus of academic attention (Baoxuan, 2011; Lv & Li, 2013). In addition, speech act theory from Austin (1975) is drawn on in this study to analyze the kinds of speech acts indicated in the dialogues. Dries (1995) suggests that each country has a different tradition of translating films, indicating one of the two primary modes: dubbing and subtitling. The decision to choose a technique is arbitrary, and the factors affecting the choice vary from historical circumstances, traditions, and cost to the target and source cultures in an international context and conventional translation techniques (Szarkowska, 2005, 2019). Theories on subtitle translation stem from Dries (1995), from which Szarkowska (2005, 2019) highlights dubbing and subtitling as the core models, each interfering with the original text to a different extent. Dubbing modifies the source text to a large extent and thus makes it familiar to the target audience through domestication, which means "the foreign dialogue is adjusted to the mouth and movements of the actor in the film" (Dries, 1995:9). The aim is to make the audience feel as if they were listening to actors speaking the target language (Shuttleworth & Cowie, 1997:45). Subtitling, in contrast, translates the spoken SL dialogue into the TL using synchronized captions at the bottom of the screen; it aims to enable the target audience to understand the entire content of the film clearly (Szarkowska, 2005). Translation as communication is different from daily communication. Spoken language applies to screenwriting. The syntax of spoken language is generally less structured than that of written language, and a speaker is usually more implicit than a writer: the former uses an abundance of somewhat generalized vocabulary and often produces many prefabricated 'fillers' (Brown & Yule, 2012; Yule, 1996). Besides, Dick (1990) elaborates that a standard movie advances the plot in two ways: verbally, through dialogue, and visually, through action. As the visual is seen to be more efficient than the verbal, the author must minimize the verbal play. Economy is an essential principle in screenwriting; therefore, dialogue is often interrupted, that is, sentences are frequently left incomplete and speech is just fragmentary. Thus, the language on the screen is very much like an actual conversation in daily life. Definitions of translation vary: an act of transferring from the SL into the TL (Foster, 1958), or the process and methods used to convey the meaning of the SL into the TL (Ghazala, 1995). This definition emphasizes that meaning is an essential element in translation. When translating, a translator should understand the meaning of the SL well in order to find the appropriate equivalent in the TL; meaning is translated with respect to grammar, style and sounds (Ghazala, 1995). Catford (1965) emphasizes that translation is the replacement of textual material in one language (SL) by equivalent textual material in another language (TL). It shows that translation is a process, in the sense that translation refers to an activity. A translation is also a product; that is, it gives us access to other cultures, to ancient societies, and to the life of past civilizations when the translated texts reach us (Hussein & Salih, 2019). There are strategies or techniques of translation. Adachi (2012) divides the general methods of translation into liberal and literal translation, each of which has subcategories, as shown in Table 1.
Kemppanen (2012) defines foreignization and domestication as translation techniques. Domesticating translation is coloured by naturalness of syntax, unambiguity, modernity of presentation, linguistic consistency, the ethnic and ideological features of the target culture, and fluency. A domesticating translation has transparency, avoiding non-idiomatic expressions, archaisms, jargon and repetition. Foreignization is defined as a translation practice in which elements foreign to the target culture are given particular stress. Likewise, a foreignizing translation carries the linguistic, ethnic and ideological features of the source culture, is resistant to the norms of fluency, and does not mask the translator's expression (Derrida & Venuti, 2001). Xinya (2016), in her research, concluded that translation problems are related to cultural differences, wordplay, style and omission. Meanwhile, the linguistic, cultural and stylistic obstacles in the translation process are resolved by the relevant theory, by which the translator has more space to recreate his expression rather than being trapped in the "meaning and form" dilemma. The translator can render the meaning of a text into another language in the way the author intended the text (Newmark, 2001), especially in movie subtitle translation, where the translation must be adjusted to the unique text and the characteristics of the audience. It is the job of the translator to see, as far as possible, the settings and contexts that influence the speech acts between characters in the film. The author has reviewed the work of Szarkowska (2005) and agrees with the new perspective on film translation in a global era. In the global age, mass communication and multimedia experiences mean that audiences demand the right to share the latest texts, such as films, songs, or books, simultaneously across cultures (Choi et al., 2017). In this way, the issue of power in translation is prominent and applicable to contemporary cinema. Perceived from the pragmatic approach, translation scholars identify that translation does not take place between words but rather between cultures; the text is not conceived as a specimen of language but as an integral part of the world (Yuan, 2015). As a result, the translation process involves cross-cultural transfer, the prestige of the source and target cultures, and their reciprocal relations (Szarkowska, 2005). In addition, the choice of translation mode is not based only on money. Szarkowska (2005) has identified that the translating strategy largely depends on the attitude of the target culture, the source culture, and political factors that determine the chosen mode. For example, Western European countries are not openly against American productions; in Arabic countries, a strong resistance exists against adopting the norms and habits of the (American) adversary; and Indian cinematography, Bollywood, campaigns for healthy anti-American attitudes. In summary, globalization is not merely another word for Americanization; the expansion of the Indian entertainment industry has now proved it (Powell, Furlong, de Bézenac, O'Sullivan, & Corcoran, 2019; Szarkowska, 2005, 2019). The translation techniques relevant to globalization may vary.
Specifically, Rudasingwa (2019) emphasizes that whether a translator uses a domesticating or a foreignizing approach, any form of audiovisual translation ultimately plays a unique role in developing both national identities and national stereotypes. The transmission of cultural values in screen translation has received very little attention in the literature and remains one of the most critical areas of research in translation studies. Drawing on the pragmatic approach to the translation of the BJ film, the author argues that pragmatics views language through the procedures, processes and products of translation in order to see the intended meaning, in which cultures are involved both inside and outside the text (Culpeper & Gillings, 2019). That is to say, a film translator must have thorough knowledge of language and culture, as well as discourse, scientific, transfer and psychological competence in the SL and TL (Bell & Candlin, 1991). Although the excellence of BJ has received great applause, recent studies of the BJ film using the pragmatic approach are limited. The BJ translation is considered worthy of speech act analysis, especially of speech acts of responding to arguing. In Ireland, the BJ film has improved the economy and contributed significant income for the country (Minister for Arts, Sport and Tourism Ireland, 2019). A study available online by Grandi (2015) has indicated that BJ is productive of ways in which speech-act theory can illuminate the internal worlds of Jane Austen's novels. The performative speech in BJ provides empirical evidence that Austen's work is not only made of language but also that language in itself is the event of an act. This paper argues that the translation of the speech acts in the film encompasses cultural differences between the SL and TL in conveying responses to arguing, to which speech act theory is addressed. Arguing expresses psychological conditions to the listener or interlocutor and is categorized as an expressive speech act (Yule, 1996); it indicates illocutionary speech acts, referring to acts of doing something (John, Brooks, & Schriever, 2019; Searle, 1985). Various studies regarding BJ have been conducted, but most of them focus on the novel and on literary analysis. This study offers insight into the translation techniques of the BJ subtitles. This focus has not been placed on BJ before, so the current research contributes a new academic record. In addition, to give the context of translation, this research analyzes the expressive speech acts in the dialogues that involve the translating of culture. This focus shows that combining the study of subtitle translation in a film with the pragmatic approach is highly relevant to translation studies. This study has two purposes: to identify the strategies used by the characters of BJ in responding to arguing acts, and to identify the subtitle translation techniques used in the dialogues of BJ.

Methods

This research used a descriptive qualitative approach in which content analysis was employed (Corbin & Strauss, 2014). Following the qualitative approach, data were collected in the form of words, sentences, logic and argumentation, from which themes formed the basis of the analysis. It is content analysis because the research addressed the recorded BJ film to investigate the dialogues on screen. Related documents about BJ from various sources, such as journal articles, comments and critiques, were analyzed together with the recorded data.
To analyze the data, thematic analysis was employed (Santosa, 2017; Spradley, 1980). The primary data of this research were a BJ corpus represented by the film recording obtained from YouTube. The scenes, together with the dialogue texts presented at the bottom of the screen, were transcribed, and each plot and individual on-screen dialogue was identified and recorded as primary data. Secondary data, including a synopsis of BJ, journal articles and comments on BJ, were analyzed alongside the primary data. Data as a whole were collected by the researcher himself, as he served as the key instrument in this research (Bogdan & Biklen, 2007). First, the researcher collected initial information from the written secondary data to gain insight into the dialogues in BJ. Based on these initial data, the researcher then watched the BJ film on YouTube. Second, the researcher identified the text on screen, where the translation from English into Indonesian was presented at the bottom of the screen; each dialogue was recorded and its translation techniques identified. Together with this identification, the researcher determined the kinds of speech acts involved in the responding actions. Third, the researcher categorized the data by type of speech act and by the translation technique used in each utterance. Finally, the researcher analyzed the data in terms of frequency, indicating how many times each translation technique and speech-act category appeared. Thematic analysis following Spradley (1980) was used throughout. Table 2 shows that there are three strategies, namely agreeing, persisting and acquiescing, each with its own sub-classifications. The agreeing strategy consists of six types: to criticize, to insult, to threaten, to challenge, to compliment, and to argue. The persisting strategy expressed by the characters comprises three categories: to justify, to request, and to apologize. The last strategy, acquiescing, covers two kinds: verbal opt-out and non-verbal opt-out. The choice of strategy, and of a particular type of utterance in responding to a speech act, is influenced by the closeness of the speaker and the interlocutor. [Table 2 fragment: verbal opt-out 6 and non-verbal opt-out 6 (12 in total), out of 100 responses overall.] To support the frequency findings, the following quotes are presented in the scripts below. Script 1 shows Jane responding to the speech act by refuting it and pointing out Mr. Lefroy's deficiencies; Mr. Lefroy, who started the conversation hastily, had openly said something about Jane that he did not like. Script 2 ST: Mr. Lefroy: Is this conduct commonplace in the natural history of Hampshire? Your ignorance is understandable since you lack ... What shall we call it? Jane: The history? Propriety commands me to ignorance. The persisting strategy with a justifying type of utterance is used by Jane when responding to the rebuttal from Lefroy, who accuses her and asks whether this is the habit of Hampshire residents: in the story they had met face to face in the forest after Mr. Lefroy had lost his way, but Jane pretended not to see him. Jane justifies herself by saying that it is precisely ignorance that makes it more polite for her not to greet him. TT: Jane: Saya telah membaca buku anda dan tidak setuju ["I have read your book and do not agree"]. Mr. Lefroy: Tentu saja anda lakukan ["Of course you do"]. The acquiescing strategy is found in the sample data above.
In this strategy, Mr. Lefroy responded to Jane's rebuttal of his opinion of the book he had shared by agreeing, without the slightest counter-argument. In the data above, their closeness has grown into a friendship in which they share books to read before finally becoming lovers; this proximity affects the way each responds to the rebuttal delivered. Subtitle translation techniques This study reveals six translation techniques in the dialogues of BJ: literal, 95 (65.1%); modulation, 4 (2.74%); established equivalence, 36 (24.66%); borrowing, 5 (3.4%); adaptation, 6 (4.12%); and deletion, 1 (0.68%). See Table 3. The table shows the six translation techniques used by the translators of the Becoming Jane subtitles, namely literal, modulation, established equivalence, borrowing, adaptation and deletion. The literal technique is the most frequent in the Becoming Jane subtitles; it translates word for word from the source language to the target language. The established equivalence (common matching) technique is characterized by the use of terms or expressions in everyday use or found in a dictionary. Adaptation replaces cultural elements of the source language with those of the target language, while borrowing takes an item from the source language and uses it in the target language. The following scripts show dialogues in which the translation techniques were identified. Script 4 ST: Eliza: What do you make of Mr. Lefroy? We're honoured by his presence. Jane: You think? He does, with his preening, prancing, Irish-cum-Bond-Street airs. Eliza: Jane! Lefroy: Well, I call it very high, indeed, refusing to dance when there are so few gentlemen. The application of the literal technique is seen in the translation of "Well, I call it very high", rendered as "Yah, saya menyebutnya sangat tinggi memang", which makes the quality of the translation moderate in terms of readability. If the readability level is high, a score of 3 is given; if it is moderate, 2; and if it is low, 1. The same scoring applies to the instruments for assessing accuracy and acceptability. The deletion technique is seen in a line by Mr. Lefroy's uncle, rendered in the translation as the judge because of his profession. He opposed Lefroy's marriage to Jane because Jane had nothing, saying "What can we put by must go to your brothers. You will have nothing unless you marry", but the translation does not render the first sentence. Only the phrase "You will have nothing unless you marry" is translated, as "Menikah adalah satu-satunya cara untukmu memiliki sesuatu" ["Marrying is the only way for you to have something"]. The technique used for this second sentence is modulation: the meaning is preserved but the perspective is changed. Deletion makes a translation inaccurate, unacceptable, and low in readability. With modulation, the translator still produces an accurate translation, but it is less acceptable and only moderately readable; this happens because the shift in perspective, caused by a change in the structure of sentences or words or by switching between active and passive voice, leads the audience to receive an emphasis different from that of the source language. These findings show that subtitle translation matters greatly to film audiences. As globalization allows us to exchange knowledge and culture with other countries, it follows that in translation, customs and language styles are part of culture (Hoed, 2006).
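For readers who want to retrace the tabulation behind these figures, the frequency percentages and the 1-3 quality scale described above can be computed with a few lines of code. The sketch below is purely illustrative: the record structure, field names and example values are hypothetical, not the study's actual coding sheet.

```python
# Illustrative tally of subtitle translation techniques and 1-3 quality ratings.
# The records below are invented; in the study each subtitled utterance was
# coded by hand for its technique and scored for accuracy, acceptability and
# readability on the 3 (high) / 2 (moderate) / 1 (low) scale.
from collections import Counter

coded_lines = [
    {"technique": "literal", "accuracy": 3, "acceptability": 2, "readability": 2},
    {"technique": "established equivalence", "accuracy": 3, "acceptability": 3, "readability": 3},
    {"technique": "modulation", "accuracy": 3, "acceptability": 1, "readability": 2},
    # ... one record per subtitled utterance
]

counts = Counter(line["technique"] for line in coded_lines)
total = sum(counts.values())
for technique, n in counts.most_common():
    print(f"{technique}: {n} ({100 * n / total:.2f}%)")

# Mean quality per technique on the 3/2/1 scale.
for technique in counts:
    subset = [l for l in coded_lines if l["technique"] == technique]
    for aspect in ("accuracy", "acceptability", "readability"):
        mean = sum(l[aspect] for l in subset) / len(subset)
        print(f"{technique} mean {aspect}: {mean:.2f}")
```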
Evidence shows that the use of the three strategies of responding to express agreeing, persisting and acquiescing is close to the culture of digital communication. Characteristics of modern communication are reflected in the word choice, diction and attitude used to criticize, insult, threaten, challenge, compliment, argue, justify, request, acquiesce, and opt out verbally or non-verbally, in each of which domestication and foreignization are closely involved (Crawford, 1995). The choice of strategy in responding to arguing speech acts is influenced by the closeness of the interlocutors, and responding-to-arguing speech acts translated with the literal technique produce accurate and acceptable translations; still, the resulting readability suffers because the translation seems stiff and does not take the context into account. Likewise, the modulation technique yields translations that are accurate but unacceptable and only moderate in readability. This follows from the nature of modulation, a technique that keeps the same meaning but shifts the point of view. It is also in line with the research of Pranoto (2018), who found modulation to be a technique that reduces readability, whereas the established equivalence technique commonly produces high accuracy, acceptability and readability. Of the six techniques found in this study, the most widely used is the literal technique, followed by established equivalence, adaptation, borrowing and modulation. The predominance of the literal technique, which largely determines the quality of the Becoming Jane subtitle translation, results in translations that are accurate, less acceptable, and of moderate quality in terms of readability. This study differs from the findings of some researchers (Caimi, 2006; Lv & Li, 2015; Szarkowska, 2019), because their research did not examine the impact of the choice of subtitle translation technique on translation quality. Similarly, with respect to translating the various strategies for responding to arguing, Farnia et al. (2014) discussed the speech act of complaining among students, but their research has not yet touched on the realm of translation. Conclusion This research shows that there are several strategies used by characters in the film to respond to rebuttal. The choice of strategy is influenced by the closeness of the speakers and the power they hold. The responding strategies are translated using the literal, established equivalence, modulation, borrowing and adaptation techniques. The choice of technique then affects the quality of the resulting translation, which is measured using the instruments developed by Nababan, Nuraeni, & Sumardiono (2012). The quality of the translation in terms of accuracy, acceptability and readability was not formally validated by a panel of viewers or external experts; the translation was evaluated on the basis of its Indonesian by the researcher and a team whose native language is Bahasa Indonesia. The effect of technique selection shows that the established equivalence technique yields translations that are accurate, acceptable and highly readable, while modulation yields translations that are accurate but unacceptable and only moderately readable. As a whole, most of the techniques found in BJ are literal.
Overall, the quality of the translation of the strategies for responding to arguing is categorized as less accurate, less acceptable, and moderate in terms of readability, because the predominant literal technique made the translation rigid and insensitive to context. This research is expected to provide an overview of how pragmatic approaches can be used in translation studies and an understanding of the responding-to-arguing strategies found in the subtitles of the film. The application of appropriate translation techniques conveys the messages of the source-language text, and the choice of technique determines the quality of the translation. It is hoped that further research will link the conventions of subtitle translation more closely with the choice of techniques, which involves considerations quite different from those in the translation of novels or books.
COVID-19 vaccination in the Federal Bureau of Prisons, December 2020–April 2021 Objectives To describe COVID-19 vaccine distribution operations in United States Federal Bureau of Prisons (BOP) institutions and offices from December 16, 2020–April 14, 2021, report vaccination coverage among staff and incarcerated people, and identify factors associated with vaccination acceptance among incarcerated people. Methods The BOP COVID-19 vaccination plan and implementation timeline are described. Descriptive statistics and vaccination coverage were calculated for the BOP incarcerated population using data from the BOP electronic medical record. Coverage among staff was calculated using data from the Centers for Disease Control and Prevention Vaccination Administration Management System. Vaccination coverage in the BOP versus the overall United States adult population was compared by state/territory. Univariate and multivariable logistic regression models were developed to identify demographic, health-related, and institution-level factors associated with vaccination acceptance among incarcerated people, using hierarchical linear modeling to account for institution-level clustering. Results By April 14, 2021, BOP had offered COVID-19 vaccination to 37,870 (100%) staff and 88,173/126,413 (69.8%) incarcerated people, with acceptance rates of 50.2% and 64.2%, respectively. At the time of analysis, vaccination coverage in BOP was comparable to coverage in the overall adult population in the states and territories where BOP institutions and offices are located. Among incarcerated people, factors associated with lower vaccination acceptance included younger age, female sex, non-Hispanic Black and Asian race/ethnicity, and having few underlying medical conditions; factors associated with higher acceptance included having a prior SARS-CoV-2 infection, being born outside the United States, and being assigned to a Federal Detention Center. Conclusions Early COVID-19 vaccination efforts in BOP have achieved levels of coverage similar to the general population. To build on this initial success, BOP can consider strategies including re-offering vaccination to people who initially refused and tailoring communication strategies to groups with lower acceptance rates. Introduction During the COVID-19 pandemic, the United States Federal Bureau of Prisons (BOP) has experienced high transmission rates of SARS-CoV-2 (the virus that causes COVID-19) due to institutions' congregate living environments and challenges implementing physical and social distancing, as well as high mortality rates from COVID-19 [1][2][3]. To control transmission, BOP has applied the Centers for Disease Control and Prevention (CDC) COVID-19 guidance for correctional settings, including recommended environmental cleaning, diagnostic and screening testing for SARS-CoV-2, medical isolation for incarcerated people testing positive and quarantine for those who have been exposed and tested negative, quarantine for new entrants and those preparing for release or transfer, restricting staff from work if symptomatic or testing positive, suspending in-person visitation and group programming, and limiting movement between BOP institutions [4,5]. In addition, from March 2020–April 2021, under the Coronavirus Aid, Relief, and Economic Security (CARES) Act, BOP placed 16.8% of its pre-pandemic incarcerated population in home confinement or authorized their early release, prioritizing people with conditions associated with increased risk for severe COVID-19 illness as defined by CDC [6].
In December 2020, BOP received an independent allocation of COVID-19 vaccine from the federal government, with sufficient doses to offer vaccination to all directly-employed BOP staff and all people incarcerated in BOP-managed institutions. This article reports vaccination coverage among staff and incarcerated people as of April 14, 2021; presents demographic, health-related, and institution-level factors associated with vaccination acceptance among incarcerated people; and discusses potential strategies to promote vaccination among people from subgroups less likely to accept vaccination. Population and institutions The Federal Bureau of Prisons includes 122 BOP-managed institutions, eight administrative offices, and two staff training centers across 36 states, Washington, DC, and Puerto Rico. Analyses described below include the 37,870 directly-employed staff working in BOP-managed institutions, administrative offices, and training centers and the 126,413 incarcerated people (83.0% of the total BOP census) assigned to BOP-managed institutions as of April 14, 2021 [7,8]. Staff employed by external entities and people incarcerated in privately-managed BOP institutions or Residential Reentry Centers were not covered by the BOP COVID-19 vaccine allocation and were not included in analyses. The number of externally employed staff working in BOP institutions was not available. COVID-19 vaccine allocation, distribution, and prioritization Beginning in September 2020, BOP worked directly with CDC and the Federal Government COVID-19 Vaccine and Therapeutics Operation, formerly known as Operation Warp Speed, to develop a COVID-19 vaccine prioritization and distribution plan. Through this process, it was determined that vaccination would first be offered to staff due to their ongoing potential to introduce the virus into a facility from the community, or to transmit it from the facility to the community through daily movements; vaccination would then be offered to incarcerated people. The BOP vaccine allocation has included all vaccine products with an Emergency Use Authorization (EUA) in the United States (i.e., Pfizer-BioNTech, Moderna, and Janssen/Johnson & Johnson COVID-19 vaccines). Vaccine distribution across BOP facilities is overseen by a Vaccine Allocation Group that includes BOP pharmacists, physicians, infection control nurses, and health services administrators. Collectively, this group determines which vaccine product each institution receives, the number of doses, and the schedule of delivery based on institutions' population characteristics, storage capabilities, and staff capacity to administer vaccine at any given time. The Janssen vaccine, the only available single-dose option, has been distributed primarily to BOP's Federal Detention Centers, where people are held temporarily before and during trial. Because the length of time an individual will spend in a Federal Detention Center is difficult to predict, using a single-dose vaccine increases the likelihood that people held in these settings will be fully vaccinated before they are released to the community or transferred to another institution after trial. BOP received its first vaccine shipment on December 16, 2020. The first institutions to receive vaccine doses were BOP's seven Federal Medical Centers, which house people who have high-acuity medical needs, many of which overlap with conditions associated with higher risk of severe COVID-19 illness [6].
Within each institution, vaccination was initially offered to all staff and subsequently to incarcerated people based on an assigned Vaccine Priority Level (Table 1). Vaccine Priority Level for incarcerated people was determined by their work assignment within the institution (with work assignments considered essential, such as food service, assigned a high Vaccine Priority Level), presence of underlying medical conditions associated with higher risk of severe COVID-19 illness, and assignment to a nursing care center within a Federal Medical Center [6]. Vaccine Priority Level was re-evaluated daily to reflect changes in eligibility. Date of vaccination eligibility for any given individual was dependent on the availability of vaccine within their assigned institution and, for incarcerated individuals, their Vaccine Priority Level. (See Supplemental Material for additional details.) Table 1 footnotes: 1 Includes BOP-managed institutions only; excludes Residential Reentry Centers and prisons managed by private entities. 2 High priority work assignments include food service, cleaning for health services units, and others designated critical infrastructure roles. Nursing care center residents are those with underlying medical conditions requiring high-level care in one of the seven BOP Federal Medical Centers. 3 CDC criteria for people with higher risk of severe illness from COVID-19 have changed over time. At the time when BOP defined its Vaccine Priority Groups, these included body mass index ≥30, cancer, chronic kidney disease, chronic obstructive pulmonary disease, history of solid organ or stem cell transplant, pregnancy, sickle cell disease, history of smoking, serious cardiac conditions, and type II diabetes. 4 CDC criteria for people with higher risk of severe illness from COVID-19 have changed over time. At the time BOP defined its Vaccine Priority Groups, these included moderate/severe asthma, body mass index >25 but <30, cardiovascular disease, cystic fibrosis, dementia, hypertension, immunocompromised state, liver disease, pulmonary fibrosis, thalassemia, and type I diabetes. 5 Eligibility for COVID-19 vaccination was re-evaluated daily using an algorithm applied to updated data. COVID-19 vaccine education, administration, and reporting COVID-19 vaccination was voluntary for both staff and incarcerated people. Vaccines were administered by existing BOP clinical staff, who were trained to educate staff and incarcerated people regarding vaccine products' safety, efficacy, and side effects, and to address individuals' questions about their vaccination decisions. Each institution and office determined how vaccination was offered based on characteristics including the security level of the institution, layout of housing units, and number of staff available to provide education about the vaccines. Institutions implemented a combination of vaccination clinics targeting entire housing units, vaccination offers to small groups, and one-to-one encounters. Staff vaccinations were documented in the BOP module of the CDC Vaccine Administration Management System (VAMS); however, staff vaccinations occurring outside of BOP (e.g., through staff members' healthcare providers, a health department, or community clinic) are not reflected in the BOP VAMS module and were not available to include in analyses. Vaccination of incarcerated people was documented in the BOP electronic medical record (BEMR) and reported to CDC daily.
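To make the daily priority re-evaluation described above concrete, the following sketch encodes one plausible reading of Table 1: essential work assignments and nursing care center residents first, then people with conditions linked to higher risk of severe COVID-19, then people with the other listed conditions, then everyone else. The numeric levels, field names and condition spellings are assumptions made for illustration; this is not BOP's actual algorithm.

```python
# Illustrative Vaccine Priority Level assignment, re-run daily against updated
# records. Level 1 is highest priority, level 4 lowest. The ordering and the
# condition lists paraphrase Table 1's footnotes; everything else is assumed.

HIGH_RISK = {"bmi>=30", "cancer", "chronic kidney disease", "copd",
             "organ/stem cell transplant", "pregnancy", "sickle cell disease",
             "smoking history", "serious cardiac condition", "type 2 diabetes"}
MODERATE_RISK = {"moderate/severe asthma", "bmi 25-30", "cardiovascular disease",
                 "cystic fibrosis", "dementia", "hypertension",
                 "immunocompromised", "liver disease", "pulmonary fibrosis",
                 "thalassemia", "type 1 diabetes"}
ESSENTIAL_WORK = {"food service", "health services cleaning"}

def vaccine_priority_level(work_assignment, conditions, nursing_care_center):
    """Return a priority level from 1 (highest) to 4 (lowest)."""
    if work_assignment in ESSENTIAL_WORK or nursing_care_center:
        return 1
    if conditions & HIGH_RISK:
        return 2
    if conditions & MODERATE_RISK:
        return 3
    return 4

print(vaccine_priority_level("food service", set(), False))          # 1
print(vaccine_priority_level("education", {"hypertension"}, False))  # 3
```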
When incarcerated people were offered vaccination, they were required to sign either a consent form or a declination form, which were also stored in BEMR. Staff and incarcerated people who originally declined vaccination could request it later at any time. Incarcerated people could access a copy of their vaccination records through a request to BOP Health Services staff at their assigned institution. At the time of this analysis, vaccination efforts within BOP were still ongoing; vaccination had been offered to all staff and will continue until it has been offered to all incarcerated people as well. After that point, institutions will continue to offer vaccination to new entrants, people who previously declined the vaccine, and newly-hired staff. COVID-19 vaccine distribution and administration The total number of COVID-19 vaccine doses distributed to BOP institutions from December 16, 2020–April 14, 2021, the median and range in the number of doses administered per day, and the percentage of total distributed doses administered during this time period were calculated. COVID-19 vaccination among incarcerated people Using data from BEMR, descriptive statistics were calculated characterizing BOP's overall incarcerated population, as well as the subset who were offered and accepted COVID-19 vaccination, in terms of demographic, health-related, and institution-level factors. Vaccination acceptance was defined as receiving the first dose of a two-dose COVID-19 vaccine series or receiving one dose of a single-dose COVID-19 vaccine. The number and percentage of people incarcerated in BOP institutions who were offered COVID-19 vaccination were calculated, as well as the number and percentage who a) received at least one dose of a COVID-19 vaccine and b) were fully vaccinated. Fully vaccinated was defined as having received both doses of a two-dose COVID-19 vaccine series or one dose of a single-dose COVID-19 vaccine, consistent with CDC definitions from the COVID Data Tracker website [9]. The number and percentage of incarcerated people who were offered vaccination a second time after an initial signed declination, and the number and percentage who accepted the second offer, were also calculated. Cross-sectional analysis using univariate logistic regression models was performed to identify individual-level factors (age, sex, race/ethnicity, country of birth, prior SARS-CoV-2 infection, number of medical conditions associated with severe COVID-19 illness) and institution-level factors (BOP-defined geographic region and institution type) associated with vaccination acceptance among incarcerated people to whom it was offered. Hierarchical Linear Modeling (HLM) was used to account for potential clustering at the institution level. People with ≥7 medical conditions associated with severe COVID-19 illness were aggregated into a single category due to small cell sizes. Unadjusted odds ratios (OR) and 95% confidence intervals (CI) are presented. Using the results of the univariate models, a multivariable logistic regression model was built to identify factors that are independently associated with vaccination acceptance among incarcerated people to whom it was offered, again using HLM to account for potential clustering at the institution level. All variables that were statistically significant in univariate models (OR estimates with 95% CI that do not include 1.0) were included in the multivariable model. Adjusted odds ratios (aOR) and 95% CI are presented.
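The published models were fit in SAS with hierarchical linear modeling. As an approximation only, the sketch below shows how a similar acceptance model could be expressed in Python, using an ordinary logistic regression with institution-clustered standard errors as a stand-in for the random-intercept structure; the data file and all column names are assumptions.

```python
# Approximate re-creation of the acceptance model: logistic regression of
# vaccination acceptance on individual- and institution-level factors, with
# standard errors clustered by institution as a proxy for the hierarchical
# model described in the text. Column names and the input file are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vaccine_offers.csv")  # hypothetical extract of offer records

model = smf.logit(
    "accepted ~ C(age_group) + C(sex) + C(race_ethnicity) + C(born_outside_us)"
    " + C(prior_infection) + n_conditions + C(region) + C(institution_type)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["institution_id"]})

# Odds ratios with 95% confidence intervals
ci = model.conf_int().rename(columns={0: "2.5%", 1: "97.5%"})
print(np.exp(model.params.to_frame("OR").join(ci)))
```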
People who had completed a COVID-19 vaccine series prior to incarceration in the BOP system were not included in vaccination acceptance models but were retained in descriptive analyses of vaccination coverage in the BOP population. COVID-19 vaccination among staff The number and percentage of BOP staff who a) received at least one dose of a COVID-19 vaccine and b) were fully vaccinated through BOP's workplace vaccination campaign were calculated, using the number of staff with vaccination documentation in the BOP VAMS module as the numerator and the total number of BOP staff (including those not represented in the BOP VAMS module) as the denominator. Comparison of COVID-19 vaccination coverage in BOP vs. the overall population, by state/territory For each of the 38 states and territories where BOP institutions and/or offices are located, vaccination coverage among BOP staff and incarcerated people (combined) was calculated and compared to vaccination coverage in the overall adult population as of April 14, 2021. Data on vaccination coverage for overall state/territorial adult populations were downloaded from the CDC COVID Data Tracker website, which includes all entities that submit COVID-19 vaccination data to CDC, including BOP; BOP-affiliated records could not be separated from the overall adult state/territorial populations [9]. All analyses were performed using SAS Enterprise Guide 8.3 (Cary, North Carolina, US). Human subjects determination This activity was reviewed by CDC and was conducted consistent with applicable federal law and CDC policy. CDC determined that its involvement in this analytic project did not constitute engagement in research involving human subjects; CDC IRB review was not required. This project employs secondary analysis of preexisting data collected by BOP for routine clinical and operational purposes; BOP IRB review was not required. Demographic characteristics and COVID-19 vaccination coverage among incarcerated people Demographic and institution-level characteristics of people incarcerated in BOP institutions are presented in Table 2. The BOP incarcerated population was predominantly male (93.6%). Table 2 footnotes (partially recovered): Prison Camps are minimum security institutions with limited or no perimeter fencing. These institutions are work- and program-oriented; incarcerated people assigned to camps may have work placements off-site or within adjacent BOP institutions. 4 Federal Detention Centers (FDC) hold incarcerated people during trial. Length of stay in an FDC may range from several days to a year or more, depending on the length of the trial. 5 Federal Medical Centers house incarcerated people with high-acuity medical needs and include nursing care centers. Table 3 presents the results of unadjusted and adjusted logistic regression analyses, identifying factors associated with COVID-19 vaccination acceptance among incarcerated people to whom it was offered. In adjusted analysis, the multivariable model included age, sex, race/ethnicity, country of birth, prior SARS-CoV-2 infection, and the other covariates that were significant in univariate analysis. Table 4 compares COVID-19 vaccination coverage in BOP (staff and incarcerated people combined) versus the overall adult population in each state/territory that contains a BOP institution and/or office. Across these states/territories, 14.0-65.9% (median: 46.0%) of BOP staff and incarcerated populations had received at least one vaccine dose. Table 4 footnotes: 1 Includes BOP-managed institutions and administrative offices only; excludes Residential Reentry Centers and institutions managed by private entities.
2 State/territory-level vaccination coverage for the overall adult population was downloaded from the CDC COVID Data Tracker website and may include some BOP staff and/or people incarcerated in BOP institutions in that state/territory. 3 State/territory-level vaccination coverage for the BOP population includes both staff and people incarcerated in BOP institutions in a given state or territory. 4 BOP operates one institution in Puerto Rico. Because this institution is a Federal Detention Center, it was allocated Janssen/Johnson & Johnson vaccine for incarcerated people, who are assigned to the institution for an uncertain period of time and may not be present long enough to receive a 2-dose vaccine series. The first shipment of vaccine intended for incarcerated people was delayed en route; by the time it had been delivered, the federal government had paused administration of Janssen vaccine. Discussion Within four months, the Federal Bureau of Prisons offered COVID-19 vaccination to 100% of staff and 69.8% of incarcerated people, with acceptance rates of 50.2% and 64.2%, respectively, and administered 95.9% of distributed vaccine doses. At the time of this analysis, COVID-19 vaccination coverage in BOP was comparable to coverage in the overall US adult population; 50.2% of BOP staff and 44.8% of incarcerated people had received at least one vaccine dose, and 47.2% and 29.9% had been fully vaccinated, respectively, compared with 47.0% of adults in the United States overall who had received at least one dose and 29.1% who had been fully vaccinated during the same time period [9]. In most states and territories that include a BOP institution and/or office, vaccination coverage in the BOP staff and incarcerated population was comparable to coverage in the overall adult population as well. The COVID-19 vaccination acceptance rate among people incarcerated in BOP institutions is similar to the rate reported by the California state prison system (66.5%), and higher than an estimate of vaccination intent measured among people incarcerated across 16 prisons and jails in four states (45%) during a similar time period [10][11]. In addition, factors associated with vaccination acceptance in BOP are consistent with factors identified in these two studies; specifically, vaccination intent and acceptance in these correctional populations are consistently lower among non-Hispanic Black participants, younger adults, and women, and higher among non-Hispanic White participants [10][11]. Similar demographic trends have emerged from general population surveys of vaccination intent as well [12][13]. Potential contributors to the progress of the BOP COVID-19 vaccination program could include early access to vaccine through a dedicated federal allocation, early and ongoing consultation with CDC and the Federal Government COVID-19 Vaccine and Therapeutics Operation to prepare an implementable vaccination plan, full support from senior BOP leadership, close coordination between the BOP Vaccine Allocation Group and individual BOP institutions, and education provided to institution administrators and healthcare staff administering the vaccine as well as to staff and incarcerated people offered vaccination. The relative absence of patient-level barriers to vaccination in BOP compared with the general population could also contribute to BOP's level of vaccination coverage.
For example, most people in the general population who wished to be vaccinated during this time period needed to actively monitor changing state vaccination eligibility requirements, navigate multiple healthcare provider websites to secure an appointment, and travel to the appointment, sometimes over long distances. By contrast, BOP staff and incarcerated people received an in-person, verbal vaccination offer from a healthcare provider, and vaccination occurred onsite. The availability of one-on-one education about the vaccine at the time of the vaccination offer may have improved vaccine confidence among those who intended to decline, and some people may have been encouraged to accept vaccination because they could see their peers being vaccinated. The availability of additional vaccination opportunities after the initial offer and the requirement for incarcerated people to sign a declination form if they chose not to be vaccinated may also have contributed to acceptance rates. Even with these early successes, there are opportunities to further increase vaccination coverage in BOP. Potential strategies fall into two categories, universal and tailored. One possible universal strategy is to continue offering vaccination to all incarcerated people who have declined; this analysis has demonstrated that approximately half of those with documentation of a second vaccination offer accepted it, without any further intervention designed to increase acceptance beyond the offer itself. Possible tailored strategies to increase coverage include designing vaccination education messages for people in specific subgroups that have thus far been less likely to accept vaccination, specifically non-Hispanic Black and Asian people, women, and younger, healthier people who may perceive themselves as having low risk for SARS-CoV-2 infection or severe COVID-19 illness. In addition, peer education programs where incarcerated people are trained to serve as educators have been successful in promoting participation in a variety of prevention, testing, and treatment programs for diseases such as hepatitis C and could be adapted for COVID-19 vaccination [14][15]. Tailored strategies may be especially important to promote vaccination in groups disproportionately affected by the COVID-19 pandemic. For example, vaccination messaging tailored toward Black incarcerated people could address reasons for vaccine hesitancy known to be common in some Black communities and could be delivered by Black healthcare providers or peer educators [16]. Because Black people are both disproportionately represented in incarcerated populations and less likely to accept COVID-19 vaccination (in both correctional settings and the community), increasing vaccination coverage among Black incarcerated people can contribute to health equity. One novel finding in this report is a higher vaccination acceptance rate among incarcerated people assigned to Federal Detention Centers (used to hold people before and during trial) compared with other types of institutions. The only major difference in vaccination operations in Federal Detention Centers is their use of single-dose vaccines, chosen because of the unpredictable length of time that people remain in these facilities. Additional study could investigate whether higher acceptance rates in Federal Detention Centers reflect a preference for single-dose vaccines, and whether offering this option in other types of institutions could increase acceptance rates. 
True vaccination coverage among BOP staff may be higher than what is reported here, because staff who have been vaccinated outside the BOP system are not included in these analyses. Even so, staff vaccination coverage could be further increased as well. To promote staff vaccination, BOP can consider similar universal and tailored strategies, re-offering vaccination to staff who have declined, using peer-to-peer education, and engaging trusted messengers to deliver information tailored to subsets of the staff population. Based on anecdotal reports, BOP institutions with higher vaccination coverage among staff have had internal vaccine ''champions" who are known to be influencers in their peer groups, as well as strong support from their Wardens. Best practices have included setting up vaccination stations in high-traffic areas that are easily accessible to staff, scheduling one-on-one appointments with all staff members to answer questions about COVID-19 vaccines and to administer the vaccine to those who want it, and engaging a diverse group of staff to promote vaccination in their circles of influence. At the time of this analysis, BOP policies regarding social distancing, masking, participation in group activities, SARS-CoV-2 diagnostic and screening testing, and movement between facilities did not differ based on individuals' vaccination status, in accordance with CDC guidance for correctional and detention facilities available at that time [4,5]. It is possible that the absence of a difference in the COVID-19-related restrictions placed on vaccinated versus unvaccinated incarcerated people has impeded vaccination efforts. CDC guidance for correctional and detention facilities released in June 2021 allows COVID-19 prevention measures to be relaxed for fully vaccinated people (e.g., vaccinated people may resume in-person visitation and group programming with other fully vaccinated people), and these changes may help to increase vaccination demand [17]. These analyses are subject to at least five limitations. First, staff who were vaccinated outside the BOP system are not represented in the BOP VAMS module, resulting in an underestimate of vaccination coverage among directly-employed BOP staff. Second, the BOP COVID-19 vaccine allocation did not include contracted staff, further limiting the ability to estimate overall vaccination coverage among all staff working in BOP institutions. Third, demographic data characterizing staff overall versus the subset of staff who were vaccinated were collected using different variable categories or were missing for a large percentage of staff records, preventing a comparison between the two groups. Fourth, state/territorial vaccination coverage data for the overall adult population downloaded from the CDC COVID Data Tracker website include some data submitted by BOP. Although BOP data could not be removed from state/territorial totals, the BOP population represents too small a fraction of any state or territory's overall population to have a meaningful impact on the results of this analysis. Fifth, the percentage of incarcerated people who were offered vaccination a second time included only documented instances of second offers, typically made on an individual basis; non-targeted second offers made to groups of people during town hall announcements or on electronic message boards were not included. 
Conclusions High COVID-19 vaccination coverage is critical for BOP to protect staff and incarcerated people during the COVID-19 pandemic and to return to pre-pandemic operations. As of the date these analyses were conducted, vaccination had been offered to all staff and to approximately two-thirds of incarcerated people, and BOP had reached levels of vaccination coverage comparable to those in the overall United States adult population and in other correctional settings. To continue to increase vaccination coverage, BOP can provide additional vaccination opportunities to people who initially declined and can consider tailoring communication strategies to groups with lower acceptance rates. Funding Source This work did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Subcellular localization of MC4R with ADCY3 at neuronal primary cilia underlies a common pathway for genetic predisposition to obesity Most monogenic cases of obesity in humans have been linked to mutations in genes encoding members of the leptin-melanocortin pathway. Specifically, mutations in MC4R, the melanocortin-4 receptor gene, account for 3-5% of all severe obesity cases in humans1-3. Recently, ADCY3 (adenylyl cyclase 3) gene mutations have been implicated in obesity4,5. ADCY3 localizes to the primary cilia of neurons 6 , organelles that function as hubs for select signaling pathways. Mutations that disrupt the functions of primary cilia cause ciliopathies, rare recessive pleiotropic diseases in which obesity is a cardinal manifestation 7 . We demonstrate that MC4R colocalizes with ADCY3 at the primary cilia of a subset of hypothalamic neurons, that obesity-associated MC4R mutations impair ciliary localization and that inhibition of adenylyl cyclase signaling at the primary cilia of these neurons increases body weight. These data suggest that impaired signaling from the primary cilia of MC4R neurons is a common pathway underlying genetic causes of obesity in humans. The majority of mammalian cells, including neurons, possess a single, immotile primary cilium, an organelle that transduces select signals 7,8 . Defects in the genesis or function of primary cilia cause a range of overlapping human diseases, collectively termed ciliopathies 7,9,10 . Several ciliopathies, such as Bardet-Biedl syndrome and Alström syndrome, cause obesity 11 , and mutations in genes encoding ciliary proteins, such as CEP19 and ANKRD26, cause non-syndromic obesity in mice and humans 12,13 . While the mechanisms underlying a number of ciliopathy-associated phenotypes, such as polycystic kidney disease or retinal degeneration, have been at least partly elucidated, how ciliary dysfunction leads to obesity remains poorly understood 7,11 . Ubiquitous ablation of the primary cilia of neurons in adult mice causes an increase in food intake and obesity, suggesting that ciliopathy-associated obesity involves the post-developmental disruption of anorexigenic neuronal signals 14 . Recently, genetic and epigenetic studies have suggested a role for ADCY3 variations in human obesity 4,15 and loss of function mutations in Adcy3 in mice leads to a severe obesity phenotype 5 . ADCY3, a member of the adenylyl cyclase family that mediate Gs signaling from G-Protein Coupled Receptors (GPCRs), is specifically expressed at the primary cilia of neurons 6 . The melanocortin 4 Receptor (MC4R) is a Gs-coupled GPCR that transduces anorexigenic signals in the long-term regulation of energy homeostasis 16 . Heterozygous mutations in MC4R are the most common monogenic cause of severe obesity in humans and individuals with homozygous null mutations display severe, early-onset obesity [1][2][3] . Similar to humans, deletion of Mc4r in mice causes severe obesity 17 . MC4R is a central component of the melanocortin system, a hypothalamic network of neurons that integrates information about peripheral energy stores and that regulates food intake and energy expenditure 18 . Despite being a major target for the pharmacotherapy of obesity, nothing is known about the subcellular localization of MC4R. When expressed in un-ciliated heterologous cells, MC4R traffics to the cell membrane 2 . 
However, in ciliated cells such as mouse embryonic fibroblasts (MEFs), Retinal Pigment Epithelium (RPE) cells and Inner Medullary Collecting Duct (IMCD3) cells, we find that a previously well-characterized, functional, C-terminally GFP-tagged MC4R (MC4R-GFP) 19 localizes to primary cilia (Fig. 1A). In a quantitative assay developed in IMCD3 cells, the ciliary enrichment of MC4R was comparable to that of Smoothened (SMO), a known cilium-enriched protein 20,21 , and was the strongest among members of the melanocortin receptor family (Fig. 1B). We set out to determine if, and to what extent, MC4R localizes to primary cilia in vivo in mice. Most of the anorexigenic activity of MC4R is due to its function in a subset of Single Minded 1 (SIM1)-expressing neurons of the paraventricular nucleus of the hypothalamus (PVN) 22 , and all MC4R-expressing neurons in the PVN express SIM1 23 . Using a transgenic mouse line in which GFP is expressed in all SIM1-expressing neurons, we first investigated whether SIM1-expressing PVN neurons are ciliated. We find that an Adenylate Cyclase 3 (ADCY3)-positive primary cilium is present at a majority of SIM1-expressing neurons of the PVN (Supplementary Fig. 1). Previous attempts to determine the subcellular localization of MC4R in vivo in mice have been unsuccessful due to the small number of neurons in which it is expressed, its low abundance and the lack of tractable antibodies. Using Cas9-mediated recombination in mouse zygotes, we inserted a GFP tag in frame at the C-terminus of the endogenous mouse Mc4r locus (Fig. 2A). MC4R-GFP/+ knock-in mice do not have an obvious energy metabolism phenotype and are fertile, suggesting that the C-terminal GFP does not significantly impair the trafficking or function of MC4R in these mice. Confocal imaging of the PVN of these mice demonstrates that MC4R co-localizes with ADCY3 to the primary cilia of a subset of PVN neurons in vivo (Fig. 2, B-I). If MC4R localization to the primary cilia is essential for its function, then human obesity-causing mutations in MC4R may impair its function by compromising its ciliary localization. Heterozygous MC4R mutations are the most common genetic cause of severe childhood obesity 1 . Over fifty different obesity-associated mutations in MC4R have been described 24 (Supplementary Fig. 2). Functional assessment of the effects of these mutations in non-ciliated cells has revealed that many of these mutations disrupt trafficking of the receptor to the membrane or impair ligand activation (Supplementary Fig. 2). In non-ciliated HEK293 cells, eight obesity-associated MC4R mutant proteins (including p.(Ile130Thr)) traffic normally to the cell membrane and respond normally to α-MSH activation 2,24-27 . To determine whether any of these mutations alter ciliary localization of MC4R, we quantified their ciliary enrichment in IMCD3 cells (Fig. 3A). We found that P230L and R236C significantly decreased MC4R ciliary localization. Interestingly, these two mutations are located in the third intracellular domain of MC4R (Supplementary Fig. 2), a domain previously implicated in ciliary localization of other GPCRs 28 . To further determine whether the P230L mutation alters ciliary localization in vivo, we injected AAVs that express MC4R-P230L-GFP and MC4R-GFP in a Cre-dependent fashion into Sim1-Cre transgenic mice (Fig. 3B-D). The human wild-type MC4R-GFP localized to primary cilia of Sim1-expressing PVN neurons (Fig. 3E-H), confirming that the human receptor also traffics to the cilia in vivo.
In contrast, MC4R-P230L-GFP failed to co-localize with ADCY3 to primary cilia (Fig. 3I-L). Together, these results suggest that MC4R mutations may cause human obesity by altering the ciliary localization of the receptor. If MC4R and ADCY3 function at the primary cilia to regulate body weight, we predicted that specific inhibition of adenylyl cyclase at the primary cilia of MC4R-expressing neurons should be sufficient to cause obesity. Specific inhibition of adenylyl cyclase at the primary cilia of neurons can be achieved by expression of a constitutively active version of the cilia-specific Gi-protein coupled receptor GPR88 (GPR88 p.(Gly283His), or GPR88*) 29 . GPR88* was delivered to Sim1-expressing neurons of the PVN using the same approach used for the hMC4R-GFP DIO-AAV (Fig. 3), but using a high level of virus delivered at the midline to ensure large coverage of PVN neurons (Supplementary Fig. 3). As visualization of ciliary expression requires confocal imaging, a DIO-AAV expressing mCherry was co-injected with the DIO-AAV expressing a Flag-tagged version of GPR88* to facilitate verification of the accuracy of the targeting and the coverage of the PVN at the end of the experiment in each mouse (Supplementary Fig. 3). Weight-paired littermate mice injected only with the mCherry DIO-AAV were used as controls. Following AAV injections, mice in which Flag-GPR88* was expressed at the primary cilia of Sim1-expressing PVN neurons increased their food intake and gained significant weight compared to controls (Figure 4), demonstrating that adenylyl cyclase signaling at the primary cilia of these neurons is essential for the regulation of body weight. Combined, our data suggest that impaired signaling from the primary cilia of MC4R-expressing neurons is a common pathway for syndromic and non-syndromic causes of monogenic obesity in humans. Our data do not indicate, however, that the primary cilium is necessary for Gs coupling and ADCY activation by MC4R, since these occur in non-ciliated cells. Rather, our data suggest that this signaling has to occur at the primary cilia, since impairing localization of MC4R at the primary cilia or inhibiting ADCY at the primary cilia impairs regulation of body weight. This functional link between MC4R and ciliopathy-associated obesity parallels findings underlying other human ciliopathy-associated phenotypes. For example, syndromic and non-syndromic polycystic kidney disease is linked to impaired function of Polycystin 1 and 2, proteins expressed at the primary cilia of renal tubular cells, while impaired function of Rhodopsin (RHO) in the outer segment of retinal photoreceptor cells, a specialized primary cilium, is a common pathway for both common and ciliopathy-associated retinal phenotypes 7 . Our findings also provide important new insights into the sub-cellular basis underlying the relationship between short-term regulation of food intake and long-term regulation of energy homeostasis. PVN MC4R-expressing neurons are part of a neuronal circuitry implicated in the short-term control of feeding, as they receive synaptic GABAergic and glutamatergic inputs, in particular from the arcuate nucleus of the hypothalamus 30 . MC4R itself, however, controls long-term energy homeostasis, as evidenced by the phenotype of MC4R-deficient mice and humans, and both MC4R ligands have a slower effect on food intake regulation.
In strong support of this model, a recent report has elegantly established that PVN MC4R-expressing neurons receive fast-acting, food intake-regulating synaptic inputs from the ARC that are post-synaptically modulated by MC4R through its slower-acting neuropeptide ligands αMSH and AGRP 30 . Our finding that MC4R localizes to the primary cilia of MC4R PVN neurons provides a sub-cellular compartmentalization for the slower signaling by the endogenous MC4R ligands, allowing for an independent control of long-term energy homeostasis in neurons also implicated in the short-term regulation of food intake. Methods Microscopes-Imaging of transfected immortalized cells was performed on a Zeiss Upright Axioscope 2 Plus Fluorescence Microscope and/or on a Leica SL, a Leica SP5 or a Zeiss LSM 780 confocal microscope. Quantification of ciliary localization in cultured cells-A 3-plane Z-stack of transiently transfected IMCD3 cells was acquired on an Olympus IX-70 microscope, and the best focus of the average was recorded using MetaMorph software (Molecular Devices, Sunnyvale, CA). Relative ciliary enrichment was calculated using Matlab software as the ratio between the green-channel pixel intensity of GFP-chimera expression at the primary cilium and the pixel intensity of the cell body, wherein the cilium was defined by acetylated tubulin staining recorded in the red channel. In vivo studies in mice All animal procedures were approved by the Institutional Animal Care and Use Committee of the University of California, San Francisco. Zygote injection and implantation were performed at the transgenic core of the Gladstone Institute. Generation of Mc4r-GFP knock-in mice-Super-ovulated female FVB/N mice (4 weeks old) were mated to FVB/N stud males. Fertilized zygotes were collected from oviducts and injected in the pronucleus with (1) Cas9 protein (50 ng/µl), (2) a donor vector (20 ng/µl) consisting of 1 kb of 5′ flanking sequence (i.e., the MC4R coding sequence) followed by GFP (cloned in frame) and 5.5 kb of 3′ flanking sequence, and (3) a sgRNA (25 ng/µl) whose guide sequence (see supplementary table) was designed to target nucleotides immediately downstream of the MC4R stop codon, in a short region that was not present in the donor vector. Injected zygotes were implanted into oviducts of pseudopregnant CD1 female mice. Pups were genotyped for insertion at the correct locus by PCR. Tissue-specific expression of Mc4r-GFP was verified by qPCR. Imaging experiments were done in F2-F5 mice from two different founders. Origin of the other mouse lines used-Mice expressing Cre under the control of the Sim1 promoter, Tg(Sim1-cre)1Lowl, were obtained from Jackson Laboratories (Bar Harbor, ME). Sim1-GFP mice, Tg(Sim1-EGFP)AX55Gsat, were obtained from the Mutant Mouse Regional Resource Center (Davis, CA). Mice were housed in a barrier facility and maintained on a 12:12 light cycle (on: 0700-1900) at an ambient temperature of 23±2°C and relative humidity of 50-70%. Mice were fed rodent diet 5058 (Lab Diet) and group-housed up to 5 per cage. Experiments were performed with weight-matched littermates. Weight was measured for 2 months, after which mice were sacrificed to confirm the site of injection. Mice with missed injections were excluded prior to data analysis.
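The ciliary-enrichment ratio defined under "Quantification of ciliary localization in cultured cells" above can be expressed compactly. The sketch below is a minimal Python re-expression of that ratio, not the MetaMorph/Matlab pipeline actually used, and the thresholding choices and file names are assumptions.

```python
# Minimal sketch of the ciliary-enrichment ratio described above: mean GFP
# (green channel) intensity within the cilium, defined by thresholding the
# acetylated-tubulin (red channel) signal, divided by mean GFP intensity over
# the remainder of the cell mask. Otsu thresholding and file names are
# illustrative stand-ins for the published analysis.
import numpy as np
from skimage import io, filters

green = io.imread("cell_gfp.tif").astype(float)   # MC4R-GFP channel
red = io.imread("cell_actub.tif").astype(float)   # acetylated tubulin channel

cilium_mask = red > filters.threshold_otsu(red)    # cilium from the red channel
cell_mask = green > filters.threshold_otsu(green)  # crude whole-cell mask
body_mask = cell_mask & ~cilium_mask

enrichment = green[cilium_mask].mean() / green[body_mask].mean()
print(f"ciliary enrichment ratio: {enrichment:.2f}")
```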
mCherry expression was assessed in all mice by widefield microscopy to verify the accuracy and extent of AAV infection and GPR88* expression, since in mice injected with both the mCherry DIO-AAV and the Flag-GPR88* DIO-AAV, infected neurons carried both viruses (Supplementary Fig. 3). For the experiments in Figure 4 (and Supplementary Fig. 3), mice were single-housed after AAV injections. Weight was measured for 2 months. Food intake was measured by CLAMS (Columbus Instruments, Columbus, OH) at baseline and 6 weeks after AAV injections. Mice were tested over 96 continuous hours, and the data from the middle 48 hours were analyzed. Image capture and processing-Widefield images were generated using a Zeiss ApoTome microscope. Confocal images were generated using a Zeiss LSM 780 confocal microscope. In confocal images, MC4R-GFP was labeled with Alexa 488, and the neuronal cilia marker Adcy3 was labeled with Alexa 633. For Alexa 488, the detector range was set from 490-534 nm. For Alexa 633, the detector range was set from 600-750 nm. Images were processed with Fiji. Maximal intensity Z projections are from at least 20 slices over 10 μm. In confocal images, GPR88-FLAG was labeled with Alexa 488, and the neuronal cilia marker Adcy3 was labeled with Alexa 633. For Alexa 488, the detector range was set from 490-534 nm. For Alexa 633, the detector range was set from 600-750 nm. mCherry was detected by direct fluorescence. Images were processed with Fiji. Maximal intensity Z projections are from at least 40 slices over 20 μm. Statistics-Sample sizes were chosen based upon the estimated effect size drawn from previous publications 7 and from the performed experiments. Data distributions were assumed to be normal, but this was not formally tested. All tests used are indicated in the figures. We analyzed all data using Prism 7.0 (GraphPad Software). Supplementary Material Refer to Web version on PubMed Central for supplementary material.
Deciphering the Duality of Clock and Growth Metabolism in a Cell Autonomous System Using NMR Profiling of the Secretome Oscillations in circadian metabolism are crucial to the well-being of organisms. Our understanding of metabolic rhythms has been greatly enhanced by recent advances in high-throughput systems biology experimental techniques and data analysis. In an in vitro setting, metabolite rhythms can be measured by time-dependent sampling over an experimental period spanning one or more days at sufficient resolution to elucidate rhythms. We hypothesized that cellular metabolic effects over such a time course would be influenced by both oscillatory and circadian-independent cell metabolic effects. Here we use nuclear magnetic resonance (NMR) spectroscopy-based metabolic profiling of mammalian cell culture media of synchronized U2 OS cells containing an intact transcriptional clock. The experiment was conducted over 48 h, typical for circadian biology studies, and samples were collected at 2 h resolution to unravel such non-oscillatory effects. Our data suggest that specific metabolic activities exist that change continuously over time in this setting, and we demonstrate that the non-oscillatory effects are generally monotonic and possible to model with multivariate regression. Deconvolution of such non-circadian persistent changes is of paramount importance to consider while studying circadian metabolic oscillations. Introduction The physiology and behavior of most complex organisms is regulated in a circadian manner. Chronic disruption of this rhythm can lead to a cascade of maladaptive effects including cognitive and psychiatric dysfunction, metabolic disorders and cancer [1][2][3][4][5][6][7]. Transcriptionally, cell-autonomous circadian rhythms are generated via core clock genes. Briefly, BMAL1 and CLOCK are transcriptional activators that regulate the transcriptional repressors Cryptochrome (CRY1 and CRY2) and Period (PER1 and PER2). PER and CRY proteins, on the other hand, inhibit the BMAL1/CLOCK function post-translationally, and this activation-inhibition loop in turn generates the oscillatory rhythms in gene expression [8]. In addition to transcriptional oscillation, cellular metabolism also shows an inherent oscillatory rhythm. The human osteosarcoma cell line, U2 OS, is used as a convenient model system as the core clock genes are well-characterized and constitute a cell-autonomous molecular oscillator [9]. The study of biological rhythms in metabolites is a recent development in this regard [10] and is indicative of potentially important phenomena. For example, recent results from our group showed that ectopic MYC oncogene expression significantly affects oscillation in glucose metabolism and glutaminolysis [11]. While studies of such rhythms are increasingly important, the metabolic context of general metabolite changes and metabolic status over the sampling time required for high time-resolution circadian rhythm studies has not been described. In other words, the media of cell culture growth may contain elements of both circadian metabolic phenomena and general metabolite changes due to cell growth and changes in cell number. Under ideal conditions, circadian measurements of cell autonomous systems would require the metabolic health of the cells under study to remain uniform throughout the course of the experiment, such that only oscillations due to clock-dependent changes are observed.
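As a purely illustrative picture of the superposition just described, a clock-driven oscillation riding on a growth-related drift in the media, the toy trace below is generated from invented parameters; it is only meant to make the two components explicit, not to reproduce any measured metabolite.

```python
# Toy illustration of the two temporal processes discussed above: a linear,
# growth-related drift in a media metabolite plus a circadian oscillation,
# sampled every 2 h over 48 h. Amplitude, slope and noise level are invented
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 48, 2.0)                       # sampling times (h)
drift = 5.0 - 0.05 * t                          # monotonic nutrient depletion
clock = 0.3 * np.cos(2 * np.pi * t / 24.0)      # 24 h oscillatory component
trace = drift + clock + rng.normal(0, 0.05, t.size)

print(np.round(trace, 2))
```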
In practice, we hypothesize that in the common setup for cell culture measurements of circadian characteristics, two temporal metabolic factors are in play: first, the innate circadian rhythm of cellular metabolism (the Circadian process); and second, a continuous, approximately linear change of cellular metabolism over the two days of collection, dictated by the properties of the secretome, i.e., nutrient availability and waste metabolite build-up (the Metabolic process). We investigated these effects by profiling the metabolites of the cell culture media using nuclear magnetic resonance (NMR) spectroscopy. NMR, although relatively insensitive, offers a robust and quantitative approach to study global metabolism [12]. Metabolomics using NMR was enhanced by the targeted profiling approach [13], which takes advantage of spectral fitting against a library of compound spectra and an internal standard to obtain absolute concentrations of metabolites in the sample. Thus, NMR-based metabolomics offers a quantitative window to study the changes in temporal cellular metabolism in the context of a cell autonomous oscillator. Our results indicate continuous change in the metabolite composition of the media over time resulting from cellular metabolic changes. These results suggest that analysis of oscillatory metabolic rhythms must be corrected for linear growth/secretion effects. Temporal Change of the Global Secretome Composition in a Cell-Autonomous Model System Synchronized U2 OS cells were cultured and the media sampled every two hours over 48 h, followed by NMR spectroscopic analysis. Twenty-six metabolites from the media were profiled (Figure 1) and interrogated by multivariate analysis. Principal component analysis (PCA) indicated significant time-dependent variation in the metabolic profile of the media. Scores from the first PC were evidently associated with the sample collection timepoint (Supporting Information S1). The variation along PC1 described the continuous change in the media profile, while PC2 showed an oscillatory pattern with an approximate period of 28 h (estimated by eye; no statistical analysis was performed). The scores plot from the PCA of the cultured media samples over 48 h showed two distinct groups divided approximately by circadian days. Interestingly, a shift from Day 1 to Day 2 was evident around 18-22 h of sampling time (Figure 2A). Further orthogonal partial least squares discriminant analysis (OPLS-DA) modeling was performed using the pre-shift (2 h-18 h) and post-shift (20 h-48 h) samples. The model was robust (CV-ANOVA p = 0.0004, Q²(cum) = 0.66) and the scores plot indicated a distinct clustering of the pre- and post-shift samples (Figure 2B). Interestingly, the pre- and post-shift samples were well separated with the exception of the sample collected at 32 h. Metabolites that were significantly different between the two classes were selected based on the multivariate variable importance on projection (VIP > 1.0) and are listed in Table 1. We observed increases in glutamate, methylguanidine, alanine, acetate, formate, lactate and glycine. A concomitant decrease was observed in methionine, serine, glutamine, glucose, threonine, cis-aconitate, and choline. Interesting trends include opposite temporal behavior of the glutamate-glutamine and glucose-lactate pairs. Generally, a temporal increase in metabolic excretory products such as lactate, methylguanidine, acetate and formate was observed, while several metabolic precursors including amino acids and glucose decreased over time.
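To make the two-process hypothesis concrete, the sketch below shows one way such a decomposition could be set up: a single metabolite time course sampled every 2 h over 48 h is fitted by ordinary least squares with a linear drift term (the Metabolic process) plus a cosinor term with an assumed 24 h period (the Circadian process). The synthetic data, the fixed period and all variable names are illustrative assumptions, not values or methods taken from this study.

```python
import numpy as np

# Hypothetical metabolite time course: one sample every 2 h over 48 h.
t = np.arange(2, 50, 2.0)                      # sampling times (h), illustrative
rng = np.random.default_rng(0)
conc = 5.0 + 0.08 * t + 0.4 * np.cos(2 * np.pi * t / 24.0) + rng.normal(0, 0.1, t.size)

# Design matrix: intercept + linear drift ("metabolic process")
# + cosinor terms with an assumed 24 h period ("circadian process").
period = 24.0
X = np.column_stack([
    np.ones_like(t),
    t,
    np.cos(2 * np.pi * t / period),
    np.sin(2 * np.pi * t / period),
])
beta, *_ = np.linalg.lstsq(X, conc, rcond=None)

linear_part = X[:, :2] @ beta[:2]              # monotonic trend to subtract out
oscillatory_part = X[:, 2:] @ beta[2:]         # residual rhythm of interest
amplitude = np.hypot(beta[2], beta[3])
print(f"drift slope = {beta[1]:.3f} per h, cosinor amplitude = {amplitude:.3f}")
```

Removing the fitted linear part before any rhythm analysis is the kind of correction for growth/secretion effects argued for above.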
The temporal analysis was done using two different multivariate models: OPLS-DA (for pre- vs. post-shift variation) and OPLS (for correlation of metabolites with time). In both cases, metabolites with VIP (variable importance on projection) > 1.0 were judged significant. In Figure 2, negative loadings in the left panel indicate an increased level before the shift time, while in the right panel a negative loading indicates a linear decrease in level with time. Linear Temporal Changes in the Secretome We reasoned that there would be linear time-dependent changes in media composition as a function of both nutrient depletion and metabolite excretion. To test this hypothesis, metabolites that changed linearly over the time course of sampling were identified using multivariate OPLS modeling with the time of collection as the Y-variable. A robust (CV-ANOVA p = 2.66 × 10⁻⁶, Q²(cum) = 0.81) and strongly predictive OPLS model was generated (Figure 2C). The metabolites (VIP > 1.0) changing uniformly with time are listed in Table 1. Univariate analysis showed 14 significantly (p < 0.05, FDR < 0.1) altered metabolites after the 22 h timepoint. These included glutamate, acetate, alanine, formate, lactate, glycine and methylguanidine (all elevated post 22 h), and glutamine, serine, glucose, threonine, choline, cis-aconitate, and myo-inositol (all decreased post 22 h). Relative levels of these metabolites are shown in Figure 3; p-values and FDRs of individual metabolites are listed in Table 2. Metabolites significantly changing between pre- and post-shift time points in the U2 OS cells and media were identified using permutation-based unpaired t-tests. Specific Temporal Metabolic Changes in the Secretome Five metabolites from the media samples showed a particularly strong linear correlation with time: acetate, alanine, choline, lactate and methylguanidine. Interestingly, with the exception of choline, these metabolites are end products of metabolic pathways.
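As a rough open-source counterpart to the OPLS-with-time modelling and VIP selection described above (the original analysis used SIMCA, and scikit-learn has no OPLS implementation), the sketch below fits a plain PLS regression with collection time as the Y-variable and computes VIP scores with a commonly used formula, flagging variables with VIP > 1.0. The data shapes and names are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance on projection for a fitted PLSRegression model."""
    t = pls.x_scores_            # (n_samples, n_components)
    w = pls.x_weights_           # (n_features, n_components)
    q = pls.y_loadings_          # (n_targets, n_components)
    p = w.shape[0]
    # Y-variance explained by each latent component.
    ssy = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

# Hypothetical data: 24 timepoints x 26 profiled metabolite concentrations.
rng = np.random.default_rng(1)
X = rng.normal(size=(24, 26))
time_h = np.arange(2, 50, 2.0)                    # Y-variable: collection time (h)

pls = PLSRegression(n_components=2, scale=True)   # scale=True gives each variable equal weight
pls.fit(X, time_h)
vip = vip_scores(pls)
time_associated = np.where(vip > 1.0)[0]
print("metabolite indices with VIP > 1.0:", time_associated.tolist())
```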
These metabolites were increased linearly over time (r: lactate = 0.88, acetate = 0.87, alanine = 0.88, formate = 0.90, methylguanidine = 0.74; p < 0.01 for all), while choline showed linear depletion in the media (r = −0.77, p < 0.01). To further understand the changes in the metabolites in the media, the total media metabolite profile was subjected to hierarchical cluster analysis (Figure 4). The metabolites were segregated into five clusters. The major clusters include: (1) metabolites of branched-chain amino acids, choline and cis-aconitate; (2) amino acids and glucose; (3) malonate, methionine, myo-inositol and phenylalanine; (4) end products of cellular metabolism; and (5) pyroglutamate, pyruvate, glycine and glycerol. The metabolites in these five clusters showed grossly similar temporal behavior (Supplementary Information S2). For example, the levels of branched-chain amino acid (BCAA) metabolites remained broadly the same longitudinally, while choline steadily decreased in the media. Several amino acids and glucose were depleted from the media during the 18-24 h period, while the metabolic end products steadily increased in the media. Discussion Physiological circadian rhythms are an important component of organismal well-being, and disruption of the internal body clock can lead to detrimental and pathophysiological effects [14]. This follows directly from recent literature on biological rhythms in different systems [15][16][17]. Many of these studies investigate the problem using systems biology approaches [18,19], for example by studying large-scale transcript or protein profiles. For cellular systems under study, however, the temporal cellular metabolism will have two distinct components: the metabolic oscillation and cell growth/excretion factors, which are primarily linear in nature. Decoupling these two components is crucial for time-resolved analyses such as circadian metabolic studies.
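The five-cluster grouping described above comes from hierarchical cluster analysis of the metabolite time profiles. A minimal sketch of that kind of analysis with SciPy is shown below; the correlation distance, average linkage and the cut into five clusters are assumptions chosen to mirror the description, not necessarily the authors' exact settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical matrix: rows = 26 metabolites, columns = 24 timepoints.
rng = np.random.default_rng(2)
profiles = rng.normal(size=(26, 24))

# Correlation distance groups metabolites with similar temporal shapes,
# regardless of their absolute concentration levels.
dist = pdist(profiles, metric="correlation")
tree = linkage(dist, method="average")
clusters = fcluster(tree, t=5, criterion="maxclust")   # cut the tree into five clusters

for k in range(1, 6):
    members = np.where(clusters == k)[0]
    print(f"cluster {k}: metabolite indices {members.tolist()}")
```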
In this work, we attempted to investigate the linear changes of the cellular metabolome by profiling the cell culture media secretome in a setting that is typically used for investigating circadian metabolic/gene oscillations. While intra-cellular metabolism can be complex and difficult to study, the culture media of the cell line under study can provide significant information about the physiological activity within the cell. This approach has been utilized for numerous cell types including red blood cells [20] and hepatocytes [21]. Some of the findings may have potential therapeutic applications. For example, Drago et al. reviewed the potential of manipulating the stem cell secretome as an important therapeutic strategy for brain repair [22]. The therapeutic potential of the secretome, specifically in the context of metabolomics, was also demonstrated in non-invasive embryo assessment during in vitro fertilization (IVF) [23]. At a more fundamental level, bacterial culture media was recently used to demonstrate the effect of metal toxicity in Pseudomonas pseudoalcaligenes [24]. However, little is known about the metabolic effects of a long time-course experiment on the media in the typical setting of circadian metabolic studies. In this work, we show that metabolic profiling of media can be used to gain insights into cellular metabolism in a system relevant for studying circadian rhythms in mammalian cell lines.
Our data suggest that the metabolic composition of media in this U2 OS system undergoes a smooth transition during 18-22 h of the experiment (Figure 2). We note that this transition occurs within the context of a cell-confluent system. This is important, as sub-confluent systems have been shown to demonstrate a combination of growth and circadian changes in core clock gene expression [25]. Wavelet-based methods have been suggested for recovering information from a growing cell system that is otherwise not observed [26]. Our data suggested that a group of metabolites including precursor molecules such as amino acids and glucose decreased after 22 h while several cellular end products increased (Figure 3). Detailed temporal tracking of the metabolites indicated that the media levels of several precursor amino acids and glucose underwent depletion specifically between 18 and 22 h (Supplementary Information S2). This effect suggests that the cellular consumption of metabolic precursor molecules is not continuous. Instead, the consumption is probably pulsatile and synchronized for different metabolic pathways. However, the catabolic end products of cellular metabolism (lactate, acetate, alanine, formate and methylguanidine) build up continuously in the media (Figure 4), suggesting the catabolic activity of cellular metabolism is not pulsed. This observation further raises the possibility that cellular metabolic oscillation is controlled, at least partially, by the temporal availability of precursor molecules. Indeed, we have observed specific cellular metabolic oscillations in U2 OS cells [10], which could be the result of pulsed cellular intake of media metabolites. In addition, such exchange of metabolites across the plasma membrane suggests that more complex flux-based models may be needed to elucidate the metabolic clock mechanism in U2 OS cells. Testing this hypothesis for relatively less abundant metabolites would, however, require more sensitive methods, given the inherent insensitivity of NMR. Overall, our data suggest that a considerable amount of cellular chronobiology can be unraveled by profiling the metabolites of the cell culture media. Specifically, growth- and catabolism-related activities can be efficiently followed using NMR spectroscopy. As NMR data are highly quantitative and robust, it is an ideal platform from which to study temporal changes. The present study, as well as our previous work [11], demonstrates that NMR spectroscopy can be utilized to that end. The nature of cellular circadian rhythm could be further unraveled by profiling the cellular metabolites. In addition, as noted, we did not observe any circadian rhythms in the media metabolites, which could be a limitation of the NMR platform; more sensitive techniques such as LC-MS are needed for further investigation of oscillating media metabolites. Nevertheless, we have shown that linear effects of cellular metabolism are highly prominent in a typical setting used to investigate circadian effects in such in vitro systems. Therefore, these effects should be accounted for when investigating oscillatory metabolic activities in the cell. U2 OS Cell Collection Cells were prepared as previously described [27]. Briefly, U2 OS cells were seeded at 5 million cells per 10 cm dish in DMEM medium containing 10% fetal bovine serum (FBS) without antibiotics. Cells were allowed to adhere to the surface for 24 h and synchronized with 0.1 µM dexamethasone.
The collection began 24 h post-synchronization. During the collection, media was collected and snap frozen every two hours for 48 h [11]. Metabolite Extraction from Media Fifty microliters of media was thawed on ice and metabolites were extracted using a modified Bligh-Dyer method [28,29]. Briefly, a methanol:chloroform (2:1) mixture (300 µL) was added to the samples, which were then vortexed and sonicated for 15 min. Chloroform and water (100 µL each) were then added and the samples were vortexed. Organic and aqueous layers were separated by centrifugation at 13,300 rpm for 7 min at 4 °C. The aqueous layer was dried and reconstituted in 200 µL phosphate buffer (pH 7.0) in 10% D2O (Cambridge Isotope Laboratories, Andover, MA, USA) containing 4,4-dimethyl-4-silapentane-1-sulfonic acid (DSS) as internal standard (Cambridge Isotope Limited). The samples were transferred to 3 mm NMR tubes (Bruker Biospin, Billerica, MA, USA) for spectral acquisition. NMR Spectroscopy All spectral acquisitions were performed on a Bruker Avance III HD NMR spectrometer equipped with a triple resonance inverse (TXI) 3 mm probe (Bruker Biospin). A Bruker SampleJet was used for sample handling to ensure high throughput. The pulse program was the first transient of a two-dimensional NOESY, generally of the form RD-90-t-90-tm-90-ACQ [12], where RD = relaxation delay, t = a small time delay between pulses, tm = mixing time and ACQ = acquisition. The water signal was saturated using continuous irradiation during RD and tm. The spectra were acquired using 76 K data points and a 14 ppm spectral width. Sixty-four scans were performed with a 1 s interscan (relaxation) delay and a 0.1 s mixing time. The FIDs were zero-filled to 128 K; 0.1 Hz of line broadening was applied, followed by Fourier transformation, baseline and phase correction using an automated program provided by Bruker Biospin. Spectral Profiling and Data Analysis Raw FIDs from 1H-NMR were processed and profiled using the Chenomx NMR Suite 8.0. 1H-NMR data were evaluated using a targeted profiling strategy [13] that allows quantification of metabolite data in the sample. Pre-processed data were exported to SIMCA-P 14 (Umetrics AB, Umea, Sweden) for further multivariate statistical analysis. Univariate scaling was applied to give equal weight to all variables. Initially, unsupervised principal component analysis (PCA) was carried out to look for trends in the metabolites. Further, supervised clustering was performed using orthogonal partial least squares discriminant analysis (OPLS-DA). In addition, regression of the metabolome against time was assessed using supervised OPLS modeling. Supervised models were internally validated with 7-fold cross-validation and considered significant for cross-validated ANOVA p < 0.05. Variables were selected based on variable importance on projection (VIP), and VIP > 1 was considered significant. Due to the non-availability of one time point sample (14th time point), the missing data were replaced by averaging the preceding (12th) and succeeding (16th) time points. Univariate analysis was carried out using MeV 4.9, with permutation-based t-tests. Variables were considered significant for p < 0.05 and false discovery rate (FDR) < 0.1, correcting for multiple testing.
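Two small steps of this analysis — replacing the missing timepoint by the average of its neighbouring samples, and the permutation-based group comparison with a Benjamini-Hochberg false discovery rate — can be sketched as below. The permutation count and array layout are illustrative assumptions; the original work used MeV 4.9 for the univariate tests.

```python
import numpy as np

def impute_neighbor_mean(series, missing_idx):
    """Replace one missing timepoint by the mean of the preceding and following samples."""
    series = series.astype(float).copy()
    series[missing_idx] = 0.5 * (series[missing_idx - 1] + series[missing_idx + 1])
    return series

def permutation_t_test(pre, post, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference of group means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([pre, post])
    observed = post.mean() - pre.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[pre.size:].mean() - pooled[:pre.size].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (false discovery rate)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * p.size / (np.arange(p.size) + 1)
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(adjusted)
    out[order] = np.clip(adjusted, 0, 1)
    return out
```

Metabolites would then be called significant at p < 0.05 and FDR < 0.1, matching the thresholds quoted above.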
2016-08-24T23:09:51.855Z
2016-07-26T00:00:00.000
{ "year": 2016, "sha1": "9eee93c3ffafce27d24b5bc4bcc24effa3d601f7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1989/6/3/23/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9eee93c3ffafce27d24b5bc4bcc24effa3d601f7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
15267256
pes2o/s2orc
v3-fos-license
Angular-planar CMB power spectrum Gaussianity and statistical isotropy of the Universe are modern cosmology's minimal set of hypotheses. In this work we introduce a new statistical test to detect observational deviations from this minimal set. By defining the temperature correlation function over the whole celestial sphere, we are able to independently quantify both angular and planar dependence (modulations) of the CMB temperature power spectrum over different slices of this sphere. Given that planar dependence leads to further modulations of the usual angular power spectrum $C_l$, this test can potentially reveal richer structures in the morphology of the primordial temperature field. We have also constructed an unbiased estimator for this angular-planar power spectrum which naturally generalizes the estimator for the usual $C_l$'s. With the help of a chi-square analysis, we have used this estimator to search for observational deviations of statistical isotropy in WMAP's 5 year release data set (ILC5), where we found only slight anomalies on the angular scales $l=7$ and $l=8$. Since this angular-planar statistic is model-independent, it is ideal to employ in searches of statistical anisotropy (e.g., contaminations from the galactic plane) and to characterize non-Gaussianities. break either one. The absence of theoretical guidelines will inevitably lead to an infinite number of models and no underlying symmetries, which still would mean that we could not account for confirmed anomalies. This means we must analyze the problem in as model-independent a manner as possible. We can, for example, start from the very basic definition of our statistical quantities (such as the two-point correlation function) and check whether they can be modified in a model-independent manner, basing our reasoning solely on physical symmetries and observational hints. One such possibility is to consider the two-point temperature correlation function, C(n̂_1, n̂_2), without some of the symmetries of the underlying space-time. Attempts in this direction have been made by Pullen and Kamionkowski [22], where the temperature correlation function is assumed to depend on the direction of any given unit vector in the celestial sphere, in such a way that one can search for power multipole moments in temperature maps. However, that approach consists of considering the temperature correlation function at zero lag, and thus it does not allow us to consider correlations between two different points in the sky. Another possibility is to consider the correlation function in its full form, i.e., a function that depends on all pairs of independent unit vectors on the sphere S^2. This idea, which was introduced by Hajian and Souradeep [23,24,25], consists in expanding the temperature correlation function in a bipolar spherical harmonic series in order to take into account its functional dependence. The authors then construct a bipolar power spectrum κ which can account for deviations from statistical isotropy if observations give us κ > 0 at a statistically significant level.
In this work we also go back to the two-point correlation function C(n̂_1, n̂_2), but instead ask whether it can depend not only on the separation angle between two given unit vectors, cos ϑ = n̂_1 · n̂_2, but also on the orientation of the plane of the great circle defined by the unit vectors. Such a functional dependence can be unambiguously constructed once we realize that, for any two unit vectors in the CMB sky, their angular separation and their associated plane are uniquely defined by their dot and cross products, respectively. The new planar dependence (on the direction defined by the normal to the plane of the two unit vectors) codifies modulations of the usual two-point correlation function as we rotate these planes while keeping the separation angles ϑ fixed. We have also constructed, in a completely model-independent way, an angular-planar power spectrum and its associated unbiased estimator, which naturally generalizes the usual angular power spectrum C_ℓ, and for which we recover the known results in the limit of statistical isotropy. Our approach has a strong observational motivation, which lies in the fact that some astrophysical planes, like the galactic and ecliptic ones, play an important role in CMB measurements and could still be manifested in the data if the foregrounds were improperly removed. One such example was possibly found in [26] where, besides the alignment of the multipoles ℓ = 2 and ℓ = 3, the authors detected a strong correlation between these two and the ecliptic plane. The existence of a preferential plane could also be related to the so-called north-south asymmetry [5,8,27], in which case a plane could naturally separate regions of maximum and minimum temperature power. There is also a third situation in which a physical plane can play an important role in cosmology, namely, the unavoidable presence of our galactic plane in all CMB measurements, which acts as an important source of astrophysical and foreground contamination. All these facts lead us to believe that a planar signature in the correlation function would be an important statistical property of the CMB, and is a potential test of its nature. We have organized this work in the following way: we begin §II with a brief description of the two-point correlation function and its general properties. After discussing some of its known generalizations, we extend our argument to include a planar dependence. In §III we carry out a multipolar decomposition of the correlation function with planar dependence and show how the resulting coefficients (i.e., the angular-planar power spectrum) are related to the usual temperature multipolar coefficients a_ℓm's. This leads us to the question of how to build an unbiased estimator to measure planar signatures in temperature maps and, in particular, how this can be implemented with the help of a simple chi-square analysis. We illustrate, still in this section, the application of our statistics to the well-known ΛCDM concordance model, where we present some figures for the best-fit ΛCDM angular-planar power spectrum. In §IV we use a chi-square test to search for planar signatures in the WMAP full-sky temperature maps, and show that the angular scales ℓ = 7 and ℓ = 8 seem to be slightly anomalous for a particular range of planar separation l. We conclude in §V, where we also give some perspective on further developments. II. TEMPERATURE CORRELATION FUNCTION The main observable in the CMB is the temperature fluctuation field, ∆T.
In its full generality, this eld is a function of a position vectorn and of the time interval in which we measure this temperature but in practice our measurements are made in time intervals which are negligible compared with the cosmological timescales. The eld ∆T is a scalar, continuous function on the unit sphere, which means we can decompose it in the usual fashion, in terms of spherical harmonics: All information is therefore encrypted in the multipolar coecients a m . Essentially all inationary models predict these coecients not as uniquely given, but rather as realizations of a random variable, in such a way that the physics is not in the a m 's themselves, but rather on their statistical properties. Since, by construction, this eld has zero expectation value, ∆T = 0, the two-point correlation function expresses the rst nontrivial momenta of the underlying statistical properties of the physical eld, and is given by: Alternatively, the covariance matrix above, a 1m1 a * 2m2 , gives all the information about the quadratic momenta of the underlying distribution. If the eld ∆T is Gaussian, then this covariance matrix encloses all the information that is needed to describe the nature of the uctuation eld (1). In this work we shall restrict ourselves to a ducial Gaussian model, for simplicity. We note also that the separable nature of the denition (2) implies a reciprocity relation for the correlation function: C(n 1 ,n 2 ) = C(n 2 ,n 1 ) . This symmetry must always be satised, regardless of the underlying physics. A. Isotropic case In a globally homogeneous and isotropic universe, the two-point correlation function of the temperature can only depend on the separation angle between the vectorsn 1 andn 2 , that is: Comparing this expression with Eq. (2), we notice that the covariance matrix becomes diagonal: with the diagonal terms given by the angular power spectrum, C . In principle the angular power spectrum suces to describe the statistical properties of the temperature eld (1). However, since we have only one universe to measure, and therefore only one set of a m 's, the average in (5) is poorly determined. The best we can do then is to take advantage of the ergodic hypothesis, which states that averaging over an ensemble can be treated as averaging over space, and hence to consider each of the 2 + 1 real numbers in a m as statistically independent, in such a way as to build a statistical estimator for the C 's: Since C = C , this estimator is said to be unbiased. Also, because for a Gaussian eld ( C −C )( C −C ) ∝ δ , this estimator has the least cosmic variance. C is, therefore, the best estimator that can measure the statistical properties of the multipolar coecients a m when both statistical isotropy and gaussianity hold. B. Some anisotropic cases The rst line in Eq. (4) for the temperature two-point correlation function is valid if and only if the universe is statistically isotropic. This means that any functional dependence that does not reduce to a dependence on cos ϑ =n 1 ·n 2 will measure some deviation from statistical isotropy. There are innite possible combinations ofn 1 andn 2 that violate statistical isotropy. However, since the vectorsn 1 andn 2 are constrained to have a common origin and size, symmetry and simplicity does not leave us many choices. One possibility is to consider these two vectors as being the same, in which case we are left with a correlation function of the form: and for which a decomposition similar to (1) exists. 
This form of the correlation function makes it suitable for searching for power multipole moments in CMB temperature and polarization maps, once we dene a power multipole moment estimator [22]. On the other hand, this is also a correlation function at zero lag, so by construction it does not allow us to consider anisotropic correlations between dierent points in the sky. A second possibility is to consider the correlation function as being the most general (but separable) function of two unit vectors that one can possibly have [23]: This function admits a decomposition in terms of the bipolar spherical harmonics [23] which has the nice property of behaving in many mathematical aspects as the usual spherical harmonics. The main drawback of the decomposition (7), however, is that it carries too many degrees of freedom which, in the absence of a specic cosmological model, cannot be resolved with simple estimators. Therefore, these two approaches are either too simple or too generic to reveal deviations from statistical isotropy in a more model-independent way. C. Anisotropy through planar dependence The guiding principle used in the construction of (6) and (7) is rather general and based mainly on our prejudices about what statistical anisotropy should look like. However, in the absence of theoretical guidelines, we have to conne ourselves to the observations of the CMB temperature or, more specically, to the signature of its known anomalies. One example is the role played by the galactic and ecliptic plane in the quadrupole-octupole/north-south anomalies [4,5,6,7,8], not to mention the importance of our galactic plane as a source of foreground contamination in the construction of cleaned CMB maps. The existence of a cosmic plane might even be a manifestation of some mirror symmetry [28]. In general, the simple fact that we are bound to make all our measurements inside our galactic plane suggests that the correlation between elds at two positionsn 1 andn 2 might be sensitive not only to their separation angle but also to the orientation of the plane they live in, as is shown in Fig. 1. Such a planar dependence can be included in the correlation function if we realize that two unit vectors on the sphere S 2 uniquely dene both a separation angle ϑ and a directionn perpendicular to the great circle (or plane) where they live. We are then left with a new possibility for the functional dependence of the two-point correlation function: which corresponds formally to a function of the form C : D 3 → R, where D 3 is the set of all points (x, y, z) such that x 2 + y 2 + z 2 ≤ 1 [40]. Dening n ≡n 1 ×n 2 , the above expression can be further decomposed in spherical coordinates as follows: where: |n| = sin ϑ ,n = {θ, φ} . Notice that:n where (θ i , φ i ) are the angles dened by the vectorsn i . Some comments on the decomposition (9) are in order. First, we note that there is an intrinsic ambiguity in the sense of the vector n (as we might as well have dened n ≡n 2 ×n 1 ), which is obviously inherited from the ambiguity in the denition of the normal to a plane. This ambiguity can be avoided if we restrict the sum in l to even values, which is what we will do from now on. Note that such a restriction arises naturally as a consequence of the reciprocity relation Eq. (3). Second, for = 0 we recover (6) and therefore all the analysis made in [22] arises as a special case here. III. ANGULAR-PLANAR POWER SPECTRUM The multipolar C lm coecients in Eq. 
(9) correspond to a generalization of the usual angular power spectrum C 's. In fact, they can be seen as a spherical harmonic decomposition of the angular power spectrum, if it suers modulations as we sweep planes on the sphere. The function C (n) for a given is: Clearly, the monopole of C (n) (the average over the whole sphere) is the usual angular power spectrum, C 00 = C , and the higher multipoles measure modulations of the spectrum. Since we are restricting our analysis to the Gaussian case, the set of coecients C lm completely characterizes the two-point correlation function. Still, what is accessible through observations are temperature maps which we can use to try to estimate the correlation function. In this respect the multipolar coecients C lm would be of limited interest, unless we can relate them directly to our observables. It would be interesting if we could, for example, relate these coecients to the covariance matrix a 1m1 a * 2m2 by equating expressions (9) and (2), as is usually done. However, this procedure is far from being trivial, since the complicated coupling of the angles ϑ, θ and φ dened in (10) make it dicult to use the usual orthogonality relations to isolate the C lm 's. Fortunately, as we show below, we can estimate the C lm 's if we use the invariance of the scalar productn 1 ·n 2 and chose our coordinate system in order to integrate out the ϑ dependence. Once this is done, we make a passive rotation of the coordinate system and then we integrate over the remaining angles θ and φ, which then are given precisely by the Euler angles used in the rotation. The details are rather technical and can be found in the Appendix. The nal expression is: where the 6-index expression in parenthesis is the Wigner 3J symbol, and where λ im are a set of coecients resulting from the ϑ integration, which vanish unless i + m = even (see the Appendix for more details.) It is easy to show that expression (11) induces no coupling between the eigenvalues and l, as expected, since the length of the vector n is completely independent of its orientation. There are, however, subtle couplings present in (11) which do make a dierence when we apply it to real data. This is due to the Legendre polynomial in the integral (12), which selects only those values of 1 and 2 which have the same parity as the angular momentum . Moreover, the 3J symbols appearing in (11) give dierent weights to the triple (l, 1 , 2 ) depending on the parity of ( 1 , 2 ) and, as a consequence, we can expect typical oscillations in any function of (11) that we may build when plotted as a function of . This will be shown explicitly in the next section, when we apply these tools to the WMAP 5 year data. Expression (11) does not take into account the fact that real data is not given exactly by (1), but rather by a pixelized temperature map which is a combination of the true cosmological signal, plus instrumental noise and residual foreground contamination. Schematically, the temperature of the map in each pixel i is given by Typically, the cosmological signal ∆T S is smoothed out by a Gaussian beam W (n i ) of nite width which, in harmonic space, is given by W = exp(− 2 σ 2 b ), where σ b = θ fwhm / √ 8 ln 2 and θ fwhm is the beam full width at half maximum. For the V-band frequency map of the WMAP experiment, θ fwhm = 0.35 • , which implies a minimum min 390 for which the eect of a beam smoothing will be important, much higher than the low-regions where known anomalies were reported. 
Thus, for the sake of simplicity we will neglect the eect of the beam in this work. Also, for the 390 region, cosmic variance is known to dominate the source of error over instrumental noise, and therefore we can neglect the latter as well. On the other hand, the residual foreground can be an important source of contamination, and therefore deserves a careful analysis which is beyond the scope of the present work. In a companion paper we carry a more rigorous analysis of planar signature in CMB data in which the eect of the residual foreground will be estimated [29]. A. Statistical estimators and χ 2 analysis We now would like to use expression (11) to examine the observed universe. We start by noting that in the limit of statistical isotropy (SI), that is, when a 1 m1 a 2 m2 = C 1 δ 1 2 δ m1m2 , expression (11) reduces to Conversely, if the only non-zero C lm 's are given by l = m = 0, then C 00 = C . Therefore, statistical isotropy is achieved if and only if the C lm 's are of the form (13), and any observational deviation from this relation would be an indication of statistical anisotropy. However, we only get to observe one universe, and this makes the cosmic sample variance a severe restriction that we have to live with. This means that if we want to know, let's say, the mean value and variance of the C lm 's, we will have to build statistical functions which can only estimate these properties, just like it is done with the fundamental quantities a m and the associated estimators C (see the discussion in §II A). In other words, in order to evaluate the statistical properties of the C lm 's we will have to treat them as our new fundamental quantities, which will be determined exclusively as a function of the a m 's. As a consequence, we will redene expression (11) as: and will treat the coecients C lm as uniquely given once we have a map. Of course, expression (14) is nothing more than the unbiased estimator of the angular-planar power spectrum (11) and as long as cosmic variance is an issue this second order approach we are adopting here (i.e, the prescription of adopting this estimator of the correlation function as our fundamental quantity, rather than the temperature eld) is the best we can do when searching for statistical deviations of isotropy. In theory, it is also possible to use the CMB polarization induced by galactic clusters to probe dierent surfaces where CMB photons last scattered, and to use such independent measurements as a way to alleviate cosmic variance [30,31]. However, the gain in terms of a reduced variance is still limited. Having these limitations in mind, we can now ask: how good does a theoretical model of anisotropy, C th,lm , t the observational data once it is given by (14)? To answer this question we can use the well-known chi-square (χ 2 ) goodness-of-t test, which in our case can be written in the following generalized form: in which the scales l and are seem as independent degrees of freedom, and where σ lm is just the standard deviation of the dierence (C lm − C th,lm ). Although this expression can be readily applied to any theoretical model of anisotropy, in practice it is better to work with its reduced version: which is just the chi-square function divided by the 2l + 1 planar degrees of freedom. Expression (15) will be the starting point of our statistical analysis, which we will pursue in detail in §IV. Before we move on, it is important to choose a particular cosmological model of anisotropy against which we want to compare our estimator (14). 
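Before moving to the ΛCDM case, it may help to restate the two central quantities of this section in equation form. The display equations were lost in extraction, so the expressions below are a reconstruction from the verbal description (the isotropic estimator is standard; the reduced chi-square simply follows the stated recipe of dividing by the 2l + 1 planar degrees of freedom), and the normalisation conventions of the original Eqs. (14)-(15) may differ:

```latex
% Standard unbiased estimator of the angular power spectrum (isotropic case):
\hat{C}_\ell = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \left| a_{\ell m} \right|^2 .

% Reduced chi-square of the angular-planar spectrum against a model C^{lm}_{\mathrm{th},\,\ell},
% obtained by dividing the chi-square by the 2l+1 planar degrees of freedom:
\left(\chi^2_{\mathrm{red}}\right)_\ell^{\,l} =
  \frac{1}{2l+1} \sum_{m=-l}^{l}
  \frac{\bigl| C^{lm}_\ell - C^{lm}_{\mathrm{th},\,\ell} \bigr|^2}{\sigma^2_{\ell\, l m}} .
```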
B. ΛCDM model The most important model to be analyzed using the estimator (14) is, of course, the concordance ΛCDM model which was conrmed with striking accuracy by the 5 year release dataset of the WMAP team [2]. For this model, statistical isotropy holds and, as we have shown, any multipolar coecient C lm with non-zero planar dependence should be identically zero in this case [see expression (13).] Therefore, for this model we can take: where it should be clear that we are only considering the cases with l ≥ 2 (as we will do from now on). Now, if the data under analysis is really Gaussian and SI, then its covariance matrix can be explicitly calculated: where we have used (5) and the null hypothesis (16). This covariance matrix has some interesting properties: rst, we note that the planar degrees of freedom in (15) are really independent in this case. Moreover, the variance (σ lm ) 2 = M llmm becomes m-independent: Second, its diagonal terms (i.e., σ l ) are completely determined by the angular power spectrum C , up to some geometrical coecients which arise as a consequence of the way in which we split our CMB sky. This makes it possible to give a visual interpretation of the angular-planar power spectrum C lm , similar to that of the C 's. For that, let us introduce the reduced angular-planar spectrum: which has a simple interpretation when compared to the usual angular spectrum, because H 0 = (2 + 1)/2σ 0 = C , as can be easily shown using Eqs. (17) and (25). In Fig. 2 we show some plots of the reduced spectrum H l , both as a function of l and . Notice that, as a result of our planar splitting of the CMB sky, the low-sector of the spectrum H l is suppressed when we consider planes separated by smaller angles (bigger values of l). This is a consequence of the nontrivial coupling of the moments l, 1 and 2 : since the C 's are roughly given by a monotonically decreasing sequence, and since |l − 1 | ≤ 2 ≤ l + 1 , bigger values of l make the moment 2 probe deeper and deeper regions of the Sachs-Wolfe plateau. This suppression reaches cosmological scales up to the rst acoustic peak, after which the planar dependence becomes negligible. IV. χ 2 TEST OF STATISTICAL ANISOTROPY We now come back to the question of how the angular-planar power spectrum C lm ts the observed universe. We begin by showing that if we want to compare our data against the standard ΛCDM universe, then the chi-square function (15) becomes a very simple expression. As we have shown in the preceding section, for this model, σ lm = σ l and C th,lm = 0. Therefore (15) simplies to: where C obs,lm is calculated by applying the estimator (14) to the data given by a obs m . It is now clear that if the data under analysis is really Gaussian and statistically isotropic, then it should be true that: This means that a positive test of planarity will be quantied by how far our chi-square function deviates from unity. We can do even better and dene a new function as: which, if signicantly dierent from zero, will point towards anisotropy. It should be stressed that, for a given CMB map, the chi-square analysis must be done entirely in terms of that map's data. Indeed, any arbitrary introduction of a ducial bias in (19) (for example, by calculating σ l using C ΛCDM ) would only include our a priori prejudices about what the map's anisotropies should look like. 
The angular spectrum C_ℓ, being by construction a measure of statistical isotropy, can only be said to be small or big when compared to a particular cosmological model (for example, the ΛCDM model). Consequently, an anomalous detection of C_ℓ is by no means a measure of statistical anisotropy, and it is this value that should be used to calculate σ_ℓ^l if we want to find deviations from isotropy, regardless of how high or low it is. Note also that while the function χ_ℓ^l has some isotropy variance which could be computed for the ΛCDM model from first principles, in practice it is much easier to simulate many realizations of a Gaussian and isotropic random field to obtain that variance. Finally, we would like to mention that although each number χ_ℓ^l is an individual measure of anisotropy (i.e., planarity), a consistently biased set of values over a range of l's or ℓ's can also be seen as an indication of anisotropy, even if all individual χ_ℓ^l's in that range are well within their variance limits. Following the prescription outlined above, we applied the estimator (20) to the 5 year WMAP full-sky data (also known as the ILC5 map) where, for practical reasons, we have restricted our analysis to the range of values ℓ ∈ [2,12] and l ∈ [2,12] (notice that the momenta l can only assume even values). Our results are presented in Fig. 3, where we keep the momenta l fixed and vary the momenta related to angular separation, ℓ. As discussed before, for this range of values cosmic variance dominates over other sources of noise. We estimated the effects of cosmic variance by running a simulation of 10^3 realizations of this estimator, using the best-fit (theoretical) scalar C_ℓ's made available in [32]; this corresponds to the shaded area in Fig. 3. It is also important to explain that, while the data points in Fig. 3 were calculated using the ILC5 map alone, we have also included in our analysis a rough estimate of the possible residual foreground contamination present in the data. This was done by computing the sample variances of the full-sky maps shown in Table I, which were then used as error bars. In other words, the error bars in Fig. 3 do not account for instrumental noise, which is believed to be under control at these angular scales. (Table I: full-sky WMAP maps used in our analysis to estimate the error bars of Fig. 3 [36,37].) The data points in this (and the subsequent) figure were calculated using only the ILC5 map [1,33]. These figures present some peculiarities: first, we notice that the magnitude of the error bars oscillates for the smallest values of ℓ. As we mentioned in §III, this is partially a consequence of the 3J symbols, which are weights appearing in the definition of the anisotropic power spectra and whose effect is to couple odd and even multipoles differently. The second peculiarity is that, in all these figures, the modulations of the quadrupolar moment ℓ = 2 are entirely consistent with zero. This result suggests that the low value of the quadrupole C_2 is perhaps not a consequence of statistical anisotropy, at least for the test we are considering here. Note also that the octupole ℓ = 3, which has been reported as unusually planar by some groups, grows slightly from l = 4 to l = 8, although it is compatible with cosmic variance in all the planar range considered. Concerning deviations from isotropy, our analysis shows that the most anomalous scales are in the sectors (l, ℓ) = (4, 7) and (6, 8), where we can see that the points χ_7^4 and χ_8^6 are only marginally allowed by the 2σ cosmic variance area.
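The cosmic-variance band described above (10^3 realizations of the estimator under the Gaussian, statistically isotropic null hypothesis) can be mimicked with a short Monte Carlo: draw a_ℓm coefficients with variance C_ℓ, evaluate the statistic of interest on each realization, and read off percentile bands. The sketch below does this for the ordinary Ĉ_ℓ estimator only; the fiducial spectrum, multipole range and realization count are placeholders, not the authors' pipeline.

```python
import numpy as np

def draw_alm(cl, rng):
    """One Gaussian, statistically isotropic realization of a_lm for l = 2..lmax."""
    alm = {}
    for ell, c in enumerate(cl, start=2):
        a = np.zeros(2 * ell + 1, dtype=complex)         # index ell + m holds a_{ell m}
        a[ell] = rng.normal(0.0, np.sqrt(c))              # m = 0 mode is real
        for m in range(1, ell + 1):
            re, im = rng.normal(0.0, np.sqrt(c / 2), size=2)
            a[ell + m] = re + 1j * im
            a[ell - m] = (-1) ** m * np.conj(a[ell + m])  # reality condition of the map
        alm[ell] = a
    return alm

def chat_ell(alm):
    """Standard unbiased estimator of C_l from a single realization."""
    return {ell: np.sum(np.abs(a) ** 2) / (2 * ell + 1) for ell, a in alm.items()}

rng = np.random.default_rng(42)
fiducial_cl = 1.0 / np.arange(2, 13) ** 2                 # placeholder spectrum, l = 2..12
samples = []
for _ in range(1000):                                     # ~10^3 realizations, as in the text
    chat = chat_ell(draw_alm(fiducial_cl, rng))
    samples.append([chat[ell] for ell in range(2, 13)])
samples = np.array(samples)
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)      # ~2-sigma band per multipole
```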
In order to make the visualization of the above figures easier, we repeat the analysis but now keeping the angular separation ℓ fixed and varying the planar separation l. The result is shown in Fig. 4. Notice that the planar modulations of the quadrupole ℓ = 2 are consistently positive, but always compatible with zero. We can also see in these figures the growing behavior of the octupole ℓ = 3 from l = 4 to l = 8, as mentioned before. Figure 4: Anisotropic angular-planar estimator applied to the WMAP ILC5 data. The panel shows χ_ℓ^l as a function of l, for the particular values ℓ = (2,3,4,5,6,7,8,9,10,11). Note that for fixed l, the error bars do not oscillate (see the text for more details). V. CONCLUSIONS We have investigated the minimal statistical framework of modern cosmology by enlarging the domain of the two-point correlation function to admit not only the usual angular dependence, but a directional (planar) dependence as well. Our observable, the anisotropic angular-planar power spectrum, can account not only for the usual angular separation between any two spots in a CMB map, but also for any planar signature that this map might have. Besides having a strong observational motivation, an interesting feature of this approach is that it leads naturally to an unbiased estimator of statistical anisotropy, in the same spirit as is done with the multipolar temperature coefficients a_ℓm's and their associated estimator C_ℓ. As an example of its use, we have applied this estimator to a concrete model of cosmology, i.e., the ΛCDM model, where we have shown that under the hypotheses of Gaussianity and statistical isotropy, the angular-planar power spectra have zero mean but, of course, non-zero covariance. By means of a simple chi-square analysis, we have also applied our estimator of planar anisotropy to the WMAP ILC5 data, where we found that the planar modulations of the quadrupole ℓ = 2 are compatible with the null hypothesis over the range of planar momenta l ∈ [2, 12] we probed. Our results suggest that the low value of the quadrupole C_2 is perhaps due to some local physics, and not to deviations from statistical isotropy, at least as far as planar modulations are concerned. Our analysis has also shown that the angular scales ℓ = 7 and ℓ = 8 suffer some degree of modulation around the planar scales l = 4 and l = 6, respectively. This could be an indication of some foreground contamination coming from a planar region of typical size ∆l = 4-6. However, a complete treatment of the sources of error and the effect of masks is needed before we can reach a more definitive conclusion for that analysis; see [29]. From a theoretical perspective, our techniques can be readily applied to any particular model of inflation predicting a specific anisotropic shape for the matter power spectrum. Due to the generality and simplicity of our formulas, the angular-planar power spectrum can also be used to analyze CMB polarization. Other possible applications include stacked maps of cosmic structure, such as the galaxy cluster catalog 2MASS [38]. Appendix. We now need to integrate out the θ and φ dependence on the right-hand side of (23), which was hidden due to our choice of a particular coordinate system. In order to do that, we keep the vectors n̂_1 and n̂_2 fixed and make a rotation of our coordinate system using the three Euler angles ω = {α, β, γ}.
This rotation changes the coecients C lm 's and a m 's according to where C lm and a lm are the multipolar coecients in the new coordinate system and where D l mm (ω) are the elements of the Wigner rotation matrix. The advantage of positioning the vectorsn 1 andn 2 in the plane xy is that now the angles θ and φ are given precisely by the Euler angles β and γ, regardless of the value of α l,m where in the last step we have used Y lm (0, 0) = (2l + 1)/4π δ m0 . Therefore, in our new coordinate system we have (dropping the in our notation) We may now isolate C lm using the identities [39] dω D l1 * If we now do the redenitions −m 2 → m 2 , −m → m , C lm → (−1) m C l,−m and note that the rst 3J symbol above is identically zero unless m 1 = m 2 , we obtain nally (11). Useful identities We present here some useful identities related to the 3J symbols: where in the derivation above we have made use of the Fourier series expansion of the Legendre polynomial. So we conclude that which is needed in the derivation of (13) and (18).
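The 3J-symbol identities used in this Appendix, and the selection rules they impose on the allowed couplings, can be checked numerically; a small sketch using SymPy's exact Wigner 3-j implementation is given below. The specific quantum numbers are arbitrary examples, not values from the text.

```python
from sympy.physics.wigner import wigner_3j

# Parity rule: (j1 j2 j3; 0 0 0) vanishes unless j1 + j2 + j3 is even.
print(wigner_3j(2, 3, 4, 0, 0, 0))    # odd sum  -> 0
print(wigner_3j(2, 2, 4, 0, 0, 0))    # even sum -> exact nonzero value

# m-selection: the symbol vanishes unless m1 + m2 + m3 = 0.
print(wigner_3j(2, 2, 2, 1, 0, 0))    # m's do not sum to zero -> 0
print(wigner_3j(2, 2, 2, 1, -1, 0))   # allowed, exact symbolic value
```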
2009-09-29T18:17:44.000Z
2009-07-13T00:00:00.000
{ "year": 2009, "sha1": "774b7aa278be9f6d92693c720f2e8a72a319decf", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0907.2340", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "774b7aa278be9f6d92693c720f2e8a72a319decf", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
8389687
pes2o/s2orc
v3-fos-license
Ambient‐Temperature Synthesis of 2‐Phosphathioethynolate, PCS–, and the Ligand Properties of ECX– (E = N, P; X = O, S) Abstract A synthesis of the 2‐phosphathioethynolate anion, PCS–, under ambient conditions is reported. The coordination chemistry of PCO–, PCS– and their nitrogen‐containing congeners is also explored. Photolysis of a solution of W(CO)6 in the presence of PCO– [or a simple ligand displacement reaction using W(CO)5(MeCN)] affords [W(CO)5(PCO)]– (1). The cyanate and thiocyanate analogues, [W(CO)5(NCO)]– (2) and [W(CO)5(NCS)]– (3), are also synthesised using a similar methodology, allowing for an in‐depth study of the bonding properties of this family of related ligands. Our studies reveal that, in the coordination sphere of tungsten(0), the PCO– anion preferentially binds through the phosphorus atom in a strongly bent fashion, while NCO– and NCS– coordinate linearly through the nitrogen atom. Reactions between PCS– and W(CO)5(MeCN) similarly afford [W(CO)5(PCS)]–; however, due to the ambidentate nature of the anion, a mixture of both the phosphorus‐ and sulfur‐bonded complexes (4a and 4b, respectively) is obtained. It was possible to establish that, as with PCO–, the PCS– ion also coordinates to the metal centre in a bent fashion. Introduction Multiple bonds involving elements with a principal quantum number greater than two were once thought to be inaccessible under the tenets of the double bond rule. [1,2] This theory has since been thoroughly disproved, with countless examples of heavier main group systems containing multiple bonds that are kinetically and thermodynamically stabilised by substituents with high steric bulk. [3] For example, the diphosphene and the phosphaalkene pictured in Figure 1 can both be stabilised by large aromatic groups. [4][5][6][7] Even formal P≡C triple bonds are stable when sterically encumbering substituents are employed, for example in phosphaalkynes. [8] By contrast, examples of multiply bonded main group element systems that are not sterically protected by bulky substituents are much rarer. One such case is the 2-phosphaethynolate anion, PCO–, the phosphorus analogue of cyanate, first synthesised as a lithium salt by Becker in 1992. [9] In this species, stabilisation of the P-C multiple bond is a consequence of negative charge delocalisation along the anion. This precludes common decomposition pathways such as oligomerisation (available to related species such as phosphaalkynes), because of electrostatic repulsion between monomers. Nevertheless, lithium salts of PCO– have significant covalent character, leading to a decrease in formal negative charge on the PCO– moiety. Consequently, [Li(dme)2][PCO] (dme = 1,2-dimethoxyethane) was found to be very sensitive to decomposition. Accordingly, very few accounts of the reactivity of the 2-phosphaethynolate anion were reported in the twenty years following its discovery.
[10,11] There has been renewed interest in the 2-phosphaethynolate anion over the last four years, stimulated primarily by novel and facile synthetic routes, which enable its isolation on multigram scales. [12,13] These preparations yield the sodium and potassium salts, which are more ionic in nature and consequently more stable with respect to decomposition. There have been numerous reports of the use of the 2-phosphaethynolate anion as a precursor to original organophosphorus derivatives, [14][15][16] phosphorus-containing heterocycles, [12,[17][18][19][20][21] and as a phosphidetransfer reagent. [22][23][24] A recent report on tris(amidinate) actinide (Th, U) complexes of PCOrevealed an oxygen-bound terminal phosphaethynolate, whereas both the NCOand NCSanalogues favoured nitrogen coordination. [25] This is attributed to the largely ionic nature of the bonding; as a result the E/X atom (E = N, P; X = O, S) with the greater negative partial charge preferentially binds to the actinide centre. This is in accord with a previous spectroscopic and computational study exploring the ambidenticity of the phosphaethynolate anion. [26] The groups of Grützmacher and Peruzzini have synthesised the only known transition metal complex of PCOto date (reported alongside the cyanate analogue for comparison), in the form of [Re(η 1 -ECO)(CO) 2 (triphos)] (E = N, P; Figure 2). [27] In contrast to the actinide complexes, the two ECOligands both bind through the pnictogen atom, but in this case with different coordination geometries. The PCOligand coordinates in a strongly bent manner, with a Re-P-C bond angle of approximately 90°, whereas NCObinds almost linearly and the Re-N-C bond angle approaches 180°. This was corroborated by computational calculations and is not merely an effect of crystal packing; however, crystallographic disorder in both rhenium structures precludes an accurate discussion of bond metric data. IR spectroscopy revealed that, despite this profound difference in coordination geometry, both ECOligands "exert indistinguishable effects on the [Re(CO) 2 (triphos)] fragment. " The aforementioned studies prompted us to carry out a more extensive exploration of the ligand properties of PCOcompared to those of NCO -. The metal complexes chosen for carrying out this analysis were those derived from the W(CO) 5 fragment. Many reactive species have been trapped and stabilised in the coordination sphere of this moiety, and as a result W(CO) 5 (L) complexes have been used as a benchmark for a multitude of experimental and computational ligand studies in the past, allowing a thorough comparison to other systems. From a practical perspective, the tungsten(0) centre is stable to reduction by the relatively reactive PCO -, a phenomenon which has previously plagued the coordination chemistry of this ligand. [27] Coordination Studies of PCOand NCX -(X = O, S) Photolysis of a THF solution of W(CO) 6 [1]). This product could also be obtained more cleanly by simple displacement of the labile acetonitrile ligand of W(CO) 5 (MeCN). The coordinated product, 1, displayed a singlet at δ = -439.3 ppm, 44 ppm upfield of free PCO -, in the 31 P NMR spectrum. The most striking feature of the 31 P NMR spectrum of 1 is the magnitude of the 183 W satellites ( 183 W: 14.3 % abundant, I = 1/2), which is only 51.9 Hz (Figure 3). This value is too large to result from a three-bond scalar coupling, as would be anticipated for oxygen-bound PCO -. 
Therefore, it must be due to a one-bond coupling arising from phosphorus coordination; however, typical 1 J P-W coupling constants are of the order of 250 Hz. We believe that our recorded value is small, because bonding to the tungsten metal centre predominantly occurs through one of the phosphorus p orbitals. The Fermi-contact mechanism is crucially dependent on the s character of the bond, as only the s orbitals have a non-zero probability of having electron density at the nucleus. This would be the case if PCOwas bound in a side-on manner with a W-P-C bond angle of approximately 90°, in an analogous manner to the rhenium structure reported by Grützmacher, Peruzzini and co-workers. The lone pair on phosphorus for this compound was calculated to have very high s character (68 %). [27] This low 1 J P-W coupling constant can therefore be perceived as a spectroscopic indicator of side-on bonding. The proposed bonding mode was confirmed by single-crystal X-ray diffraction studies. Large, yellow, block-shaped crystals were grown by slow diffusion of hexane into a THF solution of [K(18-crown-6)] [1], and the crystallographic analysis clearly shows PCOto be bound through the phosphorus atom in a bent geometry. Unfortunately, crystallographic disorder causes the PCOmoiety to be disordered over two positions trans to one another, with the carbonyl group occupying the other position 50 % of the time. This precludes the accurate analysis of bond metric data, particularly the W-C bond length trans to the PCOanion, which is one of the parameters we were most interested in to probe the properties of the ECOligands. To extend this study we also synthesised the thiocyanate complex, [K(18-crown-6)][W(CO) 5 (NCS)] ([K(18-crown-6)] [3]). This anion has also been studied spectroscopically; however, to the best of our knowledge, no crystal structure for it has been reported to date. NCSis an archetypal ambidentate ligand; it is able to bind through either the nitrogen or the sulfur centre depending on the metal in question. Previous IR spectroscopic studies on the [W(CO) 5 (NCS)]anion suggested that the ligand is nitrogen-bound in this complex. [33,34] This is due to the decrease in charge on the metal centre compared to that for the isoelectronic Mn I complex, in which the ligand is bound through the sulfur atom. [35] The crystal structure of [K(18-crown-6)] [3] confirms that the ligand is nitrogen-bound in an end-on manner. [K(18-crown-6)][1] exhibited extensive crystallographic disorder, and hence the structure was not suitable for a discussion of bond metrics; therefore, both 1 and 2 were synthesised by using different counter-cations. An exchange of the potassiumsequestering agents led to the formation of [K(2,2,2crypt)][W(CO) 5 (ECO)] {E is P for [K(2,2,2-crypt)] [1] and N for [K(2,2,2-crypt)] [2]}. Both of the structures determined by singlecrystal X-ray diffraction were free from any disorder phenomena, and this enabled a comparative evaluation of their bond metric data (Table 1). The identity of the cation makes very little difference to the structure of the anion for the two cyanate complexes, [K(18crown-6)] [2] and [K(2,2,2-crypt)] [2], as all of the bond lengths were found to be identical within statistical error. Therefore, the discussion will focus on the differences between the two tion. [12] In contrast, the coordinated cyanate ion in 2 has a significantly longer C1-O1 bond length of 1.221(4) Å and a short N1-C1 bond length of 1.156(4) Å (cf. 
1.219(5) and 1.128(5) Å, respectively, in [NMe 4 ][OCN]). [36] This is consistent with the bonding model proposed by Grützmacher, Peruzzini and coworkers for the rhenium complexes. [27] While [K(2,2,2-crypt)] [1] resembles a metallaphosphaketene with an allene-like O=C= Pmoiety, [K(2,2,2-crypt)] [2] contains a cyanate anion, which is better described as having a contribution from a N≡C-Oresonance structure in addition to that from the expected O= C=N -. These bonding models are further supported by looking at the interactions of the bound ECOligands with the countercation, [K(2,2,2-crypt)] + . This sequestering agent is well documented to strongly encapsulate K + , and accordingly there is no interaction between the O1 atom of the PCOmoiety in [K(2,2,2-crypt)] [1] and the K + centre. However, in [K(2,2,2crypt)] [2], the ligand oxygen atom would formally bear a partial negative charge if the zwitterionic [M + -N≡C-O -]structure were a valid resonance contribution. This appears to be the case and results in a sufficiently strong electrostatic cation/anion interaction to splay open the 2,2,2-crypt (see Figure 4). The bond metric data for [K(18-crown-6)] [3] are very similar to those of [K(18-crown-6)] [2]. This implies that the nitrogenbound NCOand NCSligands coordinate in a similar manner; the NCSligand binds end-on with contributions from the two different resonance structures described above. There was no evidence for sulfur-bound NCScoordination products in any of the experiments carried out. The W1-P1 bond in [K(2,2,2-crypt)] [1] is remarkably long [2.666(1) Å]. A search of the CSD for tungsten-pentacarbonyl fragments bearing a phosphorus-bound ligand returned a mean W-P bond length of 2.513 Å and revealed that our value is only slightly shorter than the maximum value of 2.686(4) Å. [37] This latter value is from the tris(tert-butyl)- Table 2. One-bond coupling constants for cis and trans carbonyl groups (in Hz). [K(18-crown-6)] [1] [K (18- phosphine-tungsten-pentacarbonyl complex, W(CO) 5 (PtBu 3 ), where the long bond length primarily arises from the large steric bulk of the phosphine (the Tolman cone angle is 182 ± 2°). [38,39] In our system, steric constraints are relatively insignificant, and instead the long bond is another manifestation of the fact that the bonding is predominantly p-orbitalbased. This is substantiated further by comparison of the 1 J P-W coupling constants between the two species; despite the similar bond lengths, the value for W(CO) 5 (PtBu 3 ) is of a typical magnitude (228.5 Hz). [40] This implies that the low coupling constant for [K(2,2,2-crypt)][1] (51.9 Hz) is not a result of the length of the W-P bond, but instead is due to the orbitals involved in bonding to the metal centre. The trans influence, also known as the structural trans effect, is a ground-state phenomenon, not to be confused with the kinetic trans effect. It is most commonly assessed in terms of the length of the metal-ligand bond trans to the ligand in question derived from crystallographic data. The W1-C trans bond length, that is, the bond length trans to the ECXligand, is [3]. This suggests that the three ligands have the same net electronic properties; however, these experimental data do not allow us to deconvolute the σ and π effects. The W1-C cis bonds are significantly longer than W1-C trans bonds in all cases, which implies that the ECXligands have a weaker trans influence than the CO ligands, as expected. 
Another useful method of assessing the trans influence is available through NMR spectroscopy studies. The 1 J W-C coupling constant of the carbonyl group trans to the ECXligand obtained from the 13 C NMR spectrum is dependent on the trans influence of this ligand, a higher value of 1 J W-C indicating a weaker trans influence. The strong W-C bond, strengthened by the large π-accepting ability of the CO ligand, makes it more sensitive to variations in ligand properties and therefore an ideal probe for studying this effect. [41] A study of this type has previously been carried out by Buchner and Schenk, where a range of monosubstituted tungsten-carbonyl complexes, including the NCSspecies, were assessed. [34] The magnitude of a one-bond coupling constant can be taken to reflect the σdonating ability of the ligand in the trans position. If the ECXligand trans to a CO makes a lesser demand for the tungsten 6s orbital, then a rehybridisation will occur to increase the contribution from this orbital to the W-C bond. Perturbation theory by Burdett and Albright has shown that the W-C bond will be strengthened if there is an increase in the energy difference between the donor orbitals of the ECXligand and the tungsten acceptor orbital, or if there is a decrease in the overlap integral of the W-E bond. [41] The coupling constant data are shown in Table 2. As the NMR spectroscopic data of the two salts are similar, only values for the [K(18-crown-6)] + salts will be discussed (all values are given in the Experimental Section). The coupling constants to the cis carbonyl groups vary only to a small degree, which is entirely consistent with previous studies. [34] Thus, when E = P, the values all tend to be approximately 125 Hz, and N-donors give slightly higher values of approximately 130 Hz. The coupling constants to the trans carbonyl groups are more characteristic of the bonding properties of the ECXligand. The value for [K(18-crown-6)][1] is significantly higher (168.6 Hz) than those for the two NCXcomplexes ( Figure 5). This implies that the PCOligand exerts a weaker trans influence, leading to a stronger W-C trans bond and a greater Fermi interaction. In fact, this value is much larger than those for the other phosphorus-based ligands included in prior studies, where typical coupling constants for phosphine ligands are around 140 Hz. From these NMR spectroscopic data, we can infer that PCOacts as a weak σ-donor and negligible π-acceptor. By contrast, NCOand NCSare stronger σ-donors; however, as a result of their increased π-acceptor ability relative to PCO -, they have a similar net electronic effect, which is manifest in similar W-C trans bond lengths and IR spectra (vide infra). [1]. The signs associated with the coupling constants are described in ref. [34] The 2-Phosphathioethynolate Anion, PCS -We sought to extend this study to incorporate the heavier group 16 analogue, PCS -. The latter was originally synthesised as the lithium salt by Becker in 1994. [42] One of the reported syntheses was the reaction of O,O′-diethyl thiocarbonate with lithium bis(trimethylsilyl)phosphide, by analogy with the original preparation of PCO -. [9] More interesting to us was an alternative synthesis, which entailed the direct reaction of PCOwith one equivalent of CS 2 to afford PCSand OCS, presumably via an undetected cyclic intermediate (Scheme 1). 
It should be noted that both of these syntheses required the mixing of reagents at -50°C and yielded sensitive, pale yellow crystals of We reasoned that, by analogy with PCO -, the formation of an alkali metal salt of PCSother than that of lithium would make the compound easier to handle and the anion more amenable to further study. This factor, combined with a judicious choice of solvent, allows us to now report the first room-temperature synthesis of PCS -. [K(18-crown-6)][PCO] was dissolved in 1,2-dichlorobenzene (1,2-DCB) in an NMR tube, and one equivalent of CS 2 was added with a microsyringe. This solution immediately darkened from yellow to orange, and 31 P NMR spectroscopy revealed a singlet resonance at δ = -118.0 ppm, which is consistent with the clean and quantitative formation of [K(18-crown-6)][PCS] (lithium salt at δ = -121.3 ppm). [42] These solutions can be kept at room temperature for several weeks with no apparent decomposition. The same reaction can be carried out in 1,2-difluorobenzene (1,2-DFB); however, the solution gradually darkens over several hours after addition of CS 2 , although product formation is still clean by 31 P NMR spectroscopy. We believe that these solvents are particularly effective because 1,2-DCB and 1,2-DFB are both highly polar but also relatively non-coordinating. By comparison, if the reaction is carried out in THF under the same conditions, a copious amount of dark precipitate is rapidly produced and only a very weak signal corresponding to PCSis observable in the 31 P NMR spectrum after a couple of hours. Pale yellow needles suitable for single-crystal X-ray diffraction were obtained, and this analysis revealed PCSto have a linear triatomic structure with close electrostatic interactions to two neighbouring [K(18-crown-6)] + cations through the P1 and the S1 atoms ( Figure 6). This leads to one-dimensional chains of alternating anions and cations propagating through the lattice, akin to those of [K(18-crown-6)][PCO]. [12] However, the P1 and S1 atoms are rendered indistinguishable in the crystal structure due to an inversion centre on the central C1 atom of the anion. This is a manifestation of the comparable C=P and Eur. J. Inorg. Chem. 2016, 639-648 www.eurjic.org C=S bond lengths and the fact that sulfur and phosphorus are adjacent in the periodic table and therefore have similar X-ray scattering factors. This disorder prevents a meaningful analysis of the bond metric parameters. Analogous reactivity was investigated for the [Na(1,4-dioxane) x ][PCO] salt (1 < x < 3). In this case, the starting material is completely insoluble in 1,2-DCB, presumably because of the stable three-dimensional network of octahedrally coordinated Na + cations bridged by 1,4-dioxane units. [13] Unsurprisingly, this meant that no reactivity was initially observed upon addition of the CS 2 . However, addition of one equivalent of 18-crown-6 to this sample led to the immediate darkening of the reaction mixture and the formation of [Na(18-crown-6)][PCS], as indicated by a singlet resonance at δ = -119.4 ppm in the 31 P NMR spectrum. The 18-crown-6 solubilises the starting material by sequestering the Na + cations and breaking apart the extended network, leading to reactivity that mirrors the K + salt, as expected. For the purposes of consistency, only the reactivity of the [K(18-crown-6)] + salt of PCSwill be discussed henceforth {the structure of the [Na(18-crown-6)] + salt is provided in the Supporting Information}. 
Reaction of W(CO) 5 (MeCN) with a 1,2-DCB solution of PCSled to a darkening of the solution. 31 P NMR spectroscopy revealed the slow emergence of two resonances of almost equal intensity either side of the free PCSanion after several days (Figure 7). These two resonances occur at -192.6 and -92.9 ppm and have been attributed to the phosphorus-bound and sulfurbound species 4a and 4b, respectively, due to the ambidentate nature of the PCSligand (vide infra). These were distinguishable by the presence of the 183 W satellites on the phosphorusbound resonance only. The change in chemical shifts relative to free PCSis also consistent with this assignment. For 4a, there is an upfield shift, similar to that observed for [K(18crown-6)] [1], while for 4b a downfield shift is observed, akin to those for the oxygen-bound An-OCP species (An = Th, U) discussed previously. The 1 J W-P coupling constant is only 46.0 Hz, which is a spectroscopic indication that the PCSis bound in a side-on manner mainly through a p orbital, as in [K(18-crown-6)] [1]. Crystals suitable for single-crystal X-ray diffraction were grown by slow diffusion of hexane into a 1,2-DFB solution of the product (Figure 8). Unfortunately, extensive crystallographic disorder makes the unambiguous assignment of the product difficult. The structure certainly contains a PCSligand coordinated to a tungsten-pentacarbonyl centre, but whether it is phosphorus-bound, sulfur-bound, or indeed a combination of the two, is non-trivial, as the residual electron densities change very little between the various refinements. The final solution has been modelled as sulfur-bound, as this gave a slightly lower R1(all data) value of 2.29 %. The sulfur-and phosphorus-bound products are unstable in 1,2-DCB at room temperature over the course of approximately four days, as monitored by 31 P NMR spectroscopy. This, coupled with the fact that PCSwas very slow at displacing the acetonitrile ligand, meant that this reaction was unsuitable for obtaining a compositionally pure sample of either product. Extensive attempts were made to vary the reaction conditions. Carrying out the reactions at lower temperatures was not a viable option, as this further retarded the already slow rate of coordination. A range of other tungsten-pentacarbonyl precursors were used, including photolysis in the presence of tungstenhexacarbonyl, but all to no avail. We were further limited by Eur. J. Inorg. Chem. 2016, 639-648 www.eurjic.org having to carry out the reactions in non-coordinating solvents, which ruled out common derivatives like W(CO) 5 (THF). This unfortunately meant that we were not able to explore any further ligand properties of PCSexperimentally. Computational Studies All the ECXligands were studied computationally by DFT with the B3LYP hybrid functional, and the results are in good agreement with experimental observations. Full details of the computational methods employed are provided in the Experimental Section. The ability of the ECXligands to undergo linkage isomerism, that is, bind through either the E or the X atom, within the [W(CO) 5 (ECX)]complex was probed. This was done by calculating the difference in energy between the X-bound and E-bound structures: ΔE link = E X-bound -E E-bound (Table 3). In all cases the value is positive, which indicates that all the ECXligands preferably coordinate to the W(CO) 5 fragment through the E atom. However, in the case of PCSthe value is +3.5 kJ mol -1 , which is small and within the expected error of the calculations. 
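For readers who wish to reproduce this bookkeeping, the linkage-isomerism preference is simply a difference of total electronic energies converted from Hartree to kJ mol–1. The sketch below is illustrative only: the energies are placeholder values (chosen so that their difference reproduces the +3.5 kJ mol–1 scale reported for PCS–), not output from the calculations described in this paper.

```python
# Minimal sketch (not the authors' code): the linkage-isomerism preference
# Delta_E_link = E(X-bound) - E(E-bound), converted from Hartree to kJ/mol.
# The total energies below are hypothetical placeholders, not results from
# the calculations reported in this paper.

HARTREE_TO_KJ_PER_MOL = 2625.4996

def delta_e_link(e_x_bound_hartree, e_e_bound_hartree):
    """Positive value => coordination through E (N or P) is favoured."""
    return (e_x_bound_hartree - e_e_bound_hartree) * HARTREE_TO_KJ_PER_MOL

# Illustrative call for a hypothetical [W(CO)5(PCS)]- pair of isomers;
# the 0.00133 Hartree gap is chosen only to reproduce the ~+3.5 kJ/mol scale.
e_s_bound = -1234.56789
e_p_bound = -1234.56922
print(f"Delta_E_link = {delta_e_link(e_s_bound, e_p_bound):+.1f} kJ/mol")
```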
This supports the earlier spectroscopic observations that the phosphorus-and sulfur-bound species 4a and 4b are formed in comparable ratios in the reaction of PCSwith W(CO) 5 (MeCN). Selected computed bond metrics of the five experimentally relevant complexes are presented in Table 4, along with the data for the free ECXligands for comparison. For both PCXligands, phosphorus coordination leads to a lengthening of the P-C bond and a shortening of the C-X bond relative to the free anion. This is consistent with the proposed [M-P=C=X]metallaphosphaketene structure and the crystallographic data discussed previously (A in Figure 9). Nitrogen coordination for the NCXligands leads to a slight shortening of the N-C and a larger contraction of the C-X bonds. We believe this is more consistent with the formulation [M-N=C=X] -(B1), although Table 4. Selected computed bond lengths (Å) and angles (°) for free and coordinated ECXligands. The bond angle W-X-C refers to the sulfur-bound 4b only. E-C C-X W-E/X-C there will be some contribution from the zwitterionic resonance form [M-N + ≡C-X -] -(B2). If the latter were the major form, a longer C-X bond would be expected in each case. These computational data differ slightly from the crystallographic data for the coordinated NCXstructures [K(2,2,2-crypt)] [2] and [K(18crown-6)] [3]. The crystallographic N-C bonds are shorter than their computational analogues in each case, and the C-X bonds are longer. This suggests that the structures derived crystallographically have a slightly greater contribution from resonance B2. This is presumably because the formal negative charge on the X atom in B2 can be stabilised by the localised K + cation, whereas these interactions are not present in the computational structures. When the PCSligand binds through the sulfur atom, the P-C bond shortens and the C-S bond lengthens relative to the free anion. This is consistent with resonance structure C containing a formal P≡C-S fragment (Figure 9). The optimised structures for 1 calcd. and 4a calcd. show that both PCXspecies are bound side-on, with W-E-C bond angles of 99.4°and 104.0°, respectively. When the PCXanions were repositioned to an end-on geometry, and the optimisation was constrained to C 4v symmetry, the optimised geometries each gave two imaginary frequencies (a second order saddle point), which correspond to two symmetric relaxations to the sideon geometry. This is consistent with the rehybridisation of the phosphorus atom to a lone pair of electrons and the concomitant loss of some of the multiple bond character of the P-C bond. The difference in energy between the two conformational isomers was calculated as ΔE con = E end-on -E side-on and gave values of +65.9 and +38.5 kJ mol -1 for PCOand PCS -,respectively.Thisshowsthattheside-onbondingmodeisthermodynamically favoured for the PCXligands and not simply a result of crystal packing. The smaller ΔE con for PCSrelative to PCOcan be rationalised by considering the allene-like structure A of the side-on ligand. In both cases, there is a decrease in the P-C multiple bond character from the linear structure B, but there is also an increase in the C-X multiple bond character. It is simply less favourable to form multiple C=S bonds than C= O bonds as a result of the poorer overlap and energy match of the orbitals involved. The pentacarbonyltungsten fragment is also an ideal fragment for the assessment of ligand properties by using IR spectroscopy. 
W(CO) 5 (PR 3 ) systems have been used extensively to study the bonding nature of phosphines as a safer alternative to those derived from the toxic Ni(CO) 4 used by Tolman in his seminal studies. [39,44] The tungsten-carbonyl stretching frequencies for [W(CO) 5 (ECX)]have been calculated and com-pared to experimental solid-state data for the [K(18-crown-6)] + salts where possible ( Table 5). Note that no correction factors have been applied to the computational values, but qualitative discussions of trends are still poignant. The force constants for the cis and trans carbonyl groups have also been calculated for both computational and experimental cases by using the method published by Cotton and Kraihanzel. [43] Table 5. Computed and experimental tungsten-bound CO stretching frequencies for [W(CO) 5 (ECX)] -(cm -1 ) and Cotton-Kraihanzel force constants (mdyne Å -1 ). [43] PCO -NCO -NCS -PCS - The symmetry labels in Table 5 for the stretching frequencies are strictly for W(CO) 5 (L) structures with C 4v symmetry, in which case the B 1 frequency would not be visible in the IR spectrum. In all cases this is slightly relaxed, though the structures for 2 calcd. and 3 calcd. are very close, and the B 1 band is extremely weak. Structures 1 calcd. and 4a calcd. both have C s symmetry, which also lifts the degeneracy of the E band (separation less than 2 cm -1 ), so an average value is given. In all the experimental structures the symmetry is lowered by the interaction of the anion with the [K(18-crown-6)] + cation, so the anion can never truly be considered separately. However, all the anions are pseudo-C 4v and are treated as such for this discussion. As can be seen in Table 5, all of the ECXligands exert comparable effects on the IR spectra of complexes 1-4, despite significant variations in the coordination modes of the phosphorus-and nitrogen-containing species. The recorded CO stretching frequencies differ by no more than 12 cm -1 , which indicates that the net trans influence of the ligands is largely the same. These data and the 1 J W-C(trans) coupling constant values discussed earlier strongly suggest that the nitrogen-containing ligands, NCOand NCS -, possess increased π-acceptor properties compared to those of their heavier phosphorus-containing analogues. This offsets their greater σ-donor ability, making the overall effect on the electron density of the metal centre comparable for all of the ligands studied. Conclusions We have extensively explored the ligand properties of PCOcompared to those of NCO -, both experimentally and computationally. We report the first synthesis of PCSunder ambient conditions, and the ligand study was extended to include ECS -(E = N, P). We can confirm that PCXspecies preferentially bind in a side-on manner, in contrast to their NCXanalogues. Further-more, the observation of weak 1 J W-P coupling constant values show that these bent conformations are maintained in solution and that terminal (linear) PCXstructures are largely unfavourable. This was further confirmed by DFT calculations. Interestingly, a comparison of PCOand NCOligands has shown that, while the two ligands exert a comparable trans influence on the carbonyl substituent (as evidenced by structural and IR spectroscopic data), this effect arises from the fundamentally different electronic properties of the ligands and their respective bonding modes. 
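As an aside on the frequency-to-force-constant conversion: the Cotton-Kraihanzel values in Table 5 come from the symmetry-factored secular equations, but the basic relationship between a CO stretching wavenumber and a force constant can be illustrated with the simple harmonic diatomic expression k = μ(2πc·ν̃)². The snippet below shows only this simplified relation, not the factored treatment used for Table 5; the 1900 cm–1 input is an arbitrary example value.

```python
# Minimal sketch: harmonic (diatomic) conversion of a CO stretching frequency
# to a force constant, k = mu * (2*pi*c*nu_tilde)**2. This is NOT the full
# Cotton-Kraihanzel factored treatment used in the paper (which couples the
# cis/trans CO oscillators); it only illustrates how a stretching frequency
# maps onto a force constant. The 1900 cm^-1 input is an arbitrary example.

import math

C_CM_PER_S = 2.99792458e10          # speed of light in cm/s
AMU_TO_KG = 1.66053907e-27
M_C, M_O = 12.011, 15.999           # atomic masses in amu

def co_force_constant(nu_cm1):
    """Return the harmonic CO force constant in mdyn/Angstrom (= 100 N/m)."""
    mu = (M_C * M_O) / (M_C + M_O) * AMU_TO_KG      # reduced mass, kg
    omega = 2.0 * math.pi * C_CM_PER_S * nu_cm1     # angular frequency, rad/s
    k_si = mu * omega**2                            # N/m
    return k_si / 100.0                             # 1 mdyn/A = 100 N/m

print(co_force_constant(2143.0))   # free CO: ~18.6 mdyn/A
print(co_force_constant(1900.0))   # a typical metal-bound CO band: ~14.6 mdyn/A
```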
Thus, we conclude that PCOis a significantly weaker σ-donor ligand than NCO -; however, as a result of both the coordination geometry of NCOand its electronic structure, the increased π-acceptor properties give the two ligands the same net properties. Further studies on the coordination chemistry and reactivity of the coordinated PCXanions are currently on-going and bode well for the generation of novel phosphorus-containing small molecules. Experimental Section General Synthetic Methods and Reagents All reactions and product manipulations were carried out under an inert atmosphere of argon or dinitrogen by using standard Schlenk or glovebox techniques (MBraun UNILab glovebox maintained at less than 0.1 ppm O 2 and less than 0.1 ppm H 2 O) unless otherwise specified. [21] [K(18crown-6)][PCO] [12] and W(CO) 5 (MeCN) [45] were prepared according to literature methods. (30 mL) and stirred for one hour to give a yellow/brown solution. Volatiles were removed in vacuo to give a yellow/brown solid. This was washed with toluene (3 × 10 mL) to afford a yellow solid. Yield: 320 mg (75 %). Yellow crystals suitable for single-crystal X-ray dif-fraction were grown by slow diffusion of hexane into a THF solution of [K(2,2,2-crypt)] [ and slowly warmed up to room temperature. The solution went dark over the course of an hour. The reaction was monitored by 31 P NMR spectroscopy, which showed the growth of two new resonan-ces in addition to that of free PCS -, which have been attributed to products 4a (P-bound) and 4b (S-bound). After four days at room temperature, there were no signals in the 31 P NMR spectrum. Single crystals were grown by slow diffusion of hexane into a 1,2-DFB solution of a mixture of 4a and 4b, but the crystals exhibited extensive crystallographic disorder, which precluded any detailed analysis (CCDC-1412571). 31 IR spectroscopic data were recorded by using solid samples in a Nujol mull. Samples were prepared inside an inert-atmosphere glovebox, and the KBr plates were placed in an airtight sample holder prior to data collection. Spectra were recorded with a Thermo Scientific iS5 FTIR spectrometer in absorbance mode. Single-crystal X-ray diffraction data were collected by using either an Oxford Diffraction Supernova dual-source diffractometer equipped with a 135 mm Atlas CCD area detector or an Enraf-Nonius kappa-CCD diffractometer equipped with a 95 mm CCD area detector. Crystals were selected under Paratone-N oil, mounted on micromount loops and quench-cooled with an Oxford Cryosystems open-flow N 2 cooling device. [46] Data were collected at 150 K by using mirror monochromated Cu-K α radiation (λ = 1.5418 Å; Oxford Diffraction Supernova) or graphite-monochromated Mo-K α radiation (λ = 0.71073 Å; Enraf-Nonius kappa-CCD). Data collected with the Oxford Diffraction Supernova diffractometer were processed by using the CrysAlisPro package, including unit cell parameter refinement and interframe scaling (which was carried out by using SCALE3 ABSPACK within CrysAlisPro). [47] Equivalent reflections were merged, and diffraction patterns were processed with the Crys-AlisPro suite. For data collected on the Enraf-Nonius kappa-CCD diffractometer, equivalent reflections were merged and the diffraction patterns were processed with the DENZO and SCALEPACK programs. [48] Structures were subsequently solved by direct methods or by using the charge flipping algorithm, as implemented in the program SUPERFLIP, [49] and refined on F 2 with the SHELXL 97-2 package. 
[50] CCDC-1412564 (for [K(18- … [4]) contain the supplementary crystallographic data for this paper. These data can be obtained free of charge from The Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif. Computational Details: All DFT calculations were performed by using the Gaussian09 software package. [51] The B3LYP functional was used throughout, [52] with the 6-311+G(2df,p) basis set on all atoms except for tungsten, [53] which was treated with a Stuttgart/Dresden Effective Core Potential (ECP). [54] Solvent interactions (tetrahydrofuran) were treated implicitly with a polarisable continuum model. [55] The nature of stationary points was confirmed by frequency analysis: in all cases all frequencies were real, except for the geometries constrained to C4v symmetry, each of which displayed two imaginary frequencies. All quantum chemical results were visualised by using the Chemcraft 1.7 software. [56]
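The frequency-analysis criterion quoted above (all real frequencies for minima; two imaginary modes for the C4v-constrained geometries) reduces to counting negative wavenumbers in the computed harmonic spectrum. A minimal post-processing check of this kind might look as follows; the frequency lists are placeholders, not actual Gaussian09 output.

```python
# Minimal sketch: classify an optimised structure from its harmonic frequencies.
# Imaginary frequencies are conventionally reported as negative wavenumbers.
# The example frequency lists are placeholders, not actual Gaussian09 output.

def classify_stationary_point(frequencies_cm1):
    n_imag = sum(1 for f in frequencies_cm1 if f < 0.0)
    if n_imag == 0:
        return "minimum"
    if n_imag == 1:
        return "first-order saddle point (transition state)"
    return f"higher-order saddle point ({n_imag} imaginary modes)"

relaxed   = [45.2, 61.8, 95.3, 1987.4, 2051.0]          # placeholder values
c4v_fixed = [-120.5, -118.9, 50.1, 1990.2, 2049.7]      # two imaginary modes

print(classify_stationary_point(relaxed))     # -> minimum
print(classify_stationary_point(c4v_fixed))   # -> higher-order saddle point (2 imaginary modes)
```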
On the Automorphisms of Order 15 for a Binary Self-Dual [96, 48, 20] Code The structure of binary self-dual codes invariant under the action of a cyclic group of order $pq$ for odd primes $p\neq q$ is considered. As an application we prove the nonexistence of an extremal self-dual $[96, 48, 20]$ code with an automorphism of order $15$ which closes a gap in `"On extremal self-dual codes of length 96", IEEE Trans. Inf. Theory, vol. 57, pp. 6820-6823, 2011'. Introduction Let C = C ⊥ be a binary self-dual code of length n and minimum distance d. A binary code is doubly-even if the weight of every codeword is divisible by four. Self-dual doubly-even codes exist only if n is a multiple of eight. Rains [9] proved that the minimum distance d of a binary self-dual [n, k, d] code satisfies the following bound: Codes achieving this bound are called extremal. If n is a multiple of 24, then a self-dual code meeting the bound must be doubly-even [9]. Moreover, for any nonzero weight w in such a code, the codewords of weight w form a 5-design [1]. This is one reason why extremal codes of length 24m are of particular interest. Unfortunately, only for m = 1 and m = 2 such codes are known, namely the [24, 12,8] extended Golay code and the [48,24,12] extended quadratic residue code (see [10]). To date the existence of no other extremal code of length 24m is known. For n = 96, only the primes 2, 3 and 5 may divide the order of the automorphism group of the extremal code and the cycle structure of prime order automorphisms are as follows This note consists of three sections. Section 2 is devoted to some theoretical results on binary self-dual codes invariant under the action of a cyclic group. In Section 3 we study the structure of a putative extremal self-dual [96,48,20] code having an automorphism of order 15. Using this structure and combining the possible subcodes we prove Theorem 1. In an additional section, namely Section 4, we prove that an extremal self-dual code of length 96 does not have automorphisms of type 3-(28,12). This assertion is used by other authors but no proof has been published so far. Theoretical results Let C be a binary linear code of length n and let σ be an automorphism of C of order r where r is odd (not necessarily a prime). Let be the factorization of σ into disjoint cycles (including the cycles of length 1). If l i is the length of the cycle Ω i then lcm(l 1 , . . . , l m ) = r and l i divides r. Therefore l i is odd for i = 1, . . . , m and 1 ≤ l i ≤ r. where v|Ω i is the restriction of v on Ω i . With this notation we have the following. Theorem 2 The code C is a direct sum of the subcodes F σ (C) and E σ (C). Proof: We follow the proof of Lemma 2 in [4]. Obviously, Let v ∈ C and w = v + σ(v) + · · · + σ r−1 (v). Since w ∈ C and σ(w) = w we get w ∈ F σ (C). On the other hand, wt(σ j (v)| Ω i ) = wt(v| Ω i ) for all i = 1, 2, . . . , m and j ≥ 1. Hence σ(v) + · · · + σ r−1 (v)| Ω i is a sum of an even number of vectors of the same weight. Thus Theorem 3 If C is a binary self-dual code with an automorphism σ of odd order then C π = π(F σ (C)) is a binary self-dual code of length m. Proof: Let v, w ∈ F σ (C). If · , · denotes the Euclidean inner product on F n 2 then v, w = π(v), π(w) = 0 since l i is odd for all i. Hence C π is a self-orthogonal code. If Hence u ′ ∈ F σ (C) and therefore u = π(u ′ ) ∈ C π which proves that C π is a self-dual code. 
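Theorem 3 reduces questions about C to questions about the shorter binary code Cπ, whose self-duality can be verified mechanically from a generator matrix: G·Gᵀ must vanish over GF(2) and the dimension must equal half the length. The following sketch (illustrative only, not the software used for the searches in this paper) performs this check for the extended Hamming [8,4,4] code, which is the code that appears later as π(Fσ(C)).

```python
# Minimal sketch (illustration only): check that a binary code given by a
# generator matrix G is self-dual, i.e. G*G^T = 0 over GF(2) and k = n/2.
# The example matrix generates the extended Hamming [8,4,4] code.

import numpy as np

def gf2_rank(m):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    a = m.copy() % 2
    rank = 0
    rows, cols = a.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if a[r, col]), None)
        if pivot is None:
            continue
        a[[rank, pivot]] = a[[pivot, rank]]
        for r in range(rows):
            if r != rank and a[r, col]:
                a[r] = (a[r] + a[rank]) % 2
        rank += 1
    return rank

def is_self_dual(G):
    n = G.shape[1]
    self_orthogonal = not ((G @ G.T) % 2).any()
    return self_orthogonal and gf2_rank(G) == n // 2

G_H8 = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
                 [0, 1, 0, 0, 1, 0, 1, 1],
                 [0, 0, 1, 0, 1, 1, 0, 1],
                 [0, 0, 0, 1, 1, 1, 1, 0]])

print(is_self_dual(G_H8))   # True
```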
✷ Corollary 4 Let C be a binary self-dual code of length n = cr + f and let σ be an automorphism of C of odd order r such that where Ω i = ((i − 1)r + 1, . . . , ir) are cycles of length r for i = 1, . . . , c, and Ω c+i = (cr + i) are the fixed points for i = 1, . . . , f . Then F σ (C) and E σ (C) have dimension (c + f )/2 and c(r − 1)/2, respectively. Proof: Clearly, m = c + f is the number of orbits of σ. If σ is of prime order p with c cycles of length p and f fixed points we say that σ is of type p-(c, f ). Corollary 5 If C is a binary extremal self-dual [96, 48, 20] code with an automorphism σ of order 15, then σ can be decomposed in a product of six cycles of length 15 and two cycles of length 3. Suppose that f = 6. In this case C π is a self-dual code of length 12. According to [8] there are three codes up to equivalence, namely B 12 , C 6 2 and C 2 ⊕ A 8 . The code C π ∼ = B 12 has a generator matrix of shape But then C contains vectors of weight 15+3 = 18 which contradicts its minimum distance. The other two types can obviously not occur since its minimum distance is 2. Hence f = 6 is not possible. Thus if C has an automorphism σ of order 15, then σ has six cycles of length 15 and two cycles of length 3. ✷ Connections with quasi-cyclic codes For further investigations, we need two theorems concerning the theory of finite fields and cyclic codes. Let r be a positive integer coprime to the characteristic of the field F l , where l is the power of a prime. Consider the factor ring . . , s. Finally, by e j (x) we denote the generator idempotent of I j ; i.e., e j (x) is the identity of the two-sided ideal I j . With these notations we have the following well-known result. Theorem 7 (see [5]) (iv) s j=0 e j (x) = 1. According to [7] there is a decomposition To continue the investigations, we need to prove some properties of binary linear codes of length cr with an automorphism τ of order r which has c independent r-cycles. If C is such a code then C is a quasi-cyclic code of length cr and index c. Next we define a map φ : is a linear code over the ring R of length c. Moreover, according to [7], we have φ(C) ⊥ = φ(C ⊥ ) where the dual code C ⊥ over F 2 is taken under the Euclidean inner product, and the dual code φ(C) ⊥ in R c is taken with respect to the following Hermitian inner product: In particular, the quasi-cyclic code C is self-dual if and only if φ(C) is self-dual over R with respect to the Hermitian inner product. Every linear code C over the ring R of length c can be decomposed as a direct sum where C i is a linear code over the field G i (i = 0, 1, . . . , m), C ′ j is a linear code over H j and C ′′ j is a linear code over H * j (j = 1, . . . , t). Theorem 8 (see [7]) A linear code C over R of length c is self-dual with respect to the Hermitian inner product, or equivalently a c-quasi-cyclic code of length cr over F q is selfdual with respect to the Euclidean inner product, if and only if where C i is a self-dual code over G i for i = 0, 1, . . . , m of length c (with respect to the Hermitian inner product) and C ′ j is a linear code of length c over H j and (C ′ j ) ⊥ is its dual with respect to the Euclidean inner product for 1 ≤ j ≤ t,. The case r = pq We consider now the case r = pq for different odd primes p and q such that 2 is a primitive root modulo p and modulo q. The ground field is F 2 . Then where Q i (x) is the i-th cyclotomic polynomial. Moreover, both Q p (x) and Q q (x) are irreducible over F 2 since 2 is primitive modulo p and modulo q as well. 
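These facts about the factorisation of x^r − 1 are easy to verify symbolically for the case of interest here, r = pq = 15. The sympy check below (illustrative only) confirms that 2 is a primitive root modulo 3 and modulo 5, that ord_15(2) = 4 while φ(15) = 8, and exhibits the irreducible factors of x^15 − 1 over F_2.

```python
# Minimal sketch (illustration only) for r = pq = 15: check that 2 is a
# primitive root modulo p = 3 and q = 5, compare ord_15(2) with phi(15),
# and factor x^15 - 1 into irreducible polynomials over F_2.

from sympy import Poly, symbols, totient
from sympy.ntheory import is_primitive_root, n_order

x = symbols('x')

print(is_primitive_root(2, 3), is_primitive_root(2, 5))   # True True
print(n_order(2, 15), totient(15))                        # 4 8

coeff, factors = Poly(x**15 - 1, x, modulus=2).factor_list()
for poly, multiplicity in factors:
    print(poly.as_expr())
# Expected irreducible factors over F_2:
#   x + 1                           (Q_1)
#   x**2 + x + 1                    (Q_3)
#   x**4 + x**3 + x**2 + x + 1      (Q_5)
#   x**4 + x + 1, x**4 + x**3 + 1   (the two degree-4 factors of Q_15)
```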
Finally, if is the factorization of the r-th cyclotomic polynomial into irreducible factors over F 2 , then these factors have the same degree, namely φ(r) where Ω i = ((i − 1)r + 1, . . . , ir) are cycles of length pq for i = 1, . . . , c, Ω i = (cr + (i − 1)q + 1, . . . , cr + iq) are cycles of length q for i = c + 1, . . . , c + t q , Ω i = (cr + t q q + (i − 1)p + 1, . . . , cr + t q q + ip) are cycles of length p for i = c + t q + 1, . . . , c + t q + t p , and Ω c+tq+tp+i = (c + t q + t p + i) are the fixed points for i = 1, . . . , f . Let E σ (C) * be the shortened code of E σ (C) obtained by removing the last t q q + t p p + f coordinates from the codewords having 0's there. Let C φ = φ(E σ (C) * ). Since E σ (C) * is a binary quasi-cyclic code of length cr and index c, C φ is a linear code over the ring R of length c. Moreover where M i is a linear code over the field G i , i = 1, . . . , m, M ′ j is a linear code over H j and M ′′ j is a linear code over H * j , j = 1, . . . , t. For the dimensions we have ). Since E σ (C) * is a self-orthogonal code, C φ is also self-orthogonal over the ring R with respect to the Hermitian inner product. This means that M i are self-orthogonal codes of length c over G i for i = 1, . . . , m (with respect to the Hermitian inner product) and, for 1 ≤ j ≤ t, we have M ′′ j ⊆ (M ′ j ) ⊥ with respect to the Euclidean inner product. This forces dim M i ≤ c/2 for i = 1, 2, . . . , s and dim M ′ j + dim M ′′ j ≤ c. It follows that According to the balance principle (see [2], [5] or [10]), the dimension of the subcode of C consisting of the codewords with 0's in the last six coordinates, is equal to 42 = 48 − 6. Since three of these codewords together with the zero vector belong to the fixed subcode This means that where M 1 is a Hermitian self-orthogonal [6, 2, ≥ 2] code over the field G 1 ∼ = F 4 , M 2 is a Hermitian self-dual [6, 3, 1. M ′ is an MDS [6,2,5] code and M ′′ is its dual MDS [6,4,3] code. It is well known that any MDS [n, k, n − k + 1] code over F q is an n-arc in the projective geometry P G(k − 1, q). There are exactly four inequivalent [6,2,5] MDS codes over F 16 [6] (their dual codes correspond to the 6-arcs in P G (3, 16)). We list here generator matrices of these codes: [6,3,4] codes. According to [6], there are 22 MDS codes with the needed parameters over F 16 (they correspond to the 6-arcs in P G(2, 16)). We consider generator matrices of these codes in the form Table 1. Five of the binary codes have minimum distance 24, and six of them have minimum distance 20. Table 2. Ten of the binary codes have minimum distance 24, and eight of them have minimum distance 20. In the following G 1 is the field with four elements and identity and G 2 the field with 16 elements and identity defined in the beginning of this section. Furthermore µ 2 = x 11 + x 10 + x 6 + x 5 + x + 1 is a generator of G 2 . According to [12], there are two Hermitian self-dual [6, 3, d ≥ 3] codes over F 16 up to the equivalence defined in the following way: Two codes are equivalent if the second one is obtained from the first one via a sequence of the following transformations: • a substitution x → x t , t = 2, 4, 8; • a multiplication of any coordinate by x; • a permutation of the coordinates. We fix the M ′ ⊕ M ′′ part of the generator matrix and consider all possible generator matrices for the M 2 part. Note that even if the matrices generate equivalent codes M 2 the codes generated by M ′ ⊕ M ′′ ⊕ M 2 may not be equivalent. 
We consider the two possible matrices for the M2 part under the products of the following maps: 1) a permutation τ ∈ S6 of the 15-cycle coordinates; 2) multiplication of each of the 6 columns by a nonzero element of F16; 3) an automorphism of the field (x → x^t, t = 2, 4, 8). After computing all possible generator matrices we obtain exactly 675 inequivalent [90, 36, 20] binary codes: 232 from the first matrix H1, and 443 from the second, H2. These codes have automorphism groups of orders 15 (557 codes), 30 (111 codes), 45 (2 codes) and 90 (5 codes). Next, we add the fixed subcode. According to Corollary 6, the code π(Fσ(C)) is equivalent to the extended Hamming [8,4,4] code H8. We need to choose two out of the eight coordinates in H8 for the cycles of length 3. The automorphism group acts 2-transitively on the code, so we can take any pair of coordinates for these cycles. Then we consider all 6! = 720 permutations of the 15-cycles that can lead to different subcodes. Only 47 of the constructed codes φ^{-1}(M′ ⊕ M″ ⊕ M2) ⊕ Fσ(C) have minimum distance d′ = 20 (we list the number of their codewords of weights 20 and 24 and the order of the automorphism groups in Table 3). We fix the generator matrices of the 47 codes and consider the matrices H3, H4, H5, H6 under compositions of the following transformations: 1) a permutation τ ∈ S6 of the 15-cycle coordinates; 2) multiplication of each of the 6 columns by a nonzero element of G1; 3) an automorphism of the field (x → x^2). Thus we construct binary [96, 44] codes. Our computations show that none of these codes has minimum distance d ≥ 20. This proves Theorem 1, which states that a binary doubly-even [96, 48, 20] self-dual code with an automorphism of order 15 does not exist.

4 On the automorphism of type 3-(28, 12)

In this section we fill a gap in the literature caused by a missing proof of the nonexistence of an extremal self-dual code of length 96 having an automorphism of type 3-(28, 12). In paper [2], the authors used this assertion in their proof of the main theorem. Let us consider the partitioned weight enumerator A_ij for the code Cπ, where 0 ≤ i ≤ 28 and 0 ≤ j ≤ 12. We use the following restrictions: • If i + j is odd then A_ij = 0. Using the MacWilliams identities for coordinate partitions (see [11]) and the above restrictions, we obtain a system of linear equations which forces λ = −1, a contradiction. ✷
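The computations reported above repeatedly test whether a constructed binary code reaches minimum distance 20. For the [96, 44] codes this is done with dedicated coding-theory software, since enumerating 2^44 codewords is infeasible; the sketch below only illustrates the underlying check on a toy example and is not the software used in this work.

```python
# Minimal sketch: brute-force minimum distance of a small binary linear code.
# Feasible only for small dimension k (2^k - 1 nonzero codewords); the actual
# [96, 44] searches described above require dedicated coding-theory software.

import numpy as np
from itertools import product

def min_distance(G):
    k, n = G.shape
    best = n
    for msg in product((0, 1), repeat=k):
        if not any(msg):
            continue
        codeword = (np.array(msg) @ G) % 2
        best = min(best, int(codeword.sum()))
    return best

# Extended Hamming [8,4,4] generator matrix as a toy example:
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])

print(min_distance(G))   # 4
```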
Observational study on the non-linear response of dolphins to the presence of vessels With the large increase in human marine activity, our seas have become populated with vessels that can be overheard from distances of even 20 km. Prior investigations showed that such a dense presence of vessels impacts the behaviour of marine animals, and in particular dolphins. While previous explorations were based on a linear observation for changes in the features of dolphin whistles, in this work we examine non-linear responses of bottlenose dolphins (Tursiops Truncatus) to the presence of vessels. We explored the response of dolphins to vessels by continuously recording acoustic data using two long-term acoustic recorders deployed near a shipping lane and a dolphin habitat in Eilat, Israel. Using deep learning methods we detected a large number of 50,000 whistles, which were clustered to associate whistle traces and to characterize their features to discriminate vocalizations of dolphins: both structure and quantities. Using a non-linear classifier, the whistles were categorized into two classes representing the presence or absence of a nearby vessel. Although our database does not show linear observable change in the features of the whistles, we obtained true positive and true negative rates exceeding 90% accuracy on separate, left-out test sets. We argue that this success in classification serves as a statistical proof for a non-linear response of dolphins to the presence of vessels. Previous studies have explored the effect of the presence of vessels on dolphins.Here, we review the main methodologies.One recent work analysed the differences in the dolphins' whistles emission rate and signal duration in the presence of vessel transit routes 9 .Roughly 1800 whistles were compared before, during, and after transit occurrences, and average differences were observed during and immediately after the passing of nearby vessels.A similar conclusion was drawn in another recent study that investigated the impact on whistle parameters due to vessel approach and engine shutdown 10 .The method involved a target vessel that turned its engine on and off, and the analysis compared features of approximately 100 recorded whistles at intervals of 0-5 min and 5-10 min after engine shutdown.For dolphins of the oceanic ecotype, results showed differences in the peak frequency and in the whistle rate, while no change was observed for dolphins of the coastal ecotype.A possible conclusion drawn was that coastal dolphins are more acclimatized to vessels' presence.Supporting this conclusion are differences in the whistle band frequency between the oceanic and coastal groups, as analyzed in another interesting case study focused on the response of dolphins to the significant reduction in shipping activity during Covid'19 11 and in a recent Ph.D. thesis that analyzed differences in the vocalization of white-beaked dolphins in Iceland during Covid-19 12 .Using AIS records, the authors found roughly 15 k indications of vessel presence in 2020, compared to nearly 30 k in 2022.Analysis of continuous recordings over several months for both years showed that whistle rate was higher by roughly 20% in 2020.The spectral characteristics of whistles while distinguishing between the presence and absence of tour boats was the subject of yet another recent Ph.D. 
thesis 13 .While adrift next to a group of dolphins, continuous recording of the dolphins vocalizations were recorded during occurrences of approaching vessels.Analysis of the histograms of high signal to noise ratio (SNR) from roughly 150 whistles showed differences in the minimum, maximum and starting frequency of the whistles, which were explained as a possible strategy to avoid increase of energy expenditure.A longer-term analysis compared between two sites of different vessel activity 14 .Results for roughly 1000 whistles showed statistically significant differences between the distribution of whistle features from the two groups.More specifically, a higher number of whistles was detected in the site where vessel presence was more continuous.In the same site, an increase in the signature of the dolphins whistle modulation was observed. While the above works describe changes in whistle waveforms in the presence of vessels, we argue that some gaps exist in the available methodological tools.In particular, to evaluate impact of the presence of vessels in cases where the features of the dolphin's whistle do not follow a simple correlation with presence indicators of vessels.In other words, when there is no linear relation between the whistle features and vessel presence.For example, while focusing on the echolocation clicks of finless porpoises, recent work showed that, in complex environmental conditions, the acoustic response of the individuals to vessel presence is not linear 15 and the signal features show fluctuations in an attempt to adapt to the changing noise level.A step towards a non-linear exploration was offered in a previous study that instead of examining the values of the features explored the variability in the whistle features as a function of environmental parameters 16 .Analysis of a total of roughly 2000 whistles from 60 different dolphin groups revealed clear association between signal features variability and vessel presence.Still, the method involves a linear analysis in the form of a principle component analysis (PCA) which associates a cause with the observed variability, and thus avoids the exploration of other non-linear relations.Moreover, especially when the the whistle feature values cannot be well separated between cases of vessel absence or presence, it is hard to uniquely associate a cause through an increase in variability, and there is a need to quantify non-linear impacts.Such a non-linear evaluation requires the analysis of a sufficiently large dataset from a singular site and for a specific group of dolphins.In this research, we use non-linear classification tools to process roughly 50,000 detected whistles, and show evidence for non-linear relations. 
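To make the classification setup referred to here concrete, the sketch below is an illustrative stand-in for the kind of pipeline discussed in this paper: a non-linear classifier trained on per-whistle features and scored by the true-positive and true-negative rates of its confusion matrix on a left-out test set. The random-forest model and the random placeholder data are assumptions for illustration, not the study's exact model or data, so the printed rates will be near chance.

```python
# Minimal sketch (illustrative, not the study's exact pipeline): train a
# non-linear classifier on per-whistle features and evaluate it with a
# confusion matrix on a left-out test set. The feature names follow the
# text; the random-forest choice and the random data are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

feature_names = ["duration", "whistle_number", "number_of_overlaps",
                 "number_of_clusters", "harmonic_rate"]
X = rng.random((1000, len(feature_names)))   # placeholder feature matrix
y = rng.integers(0, 2, size=1000)            # 1 = "with vessel", 0 = "no vessel"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print("true-negative rate:", tn / (tn + fp))
print("true-positive rate:", tp / (tp + fn))
```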
To address the goal of exploring complex dolphin response to vessel presence, in this work we classified non-linear differences in the vocalization of dolphins.Dolphins are known to express stress through acoustic behaviour.For example, the signature-whistle rates were shown to be the most telling vocal indicator for short term stress when dolphins in Sarasota Bay (Florida) experienced capture-release events compared to undisturbed conditions 17 .Other work 18 showed that signature whistles comprised 50% of the vocal repertoire of free-ranging bottlenose dolphins as opposed to 90% in temporarily restrained dolphins.The study also found that the number of loops in a whistle increased during capture-release conditions compared to undisturbed conditions.Differences in the structure and rate of dolphins whistles can thus serve as a classification metric to distinguish between two cases: when a vessel is present and when there is no vessel around. Our research hypothesis is that, if affected by vessel presence, the whistles of the dolphins can be used to predict vessel presence.This prediction involves the binary classification of features from detected dolphins whistles into a "With vessel" class, indicating that a vessel is present, and a "No vessel" class, indicating that no vessel is around.Consequentially, the analysis results are explored in terms of the classification confusion matrix.In particular, the true positive and the true negative; that is, to show if the classification attempt indeed succeeds.We take this approach since a classifier may capture non-linear relations, which are otherwise hard to distinguish.We argue that a statistically significant success in such prediction would prove that dolphins respond to the presence of vessels.While we do not make an argument about the cause of this reaction, may it be a sign of stress or notification of vessel presence, we explore features in the structure of the dolphin whistles and quantities that indicate such a response.We further analyze what are the dominant whistle features in such events, and use this analysis to comment on the meaning of the impact of vessel presence. To remove possible biases in the results due to different dolphin populations and sea conditions, we offer a statistical analysis for responses over a specific population of bottlenose dolphins.Alongside visual aid verifying the existence of the same dolphin population, we explored dolphins' responses to vessel presence from opportunistic vocalizations identified within a large dataset of continuous passive recordings.In particular, we performed long-term acoustic recordings in a marine protected area close to both a known dolphin habitat and a shipping lane, to serve as a proxy for behavioral implications.We identified a large number of vessels present in the recording period by detecting URN of vessels (We note that reliable AIS recordings were not available www.nature.com/scientificreports/ in the test site) and a large number of dolphin whistles within the recorded database, and aimed to statistically explore if the dolphins responded to vessel presence. Preliminaries To explain the results below, we will first provide a description of our testbed, the content of the database, and a short introduction of the analytical methods.A more through description of the methods then follows in the "Methods" Section. Description of the testbed The research was conducted in the vicinity of the 'Dolphin Reef ' facility, in the Red Sea of Eilat, Israel, see map in Fig. 
1a.At the time, the 'Dolphin Reef ' was home to four captive bottlenose dolphins (Tursiops truncatus), three mature females and a young male, that had spent their entire life in human care, though they frequently swam in the open sea, as the 'Dolphin Reef ' boundaries were not enclosed by fences or nets.Since the average group size of bottlenose dolphins is five individuals 19 , we argue that four individuals is a good proxy to an average group.To verify that during our data collection only these four dolphins were present in the vicinity of the recorder we relied on reports from park rangers and inquiries of visitors to the marine protected area, but also on designated visual observations.These observations took place for three days in the beginning of the recording period, one day in the middle of the recording period and two additional days towards the end of the recording period, and lasted between 3 and 5 hours per day.The observer stood on a dock some 10 m above the water and had a clear The adjacent coastal property is the Europe Asia Pipeline Company (EAPC) Oil Port, which receives oil imports from large shipping vessels that dock at a designated pier, located 1 km from the 'Dolphin Reef ' facility.A commercial shipping port and navy port are also a few kilometers up the coast, and all large vessels associated with these assets introduce much noise disturbance to the marine environment in the waters surrounding the 'Dolphin Reef ' facility.In addition to shipping vessels, many small recreational motor vessels frequent the area, and often approach the 'Dolphin Reef ' boundaries to observe the dolphins.All large commercial vessels pass in a 2 km corridor whose western boundary is located only 300 m from the underwater acoustic recorder deployment site.This area was thus ideal for the study as it provided a combination of acoustic disturbance from both small and large vessels, and guaranteed consistent dolphin presence, of the same four individuals, who would present a similar vocal responses across the entire study period.The recording device was placed on the sea-floor at a depth of 60 m, at location 29 • 31' 19.9" N 34 • 56' 11.0" E, at a distance of approximately 500 m from the 'Dolphin Reef ' and 600 m from the EAPC Oil Port pier. Our testbed included two self-made acoustic recorders, each of which included two GeoSpectrm M36 hydrophones, a pre-amplifier, a sampling card, a processing unit, and a battery large enough to last for two months of data collection.The electrical components were mounted in a custom designed underwater casing with two underwater cables leading to the hydrophone units.Before deployment, the acoustic sensitivity of the entire system was measured in an acoustic tank using a calibrated sound source, and the results showed a flat frequency response with less than 1 dB across the 5-20 kHz band which fits most of dolphin's whistles 17 .The units where anchored such that the hydrophones were hovering 1 m above the seabed.The hydrophones were set 5 m apart for low correlation between the two channels, and the recorders were placed on patches of sand to reduce self-noise (Fig. 
1b).Our testbed also included an automatic identification system (AIS) receiver that collected AIS transmissions of passing vessels.The AIS receiver was stationed on the top of a building located roughly 1 km from the deployment site.It was able to pick up vessel signals up to a distance of 50 km.However, visual identification of vessels proved that the AIS information is biased, as most vessels did not operate their AIS, and thus vessel tagging was performed based on the recorded acoustic data.We recorded raw acoustic signals for a period of 22 days during the month of June 2021.During this period, we used a local weather station to track wind intensity, wave height and water temperature.We note that, as customary for the Red sea of Eilat, the weather conditions were extremely stable with wave height not exceeding 0.5 m, and water temperature remaining between 24 • and 25 • .No rain occurred during the data collection period. Description of the datbase The collected data were sampled at 96 kHz at 3 bytes per sample, and was fragmented into short files of 300 s from which time-frequency spectrogram matrices are formed.An example of such spectrogram comprising of both dolphins' whistles and vessel indication is shown in Fig. 2. We analyzed the recorded data offline in two threads.For vessel identification, manual tagging was performed by an expert, with vessels identified by their spectral signature, as well as by listening to the recorded data.This resulted in a dataset of 7546 identified vessels.Based on the spectral content of the recordings, we identified that roughly 80% of the vessels were large cargo vessels, while also some recreational vessels were spotted. For dolphin identification, we used a machine learning (ML)-based procedure.To train the ML detector, we engaged high-school students from the 'Open School' in Haifa, Israel to manually annotate part of the dataset (see details in the "Methods" section).The results from the 22 days of data collection were 82,340 identified whistles.Out of these, we selected the 55,852 whistles whose SNR exceeded 15 dB.This filtering reduces errors in the process of feature extraction, which is sensitive to low SNR, especially for harmonic identification, clustering and duration estimation.www.nature.com/scientificreports/Based on the identified vessels, we classified dolphin whistles into those emitted in the presence of vessel URN, termed with vessel, and whistles detected when no URN from a vessel was present, termed no vessel.The former is defined by vessel presence within a time buffer of 1 min before or after the dolphin whistles.The latter is determined if no vessel was identified within a time window of 5 min before and after the whistle detection event.Other whistles were excluded from analysis.Vessel identifications were obtained by manually observing the specgrogram of each time buffer.In particular, a sonar expert looked for vessel acoustic features: harmonics of vessel engines; increase in the broadband noise intensity; or the presence of narrowband modulations of a thruster.Indications of vessel presence were verified by a second expert.This resulted in 25,982 whistles labeled as "vessel", and 25,249 whistles labeled as "no vessel". 
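For illustration, the labelling rule above can be written as a short Python sketch. The event lists and timestamps below are hypothetical; in the actual workflow the vessel indications were tagged manually from the spectrograms.

```python
# Sketch of the whistle-labelling rule: a whistle is "with vessel" if a vessel
# indication lies within a 1 min buffer, "no vessel" if no vessel lies within
# a 5 min window, and is excluded otherwise. Timestamps are assumed in seconds.
import numpy as np

WITH_VESSEL_BUFFER_S = 60    # vessel within +/- 1 min  -> "with vessel"
NO_VESSEL_WINDOW_S = 300     # no vessel within +/- 5 min -> "no vessel"

def label_whistles(whistle_times, vessel_times):
    """Return a label per whistle: 'with_vessel', 'no_vessel' or None (excluded)."""
    vessel_times = np.sort(np.asarray(vessel_times, dtype=float))
    labels = []
    for t in whistle_times:
        # time distance from the whistle to the nearest tagged vessel event
        nearest = np.min(np.abs(vessel_times - t)) if vessel_times.size else np.inf
        if nearest <= WITH_VESSEL_BUFFER_S:
            labels.append("with_vessel")
        elif nearest >= NO_VESSEL_WINDOW_S:
            labels.append("no_vessel")
        else:
            labels.append(None)   # ambiguous: excluded from the analysis
    return labels

# Example: one whistle next to a vessel event, one far from any vessel, one ambiguous.
print(label_whistles([100.0, 10_000.0, 350.0], vessel_times=[130.0]))
# -> ['with_vessel', 'no_vessel', None]
```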
After identifying and classifying the dolphin whistles, the data were further processed to extract features.We chose features recognized as behavioral indicators 20,21 , as well as behavioral stress indicators 22,23 .The feature set included whistle duration (duration), where, since the duration of harmonics and multipath signals may appear shorter or longer due to lower SNR or temporal spreading, respectively, we consider the time period from the beginning of the whistle to its end with no averaging performed; the number of basic whistles, i.e., no harmonics or multi-path, in a time buffer of 60 s (whistle number); the number of whistles overlapping with the identified one (number of overlaps); the number of traces that reflect harmonics and multi-path of a single whistle (number of cluster); and the harmonic spectral rate of the whistle (harmonic rate), where, we measure the ratio by which the frequency of the harmonic changes from the basic whistle structure, normalized by the harmonic number, averaged over the harmonic duration and over all harmonics.We clarify that, while qualifying as one of the characteristics of the channel, we included the multi-path in the 'number of cluster' feature since more multipath arrivals are expected to pass the detection threshold the louder the whistle is.We report that other tested features did not yield good classification results and were thus not incorporated in the final analysis.These include the whistle bandwidth, the maximum and minimum frequency, a 6-degree polynomial representation of the whistle time-frequency trace, symmetrical shape characteristics, and the variance of the signal strength across the bandwidth.We note that absolute signal strength was not considered since we could not verify the location of the dolphins and therefore their distance from the underwater recorder. Classification results A histogram for the number of detected dolphin's whistles per recording day is presented in Fig. 3.We observed a notable difference in the number of detected whistles across different study days.We therefore considered whistles acquired in different calendar days as independent.We also observed that more than 1000 whistles were detected during most days, which allows for reliable statistical analysis.From the raw data, no apparent difference were shown between the number of "with vessel" and "no vessel" whistles.This is further explored in Fig. 4, where we observed that the 'Whistle Number' feature of the "with vessel" class significantly overlaps with that of the "no vessel" class.Interestingly, Fig. 4 also showed that the 'Whistle Number' feature of the "with vessel" class is generally higher than that of the "no vessel" class.That is, in-spite of the URN of the vessels, the dolphins tend to produce more vocalizations when a vessel was in the vicinity.This result also proves that, using our methodology, the dolphins' whistles were identified despite the presence of vessels (i.e., at lower SNR). Histograms of the other four significant whistle features, namely the 'Duration' , the 'Harmonic Rate' , the 'Number of Overlaps' and the 'Number of Clusters' are shown in Fig. 
5a-d, respectively. As in the case of the 'Whistle Number' feature, in all four cases we observed a significant overlap between the values for the "with vessel" and "no vessel" classes. According to these results, we conclude that the relationship between the dolphins' whistles and the presence of a vessel is not trivial and cannot be detected by means of simple statistical segmentation. We thus chose a support vector machine (SVM) with a non-linear radial basis kernel function as the classification method, whose inputs are the five features of the dolphin whistles.

The classification results of the "with vessel" and "no vessel" dolphin whistle classes are shown in Fig. 6 in terms of the true positive (termed TP: correct classification to "with vessel") and true negative (termed TN: correct classification to "no vessel") rates. Results are shown separately for each day, considering the corresponding calendar day as testing data. For example, producing results for day X, we trained and validated our classifier on all 51,231 detected whistles except those detected on day X, while testing and calculation of the TP and TN rates were performed only on the whistles detected on day X. The number of whistles used for such testing per day is given in the labels of the bars. We observed that, for all days, TP exceeds 75% and is 84.8% on average. The TN rates are slightly lower on average, reaching 83.9%, and exceeding 73% for all days but Day 1, where the TN is 63%. The stability of the results across calendar days demonstrates the generalization (i.e., robustness) of the classification, which is also evident when observing the stable validation accuracy in Fig. 7, where, as in Fig. 6, the data for validation are considered per calendar day. As expected, the validation accuracy is better than the classification accuracy measured on the test set, further suggesting that there was no risk of overfitting. These classification results, which are much higher than chance level, demonstrate the existence of a strong, though not trivial, relation between the dolphin whistles and the presence of a vessel.

To explore differences between daytime and nighttime activity, in Fig. 8 we show the ratio between the TP and TN results obtained for whistles detected during the day and during the night. The total number of detected whistles was 42,926 for daytime and 8305 for nighttime, which is not balanced. Vessel activity during nighttime was also roughly 10% of that during daytime. Still, the number of detections at nighttime was high enough to perform classification. The results show that, for most days, the difference in TP and TN between daytime and nighttime is small, with an average daytime versus nighttime ratio of 92% for the TP and 106% for the TN.

To better understand the importance of different whistle features for the vessel classification, we repeated the process using different feature spaces. For example, to explore the impact of feature i, we performed the classification using all features except feature i, and another classification using only feature i. The results were then compared with the TP and TN obtained when using all features. To illustrate this, we show in Fig. 9 testing results from Day 17, which includes the largest number of whistles. We note that similar results were obtained for other calendar days. We observe that, in terms of the TP, the TN and the F-measure, the classification accuracy when using only the 'Whistle Number' feature does not deteriorate much compared to using all features, while the classification results are significantly impacted when this feature is removed from the dataset. We therefore conclude that this is the most important feature for discriminating the presence of nearby vessels.

Finally, we examine changes in the communication rate of the dolphins. This cannot be observed directly from the number of detected dolphin whistles per recording day (as in Fig. 3), but rather from the features related to the communication rate, namely the 'Whistle Number', the 'Number of Overlaps' and the 'Number of Clusters'. To quantify this, denote ρ(i) as the ratio between the mean of feature i for "with vessel" whistles and its mean for "no vessel" whistles. We report that for the above features related to the communication rate, the values of ρ are 2.13, 2.80 and 2.81, respectively, whereas for the 'Harmonic Rate' and the 'Duration' features, which are less related to the amount of communication sessions, we obtain ρ = 0.99 and 1.02, respectively.

Discussion

Our results in Figs. 4 and 5a-d show that there is no linear relation between the whistle features and the presence of vessels, and that the values of the features overlap for the two cases of 'with' and 'without' vessel presence. To find whether a more complex relation exists, we turned to non-linear classification where, rather than exploring linear effects such as whether dolphins emitted more or fewer whistles in the presence of vessels, we predict whether a vessel is present using the features of a dolphin whistle. Our results show that, in contrast to previous works 9-11,13,14 that observed differences in the whistle rate when vessels are present, we did not observe such a linear relation in our dataset, and that there is overlap in the distributions of the 'with' and 'without' vessel cases for each of the whistle features. While one possible explanation is that our results are specific to the four explored dolphins, the differences could also be explained by the high number of whistles and vessels explored, which may capture more complexities within the dataset. Still, generalization of our results is difficult, as dolphins vocalize differently in different marine environments. In the following, we thus refer only to the community of dolphins we explored and to the methodology used. Due to the non-linear relation between the whistle features and the presence of vessels, we cannot conclude on the actual change in the whistle when a vessel is present. Still, by observing the more significant features in the whistles that contributed to the success of the classification, we can comment on conceptual differences. We base these observations both on the high values of ρ obtained for the three features related to the communication rate and on the results in Fig.
9 that show that the 'Whistle Number' , the 'Number of Overlaps' , and the 'Number of Clusters' features are all important for classification.We thus argue that, in the presence of a vessel, the dolphins' whistle emissions are clustered together, and the vocalization is denser.Such more dense communication may project on stress or excitement of the dolphins.While these results can be explained by an increase in the number of dolphins present, we argue that, due to the small number of dolphins, such effects are filtered out in the statistical evaluation of our database of > 100 k whistles.According to the results in Fig. 9, we also observe that, while difference in value are not apparent between the two classes, the 'Duration' feature holds information for classification.Since this feature is related to the shape of the whistles, we deduce that the structure of the vocalization also changes when a vessel is present.Following the comparison between daytime and nighttime in Fig. 8, we conclude that the observed changes in the dolphin whistles due to the presence of vessels do not change significantly during the period of the day. Previous studies have attempted to link dolphin vocalizations with specific behaviors 24,25 , though it is challenging to establish a clear relationship due to the variable interpretation of some behaviors and the fact that they often occur out of sight of observers.In the case of our study, since dolphins are known to express stress through acoustic behaviours 17 , it could simply be an acoustic expression of stress.An alternative explanations could also be: either the presence of vessels 'activates' the nearby dolphins and creates a reason to communicate, such as coordination of approach or avoidance, or rather, the noise introduced into the water masks the whistles produced, thereby causing the dolphins to vocalize more in order to assure that they are indeed heard despite the surrounding noise. The fact that, due to the highly complex acoustic propagation pattern and the limitations of our setup, we could not evaluate vessel URN intensity as experienced by the dolphins, limits our conclusions to a broad effect of vessel presence rather than to vessel URN level.A consequence of this limitation is that the response of the dolphins could not be associated with vessel URN characteristics, vessel URN intensity or vessel distance from dolphins.Further, without the ability to localize the dolphins, we could not identify how many individuals were emitting whistles at a given time window, and thus could not be entirely certain if the effects shown are due to a change in the vocalizations of the dolphins.However, we argue that the large quantity of whistles detected reduces this ambiguity.The results collected apply to the small population of the four dolphins in the 'Dolphin Reef ' facility, which are likely acclimatized to vessel URN while swimming beyond the facility boundaries, since the area explored is dense in vessel activity.Here, we argue that, if this is the case, then no response from the dolphins was expected.Overall, it appears that vessel acoustic disturbance strongly affects dolphin whistle vocalization, though it is not possible to quantify its hindrance to the dolphins.We can still claim that vessel presence seems a main factor of disturbance, and should be monitored similarly to a pollution factor. 
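For reference, the ρ(i) statistic reported in the Results (the ratio between the mean of a feature over "with vessel" whistles and its mean over "no vessel" whistles) can be computed as in the following sketch; the feature tables and values are illustrative only.

```python
# Sketch of the rho(i) statistic: rho(i) = mean(feature i | with vessel) /
# mean(feature i | no vessel). Feature names follow the paper; data are made up.
import numpy as np

def feature_ratio(features_with, features_no):
    return {name: np.mean(features_with[name]) / np.mean(features_no[name])
            for name in features_with}

# Hypothetical per-whistle feature tables (one array per feature).
with_vessel = {"whistle_number": np.array([8, 12, 10]),
               "duration": np.array([0.61, 0.58, 0.66])}
no_vessel = {"whistle_number": np.array([4, 5, 5]),
             "duration": np.array([0.60, 0.63, 0.59])}

print(feature_ratio(with_vessel, no_vessel))
# rho >> 1 for communication-rate features, rho close to 1 for shape-related features
```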
Future work should further explore the influence of vessel noise on dolphins, for example by labeling vessels according to their acoustic signature, which would allow investigating whether the responses of the dolphins change with different types of vessels (e.g., large ships vs. recreational boats). Further questions to explore include, for example, whether the response of the dolphins to vessels becomes more evident as the vessel noise experienced by the dolphins increases, an analysis which requires an estimate of the dolphin and vessel locations, or whether it is evident only for a specific type of noise. However, such studies would need to take place in areas where, unlike Eilat, AIS transmissions are mandatory. Other interesting research directions would be to repeat the same analysis for different populations of dolphins, or to record spatial information, which would allow a study of how behavioral changes are related to the proximity between the dolphins and the vessels.

Methods

In this section, we define the methodological details of our analyses and discuss the algorithms used for noise cancellation, whistle detection, whistle clustering and whistle classification. A block diagram of our data analysis pipeline is given in Fig. 10. Our analysis starts with noise cancellation to improve the SNR. The process builds on the expected statistical dependency between the samples comprising the dolphin whistles, which separates them from the independent noise samples, and is performed by an adaptive normalized least mean square filter tuned to find the similarities between two signals recorded in consecutive time segments. Referring to the illustration in Fig. 10, the filter fits the delayed version of the input signal, s(t), such that the error signal, e(t), between the filtered signal and s(t) is minimized. The output of the filter, y(t), is then the desired signal with improved SNR. An example of the spectrogram of an input signal including a dolphin whistle and its noise-cancelled version is shown in Fig. 11. Full details of the noise cancellation process are given in 26.
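A compact sketch of this adaptive noise-cancellation idea, assuming an NLMS filter that predicts the current sample from a delayed copy of the recording (an adaptive line enhancer), is given below. The filter length, delay and step size are illustrative choices, not the settings of the original implementation.

```python
# NLMS adaptive line enhancer: components correlated over time (whistles) are
# predictable from a delayed copy of the signal and are reinforced, while
# uncorrelated noise is suppressed.
import numpy as np

def nlms_line_enhancer(s, n_taps=64, delay=32, mu=0.5, eps=1e-8):
    s = np.asarray(s, dtype=float)
    w = np.zeros(n_taps)
    y = np.zeros_like(s)                                   # enhanced output
    x = np.concatenate([np.zeros(delay), s])[: len(s)]     # delayed reference
    for t in range(n_taps, len(s)):
        u = x[t - n_taps:t][::-1]                          # reference tap vector
        y[t] = np.dot(w, u)                                # prediction of s[t]
        e = s[t] - y[t]                                    # error signal e(t)
        w += mu * e * u / (np.dot(u, u) + eps)             # NLMS weight update
    return y

# Example: a 10 kHz tone (whistle-like) buried in white noise at fs = 96 kHz.
fs = 96_000
t = np.arange(fs) / fs
s = np.sin(2 * np.pi * 10_000 * t) + 2.0 * np.random.randn(t.size)
enhanced = nlms_line_enhancer(s)
```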
Whistle detection A key step towards exploring the effect of vessel URN on dolphins is whistle detection.The time-varying characteristics of the dolphin whistles makes it favorable for detection by spectrum analysis.Previous approaches modeled the signal as a chirplet and used for detection the cross-correlation between the spectrum of the signal and a synthetic kernel representation of a whistle 27 , reporting that the "French hat" wavelet is a good fit for such kernel 28 .PAMGuard 29 , which is an open source passive acoustic monitoring (PAM) software, uses a flexible correlation method, which enables the user to build the correlation kernel segment-by-segment.Alongside this, PAMGuard offers an energy detector and a time-domain matched filter detector.Yet, the robustness of PAMGuard is limited to a dictionary of recorded sounds.To improve robustness, approaches motivated by speech recognition techniques exploited a pitch detection algorithm to identify harmonics in the signal's spectrum and to estimate the pitch and its temporal changes 30 .These are then matched with a harmonic template that is less demanding than knowledge of the full chirplet signal.To detect sequences of whistles one can use a chain of peak energy detectors, a multi-band energy detector, and a spectral entropy detector, each designed to detect different features in the signal's spectrum 31 .Others have proposed to use supervised neural networks to track frequency lines by assuming an underlying Markov model representation of the whistle's curve 32 .Another general method to detect a signal of a distinguished spectrum pattern involves an edge detection filter to segment spectrum segments by fitting a quadratic equation to types of rising, falling, flat or blank spectrum shape 33 .www.nature.com/scientificreports/However, the results obtained are below 80% for both the true positive and the true negative cases, which could create a bias in our analysis.The design of an automatic whistle detection system confronts two main challenges.First, the signal is received within the ambient noise, which can be high such that the SNR is low.Second, the recorded signal may be affected by non-isotropic noise sources such as transient noises from snapping shrimps 34 or depth meters and sonar signals, which makes it hard to establish and verify a desired false alarm rate.Our approach for detection thus begins with filtering the data between the 5 and 20 kHz band to capture most of the energy of the whistles 17 , followed with a wavelet de-noising filter 35 to mitigate noise transients.Other noise sources were handled through an adaptive noise cancellation scheme 26 that identifies signals by their samples' dependencies. 
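The pre-processing stage described above (band-pass filtering to the 5-20 kHz whistle band followed by wavelet de-noising) can be sketched as follows; the wavelet family, decomposition level and threshold rule are assumptions made for illustration.

```python
# Band-pass filtering to 5-20 kHz followed by simple wavelet de-noising
# (universal soft threshold) to suppress transient noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt
import pywt

def preprocess(x, fs=96_000, band=(5_000, 20_000), wavelet="db8", level=5):
    # 1) band-pass filter to the whistle band
    sos = butter(6, band, btype="bandpass", fs=fs, output="sos")
    x_bp = sosfiltfilt(sos, x)
    # 2) wavelet de-noising: estimate noise level from the finest detail band
    coeffs = pywt.wavedec(x_bp, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x_bp)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```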
After pre-processing, we performed whistle detection by a state-of-the-art deep neural network (DNN) carefully tagged by a human expert.We implemented a transfer learning network that is based on the VGG16 architecture that is suitable for object recognition 36 .To match with VGG, we converted the spectrogram image into 3D tensors and normalized the gray-scale intensities.For transfer learning, we replaced the top layers of the VGG with two fully connected layers with a ReLU activation function and an output layer based on a softmax activation and a binary cross-entropy as a loss function.The Adam optimizer was used to train the model, and hyperparameters were optimized using the Optuna framework 37 .Training involved 108,317 spectrograms, of which 49,807 were tagged as noise and 58,510 as dolphin whistles.Further details are given in 38 .After thresholding, the results of the DNN are segments of spectrogram images, termed region of interest (ROI) images, whose pixels are ranked by similarity to dolphin whistles traces. To train the machine-learning whistle detector, we used manual tagging.Tagging was performed as part of a Citizen Science project, where high-school students from the 'Open School' in Haifa, Israel, observed the created spectrograms and listened to the recorded data to mark the trace of identified whistles.For validation, tagging was performed separately by four different groups handling the same dataset.Disagreements among the groups were resolved by an expert.The students were given a full day training by a sonar expert.The students manually tagged parts of the acquired dataset, which was considered as a training dataset for the detector.We have randomly selected four hours from each day of recording.Following the custom four-fold approach, we have divided the training dataset into training, evaluation, and testing.To avoid overfitting, whistles in the testing part were never seen by the machine learning trainer. Whistle classification The classification of whistles into the two classes: "with vessel" and "no vessel" is based on feature analysis.To extract features such as the number of overlapping whistles or rate of harmonics, we performed a preliminary clustering procedure.Our process is illustrated in Fig. 12 and includes three steps: 1. discovery of all traces of the whistles; 2. identification of multi-path, harmonics and discontinuities in the trace of the whistles; and 3. solving an optimization problem to merge all identified traces into whistle groups.The last step labels the set of dolphin whistle traces, where each number represents a different whistle, and is stemmed from the likelihood of each trace to be connected to its nearby traces, both in the time and the spectral domains.The process is presented in detail elsewhere 39,40 and its key idea is given here for completeness. To identify the whistles traces, we identify curve-like patterns in the ROI image.These curves are a timefrequency representative of the whistle and its harmonics.While we avoid a hard assumption regarding the shape of the whistle, we do consider each the pixels of each time-frequency trace to be stationary, i.e., follow a statistical pattern that is different than that of the noise.Our aim is thus to identify the most probable sequence of pixels within the spectrogram image.Formally, let c(i, j) be the pixel value in the ith frequency bin and the jth time bin of the spectrogram image.Also let C j = {c(1, j), . . ., c(I, j)}, j = 1, . . 
., J , where I and J are the number of frequency and time bins in the ROI image.We consider the frequency bins as states, s 1 , . . ., s I , and sets C j as observations, and find the sequence of J states that represent a whistle trace.To that end, we choose the Viterbi algorithm 41 as a dynamic programming approach.Here, we regard the DNN's detection output samples corresponding to the pixels of the spectrogram image as a measure of likelihood to contract the emission probability of the elements in the sequence.Further, assuming a maximum steepness of the dolphin whistle curve, we set limitations on the frequency difference between the pixels as transition probabilities.That is, for any consecutive pair of frequency-time cell indices, (i, j) and (n, j + 1) , that comprise a valid whistle trace, we assume 0 ≤ |n − i| < ρ , and we set ρ = 3 bins based on our own manually tagged database.The outcome of the Viterbi algorithm are sequences of state traces, X k , sorted by their accumulated likelihoods.Placing a threshold over these probabilities would determine the number of identified traces, N, within the spectrogram image.Note that identified traces within a single ROI image can originate from different dolphins.Our next step is thus to cluster the identified N traces into single whistles by associating them to three types of basic whistles, harmonics or multi-path, such that each trace uniquely belongs to a single cluster. Clustering the traces is performed by weighting their likelihoods to the three types.Denote w i,j as the resulting likelihood of two traces X i , X j to share the same cluster encoded within an affinity matrix W . Since two whistle- traces cannot be simultaneously harmonics and delayed templates of each other, harmonics and continuum of each other, or delayed templates and continuum of each other, we set w i,j to account for the most likely phe- nomena of the three.Define a cluster C k as a vector whose dimension corresponds to a different trace.Formally, where ω i,k is an indicator for the association of the ith trace X i to the kth cluster C k .The stability of the clusters is formalized as where D is a diagonal matrix whose (i, i) entry is the sum of similarities of the ith whistle-trace to all other whistle-traces, In (2), we combine the inter-cluster stability (first term) with penalty for too-large clusters (second term).The solution for the trace association is found by solving Once (4) is solved, the feature extraction is performed per cluster.Duration is set as the length of the basic whistle in the cluster; 'Harmonic Rate' is calculated as the multiplication from the basic whistle; 'Number of Overlaps' is determined by overlapping traces belonging to different clusters; and 'Whistle Number' is the number of traces comprising a cluster. 
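A minimal sketch of the Viterbi-based trace extraction, assuming a per-pixel likelihood map derived from the DNN output over the ROI image and the steepness constraint of fewer than 3 bins between consecutive frames, could look as follows; the uniform transition score is a simplification.

```python
# Dynamic-programming (Viterbi-style) extraction of the most probable
# frequency path through a likelihood map L (frequency bins x time bins),
# subject to |frequency jump| < rho bins between consecutive time frames.
import numpy as np

def extract_trace(L, rho=3):
    I, J = L.shape
    logL = np.log(L + 1e-12)
    score = np.full((I, J), -np.inf)
    back = np.zeros((I, J), dtype=int)
    score[:, 0] = logL[:, 0]
    for j in range(1, J):
        for i in range(I):
            lo, hi = max(0, i - rho + 1), min(I, i + rho)   # allowed previous bins
            prev = score[lo:hi, j - 1]
            k = int(np.argmax(prev))
            score[i, j] = prev[k] + logL[i, j]
            back[i, j] = lo + k
    # back-trace the best path
    path = np.zeros(J, dtype=int)
    path[-1] = int(np.argmax(score[:, -1]))
    for j in range(J - 1, 0, -1):
        path[j - 1] = back[path[j], j]
    return path   # whistle trace: one frequency bin per time bin
```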
Recall that our aim is to prove the relationship between a dolphin whistle and URN from nearby vessels. To that end, we classify the features of the 51,231 identified ROI images into "with vessel" and "no vessel" classes, and measure the classification success in terms of the TP and TN rates compared to the ground truth information. For classification we use a non-linear SVM with a radial basis kernel. We choose the SVM as a simple classification tool that works directly on the features of the signal, thereby allowing us to draw conclusions about which features are important for classification. We note that a classification attempt using a Long Short-Term Memory (LSTM) network with sigmoid activation, working directly on the ROI image or on the whistle features (as a matrix of features), was less successful, probably because the reduced number of features increased the risk of overfitting for classifiers based on neural networks. Additional trials with K-NN and K-means classifiers yielded better performance than the LSTM, but inferior to the SVM. We take a four-fold approach and determine our SVM model by dividing the database into train and validation phases. To avoid overfitting, instead of picking random whistles, we consider data obtained within a full day as independent and perform testing separately for each calendar day. That is, for a chosen day, we perform training and validation on all other days and testing only on the chosen day.

Data availability

The raw acoustic data, the identified dolphin whistles, and the tagging indications of vessels are available in 42.

Figure 1. A map of the deployment site (the vessel shipping lane is marked with a yellow line and the deployment location in orange), and two pictures of the deployment setup with dolphins inspecting the instruments. Eilat, July 2021. The two white bottles are floats set to lift the hydrophones 1 m above the seabed.
Figure 2. Spectrogram showing dolphin whistles masked by radiated noise from a container ship recorded in the Red Sea, June 2021. The red color indicates higher acoustic intensity.
Figure 3. Number of detected whistles with and without the presence of nearby vessels. Data are divided into "with vessel" whistles (blue) and "no vessel" whistles (red).
Figure 6. Classification results of dolphin whistles: percentage of correct identification of vessel indications (true positive) and percentage of correct identification of no-vessel indications (true negative). Left panel: days 1-11. Right panel: days 12-22. Data are divided into "with vessel" whistles (blue) and "no vessel" whistles (red).
Figure 7. Per-day classification accuracy during the validation stage.
Figure 8. Ratio between classification results of day and night. The numbers in brackets give the number of test whistles: (day, night). Left panel: days 1-11. Right panel: days 12-22. Data are divided into "with vessel" whistles (blue) and "no vessel" whistles (red).
Figure 10. A block diagram of the methodology for dolphin whistle classification.
Figure 11. Example of the spectrum of a recorded dolphin whistle (left panel) and its noise-cancelled version (right panel). The SNR improvement is roughly 30 dB.
Figure 12. Illustration of the clustering procedure.
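A minimal sketch of the per-calendar-day train/test protocol with an RBF-kernel SVM, assuming the five whistle features are stored in an array X with binary labels y and a day index per whistle, is shown below; the hyper-parameters are library defaults rather than the tuned values used in the study.

```python
# Leave-one-calendar-day-out evaluation of an RBF-kernel SVM, reporting the
# per-day true positive (correct "with vessel") and true negative (correct
# "no vessel") rates.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def leave_one_day_out(X, y, day):
    results = {}
    for d in np.unique(day):
        train, test = day != d, day == d
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X[train], y[train])
        pred = clf.predict(X[test])
        tp = np.mean(pred[y[test] == 1] == 1)   # correct "with vessel"
        tn = np.mean(pred[y[test] == 0] == 0)   # correct "no vessel"
        results[int(d)] = (tp, tn)
    return results

# Usage with hypothetical arrays:
# X: (n_whistles, 5) features, y: (n_whistles,) labels, day: (n_whistles,) day index
# per_day_rates = leave_one_day_out(X, y, day)
```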
Figure 4. Histogram of the "Whistle Number" feature collected over all 22 days. A partial overlap is observed between the "with vessel" and "no vessel" classes. The similarity between the two distributions rules out the choice of a linear classifier. Data are divided into "with vessel" whistles (blue) and "no vessel" whistles (red).
Microfluidic preparation and optimization of sorafenib-loaded poly(ethylene glycol-block-caprolactone) nanoparticles for cancer therapy applications The use of amphiphilic block copolymers to generate colloidal delivery systems for hydrophobic drugs has been the subject of extensive research, with several formulations reaching the clinical development stages. However, to generate particles of uniform size and morphology, with high encapsulation effi-ciency, yield and batch-to-batch reproducibility remains a challenge, and various microfluidic technologies have been explored to tackle these issues. Herein, we report the development and optimization of poly(ethylene glycol)- block -( e -caprolactone) (PEG-b -PCL) nanoparticles for intravenous delivery of a model drug, sorafenib. We developed and optimized a glass capillary microfluidic nanoprecipitation process and studied systematically the effects of formulation and process parameters, including different purification techniques, on product quality and batch-to-batch variation. The optimized formulation delivered particles with a spherical morphology, small particle size ( d H < 80 nm), uniform size distribution (PDI < 0.2), and high drug loading degree (16 %) at 54 % encapsulation efficiency. Furthermore, the stability and in vitro drug release were evaluated, showing that sorafenib was released from the NPs in a sustained manner over several days. Overall, the study demonstrates a microfluidic approach to Introduction The ability of amphiphilic block copolymers (ABCs) to selfassemble into colloidal particles of various morphologies in solution [1] has inspired a vast body of research on their applications as controlled drug delivery systems to improve the treatment of various diseases, with a major focus on cancer [2].Chemotherapy is an important part of cancer treatment, characterized by severe side effects due to toxicity and limited specificity of the medications.In addition, many anticancer drugs are very poorly soluble in aqueous media, which results in low bioavailability for oral formulations [3], and require the use of solubilizing excipients for intravenous (i.v.) injections, which may cause additional excipient-related side effects [4,5].The lack of aqueous solubility constitutes a major challenge for the development therapeutic molecules, with up to 90 % of new drug candidates being poorly water-soluble [3]. One strategy to tackle these issues is to solubilize hydrophobic compounds by encapsulation in polymer-based nanoparticles (NPs) [6,7], whence they can be released in a sustained manner in vivo after i.v.injection, bypassing the first-pass metabolism by the liver and potentially improving the pharmacokinetic profile by maintaining the concentration of free drug safely within the therapeutic window for extended periods of time [5].For cancer therapy, additional benefits can be obtained by nanotechnology.For the treatment of certain types of solid tumors, NPs can accumulate preferably at the tumor site by the enhanced permeability and retention effect [8,9] and targeting potential can be further increased by chemical attachment of targeting moieties on the NPs for enhanced accumulation in the diseased tissue [10,11], potentially increasing the treatment efficacy and reducing offtarget side effects. 
For the preparation of ABC nanoparticles, poly(ethylene glycol) (PEG) and the biodegradable polyester poly(e-caprolactone) (PCL) are favorable materials due to their widespread use in medical applications and excellent safety profiles [12][13][14][15][16][17][18].The selfassembly of PEG-b-PCL in a selective solvent has been shown to generate a variety of interesting NP morphologies, such as spherical micelles, worm-like micelles, vesicles or larger precipitate, typically resulting in a distribution of morphologies as opposed to only a single type [19][20][21][22].However, for drug delivery systems, it is important to have control over the particle morphology and reach a narrow size distribution as these factors are important to treatment safety and efficacy [23,24].Additionally, reproducibility of manufacturing from batch-to-batch is obviously necessary for industrial translation of any drug delivery system. Nanoprecipitation method, and different adaptations thereof, have shown great utility in the preparation of nanoparticles for drug delivery applications [25,26].To further improve this method, micromixers and microfluidic technology have been successfully applied to enable improved control over particle size distribution and drug loading, as well as batch-to-batch reproducibility and scalability [25,27,28].In particular, glass-based microcapillary devices offer high chemical resistance as well as high thermal stability, low thermal expansion and high heat dissipation capacity compared to devices manufactured from plastics or elastomers [29][30][31] and have been widely applied in the preparation of nanoparticles for pharmaceutical applications [27,[32][33][34][35][36].The resistance to strong solvents is a key advantage of glass capillary technology, allowing formulation scientists to explore a wider range of raw materials and process conditions to optimize the properties of the final product. Sorafenib is a multi-kinase inhibitor approved for the treatment of certain liver, kidney and thyroid cancers [37,38].It is a very poorly water-soluble compound, clinically administered as an oral tablet formulation of its toluenesulfonic acid salt (NEXAVAR), with limited bioavailability [37].Besides the mentioned indications, it has been studied for a variety of other cancers, including glioblastoma [39][40][41], breast cancer [42] and melanoma [43] and nonsmall cell lung cancer [44].However, the utility of sorafenib for different indications is hampered by poor water solubility, side effects, drug resistance and limited brain penetration, problems which can potentially be ameliorated by advanced nanoformulations.Accordingly, several developments to this direction can be found in the literature [33,36,[45][46][47][48][49][50]. 
Here, we report the development of sorafenib-loaded block copolymer NPs, for i.v.administration of a model drug, sorafenib.The study is focused on micellar-like PEG-b-PCL NPs, in which PCL forms a bioresorbable core [12,51], capable of encapsulating and releasing the hydrophobic drug, and PEG forms a hydrated hydrophilic surface which improves colloidal stability by steric stabilization and increases circulation time in vivo [52][53][54].We develop and systematically optimize a microfluidic nanoprecipitation process and subsequent purification steps, and study the effect of various process factors on particle size distribution, morphology distribution, zeta (f)-potential, drug loading, encapsulation efficiency and process yield.Furthermore, the release of the drug and the stability of the NPs on storage are studied. Acetone (!99.5 %) was purchased from Sigma.Methanol was purchased from VWR and acetonitrile from Merck.Tetrahydrofuran (THF) used for size-exclusion chromatography was supplied by Honeywell.All solvents were chromatography or reagent grade.Ion-exchange water was obtained using a Milli-Q Ò Integral 15 Water Purification System with a Millipak Ò Express 40 filter (Merck Millipore).All other materials were purchased from Sigma. Characterization of the block copolymers Molecular weight distributions of the block copolymers were analyzed by size-exclusion chromatography against polystyrene standards supplied by Polymer Standard Service.Polymers were dissolved in THF and injected into the system consisting of a Waters 515 HPLC pump, Biotech DEGASi GPC Degasser, Waters 717 plus Autosampler, Waters 2487 Dual k Absorbance Detector and Waters 2410 Differential Refractometer, with Waters Styragel HR1, HR2, and HR4 (7.8 Â 300 mm) columns and a guard column. Chemical compositions of the polymers were analyzed by nuclear magnetic resonance spectroscopy ( 1 H NMR) in CDCl 3 using Ascend 400 spectrometer (Bruker) for 64 scans with 1 s relaxation time and analyzed using TopSpin 4.1.1software.In addition, Fourier-transform infrared spectroscopy (FTIR) was used for chemical characterization of the polymers.Vertex 70 instrument with OPUS 8.1 software (Bruker) and a diamond ATR unit (Pike) were used to collect the absorbance spectra of polymer powders from 600 to 4 000 cm À1 at 4 cm À1 resolution for 32 scans. Furthermore, thermal properties of the polymers were analyzed by differential scanning calorimetry (DSC).About 3-4 mg of polymer powder was sealed in aluminium crucibles with pierced lids and analyzed using a DSC823e instrument with STARe 9.00 software (Mettler Toledo).The samples were first heated to + 80 °C and cooled to 0 °C at 10 °C/min under nitrogen to erase previous thermal history and finally heated from 0 to + 80 °C at 10 °C/min under nitrogen to determine the melting temperature. 
Fabrication of the microfluidic device A heat-resistant micromixer-type co-flow microfluidic device was fabricated from borosilicate glass capillaries and a glass rod (World Precision Instruments, USA), as published elsewhere [55].For the outer channel, a capillary with 1.50 mm outer diameter and 1.10 m inner diameter was used.For the inner channel, a capillary with inner diameter of 0.58 mm and outer diameter of 1.0 mm was used.One end of the inner capillary was tapered using a micropipette puller (P-97, Sutter instrument co., USA) and the tapered tip was adjusted to an outer diameter of 0.11 mm using fine sand paper.To improve mixing efficiency, a mixing element was prepared using a glass rod with an initial diameter of 1.0 mm, which was given an undulating hourglass-like crosssectional profile using the same micropipette puller.The capillaries and rods were then assembled co-axially, connected to inlet and outlet ports, and the device was sealed using melted polypropylene.As the molten plastic solidified around the capillaries, it provided a water-tight mechanical seal through thermal contraction.A schematic representation the device cross-section can be seen in Figure S2. Microfluidic nanoprecipitation method Nanoparticles were prepared by the microfluidic nanoprecipitation method.A schematic presentation of the process is shown in Figure S3.First, the aqueous and organic phases were prepared.Solutions of SFB, PCL-b-PEG-Me and PCL-b-PEG-COOH at different concentrations were used as organic phases and ion-exchange water or phosphate-buffered saline (PBS) were used as aqueous phases. During nanoprecipitation, the microfluidic device and approximately 15 cm of the outlet tubing were immersed in a temperature-controlled water bath.Organic phase was injected through the inner capillary and aqueous phase through the outer capillary of the microfluidic device, using disposable plastic syringes (HSW), infusion pumps (PHD 2000, Harvard Apparatus), stainless steel syringe needles and PE-LD tubing (Scientific Com-modities, Inc.).Concentrations and process conditions were adjusted for each optimization run.To prevent clogging of the device, a 0.45 lm Nylon syringe filter (Acrodisc, PALL) was applied to the inner phase syringe.Injection was started and after the process had stabilized, NPs were collected and immediately transferred for solvent removal by either dialysis, evaporation or tangential flow filtration (TFF), as detailed later in text.After solvent removal, samples were systematically filtered using 0.45 lm syringe filters (cellulose acetate, VWR). 
Different solvent removal methods Three different organic solvent removal methods were used in the present study, the details of which are shown below.a) Dialysis: Collected NPs were dialyzed against 1 L of ionexchange water for 48 h with three water changes (5, 20, 30 h) in a bag of 50 kDa MWCO regenerated cellulose membrane (Spectra/Por) under moderate magnetic stirring.b) TFF: Collected NPs were purified using a Minimate TM TFF system with 30 K Omega TM polyethersulfone membrane filtration cassette (PALL) by filtering against ion-exchange water by discontinuous filtration using two concentrationdilution cycles of 500 mL to 10 mL, resulting in theoretical final acetone concentration of<0.01 % (v/v).Before each run, the system, including the filtration cassette, were flushed with 0.5 L of ion-exchange water.After each run, the system, including the cassette, were systematically washed with 0.2 L of ion-exchange water, 0.2 L of 70 % (v/ v) ethanol, 0.2 L of ion-exchange water, 0.2 L of 0.5 M NaOH and finally with 0.1 M NaOH, which was left inside the cassette to prevent microbial growth during cassette storage. The cassette was stored at + 7 °C between experiments.c) Evaporation: Samples were placed in open glass vials and acetone was allowed to evaporate for 24 h under slow magnetic stirring at room temperature. Process optimization The microfluidic process was optimized with respect to seven independent variables: aqueous phase volume fraction during nanoprecipitation, polymer concentration in the organic phase, weight fraction of SFB of the total initial mass of SFB and polymer, weight fraction of PCL-b-PEG-COOH of total polymer mass, pH and salt concentration of the aqueous phase, nanoprecipitation temperature and total flow rate expressed as Reynolds number (Re).Re was calculated according to Eq. ( 1) where D is the inner diameter of the outer capillary, q is the density of water at 25 °C, v is the linear flow speed calculated from total flow rate and capillary cross-sectional area and l is the dynamic viscosity of water at 25 °C.Influence of temperature and solvent ratios on density and viscosity were not taken into account. Ranges of the different variables (Table S1) were chosen based on earlier work on the bulk nanoprecipitation of PEG-b-PCL [56] and on preliminary drug loading experiments.Full details of each optimization run are listed in Tables S2 and S3.The goal of the optimization was to minimize the polydispersity index of NPs and to maximize SFB loading degree, encapsulation efficiency and process yield. Particle size distribution and f-potential Particle size distribution and f-potential were analyzed by dynamic light scattering (DLS) and electrophoretic light scattering (ELS) methods, respectively, using Zetasizer Nano ZS instrument (Malvern) at 173°scattering angle.Samples were suspended in 0.1X PBS pH 7.4 and filtered using 0.45 lm Nylon syringe filter (Acrodisc, PALL) prior to analysis.All analyses were performed at 25 °C in triplicates. 
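For reference, the Reynolds-number calculation of Eq. (1) can be written as a short function; it assumes the 1.10 mm inner diameter of the outer capillary and the density and viscosity of water at 25 °C, as stated above.

```python
# Re = rho * v * D / mu, with the linear flow speed v obtained from the total
# volumetric flow rate and the cross-sectional area of the outer capillary.
import math

def reynolds_number(total_flow_ml_per_h, d_inner_m=1.10e-3,
                    rho=997.0, mu=0.89e-3):
    q = total_flow_ml_per_h * 1e-6 / 3600.0        # volumetric flow rate, m^3/s
    area = math.pi * d_inner_m ** 2 / 4.0          # capillary cross-section, m^2
    v = q / area                                   # linear flow speed, m/s
    return rho * v * d_inner_m / mu

# Example: 387.5 mL/h total device output gives Re of about 140,
# consistent with the maximum flow rate reported later in the text.
print(round(reynolds_number(387.5)))
```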
Particle morphology Size and morphology of selected optimization samples were further analyzed by cryogenic transmission electron microscopy (Cryo-TEM).To prepare the specimens, lacey carbon coated copper grids (Electron Microscopy Sciences) were treated with plasma and a drop of NPs dispersion ( % 1 mg/mL) was applied onto the grids, blotted with filter paper and immersed in liquid ethane using an automatic plunge freezer (EM GP2, Leica).The specimens were stored under liquid nitrogen and imaged at 300 kV using JEM-3200FSC (JEOL) microscope and Digital Micrograph software. Loading degree (LD) and encapsulation efficiency (EE) Sorafenib loading degree (LD) and encapsulation efficiency (EE) and the nanoparticle yield were analyzed by a dry weight method.Aliquots of the dispersions were freeze-dried, weighed and fully dissolved in acetone.Then, SFB concentration was analyzed by high-performance liquid chromatograpy (HPLC, 1260/1100 Infinity series, Agilent) with a C18 column (Gemini NX-C18, Phenomenex) using an isocratic mobile phase of 0.2 % trifluoroacetic acid and acetonitrile (42:58).The LD, EE and yield were calculated according to Eqs. 2-4: NPs yield ¼ Mass of purified NPs Mass of raw materials used ð4Þ Stability of NPs in water over storage To study colloidal stability, NPs dispersion in ion-exchange water ($1 mg/mL) was divided into aliquots, stored at + 7 °C and + 25 °C and analyzed at 0, 15, 30, 45, 60, 90 and 120 days.Particle size distribution was analyzed by first mixing the samples by vortex and dispersing into 0.1X PBS (pH 7.4), followed by measurement in triplicates, as discussed in section 2.5, without syringe filtration before the measurement.Drug content was measured by first centrifuging the stored samples for 5 min at (4.0 Â 10 3 ) Â g to remove any possible precipitated non-encapsulated drug or agglomerated particles, followed by mixing the supernatant with acetone (1:19) to dissolve the NPs and analyzing the SFB concentration using a method described in 2.9. Sorafenib solubility Aqueous solubility of SFB was determined in buffers at pH 3.9-7.4with and without supplementation of the buffer with 10 % v/v FBS.0.1 M acetate buffers at pH 3.9, 4.6 and 5.3 and 1X PBS buffers at 6.0, 6.7 and 7.4 were used.An excess of drug powder was mixed in a glass vial with the dissolution medium and stirred at + 37 °C overnight.Three independent vials were prepared for each dissolution medium.Undissolved drug powder was then removed by centrifugation for 15 min at 2.0 Â 10 4 Â g at + 37 °C, samples were carefully collected from supernatants, diluted 1:1 with methanol and analyzed by HPLC using the method shown in 2.9. Sorafenib release rate in vitro Release rate of SFB from the PEG-b-PCL NPs was analyzed by the dialysis bag method [57].PBS buffers at pH 7.4 and pH 5.5, supplemented with 10 % (v/v) FBS were used as the dissolution media.To ensure sink conditions (as discussed in section 3.7), 400 mL of the release medium was used and the amount of drug inside the dialysis bag was limited to 25 ± 2 lg.To illustrate the finite permeation rate and equilibration time inherent to the dialysis method, a simple solution of SFB in poly(ethylene glycol) 400 (PEG400) and ionexchange water (1:1) was prepared and used as a control sample. 
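The dry-weight quantification of Eqs. (2)-(4) can be sketched as below. Since Eqs. (2) and (3) are not reproduced in the text, the standard definitions of loading degree and encapsulation efficiency are assumed here; the numerical example is illustrative only.

```python
# LD: drug mass per total nanoparticle mass; EE: recovered drug relative to the
# drug fed into the process; yield: purified NP mass relative to raw materials.
def ld_ee_yield(m_drug_in_nps_mg, m_nps_mg, m_drug_fed_mg, m_raw_materials_mg):
    ld = 100.0 * m_drug_in_nps_mg / m_nps_mg             # loading degree, %
    ee = 100.0 * m_drug_in_nps_mg / m_drug_fed_mg        # encapsulation efficiency, %
    nps_yield = 100.0 * m_nps_mg / m_raw_materials_mg    # NPs yield, %
    return ld, ee, nps_yield

# Illustrative numbers: 1.6 mg SFB measured by HPLC in 10 mg of freeze-dried
# NPs, from a feed of 3.0 mg SFB and 20 mg of total raw materials.
print(ld_ee_yield(1.6, 10.0, 3.0, 20.0))   # approx. (16.0, 53.3, 50.0)
```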
To start the experiment, 3.5 mL of NP dispersion or the control solution was placed inside a dialysis membrane bag (50 kDa MWCO) and immersed in the release medium at + 37 °C under moderate magnetic stirring.Samples of 2.0 mL were drawn from the release medium at time points of 0.5, 1, 2, 4, 8, 24, 48 and 72 h and replaced with equal volumes of fresh medium.To reach quantifiable concentrations, SFB was extracted from the samples twice with diethyl ether (!97.5 %, Aldrich) (2 Â 3.5 mL), the ether was evaporated and the dry samples dissolved in 0.25 mL of methanol.Finally, the samples were analyzed by HPLC, as shown in section 2.9. At the end of each experiment, the amount of SFB remaining inside the dialysis bag was quantified.Firstly, the contents of the bag were collected and the bag was rinsed with a small amount of water.The collected liquid, including the rinsing liquid, was freeze-dried.To the dry samples, 2.0 mL of methanol was added and the samples were mixed carefully by three alternating cycles of vortex mixing (10-15 s) and tumbling (10-15 s), followed by a final 10-15 s vortex mixing, after which extraction was allowed to occur for $ 2 h at room temperature.Finally, the samples were centrifuged for 10 min at 1.6 Â 10 4 Â g and the supernatants were analyzed by HPLC, as detailed above. Characterization of the block copolymers Results from polymer characterization are shown in Fig. 1.FTIR spectra of the two polymers were almost identical and important absorption bands could be seen at 1722 cm À1 (PCL, carbonyl C@O stretch), 2945 and 2864 cm À1 (CAH stretch), 1177 cm À1 (PEG, CAOAC stretch) and 1240 cm À1 (PCL, COO stretch).These results are in accordance with the PEG-b-PCL structure, as was also reported elsewhere [58].Based on SEC data, the number-average molecular weights of the block polymers were determined to be 1.14 Â 10 Effects of formulation and microfluidic process parameters: Dialysis-based solvent removal experiments To optimize the microfluidic process, the effects of seven formulation and process parameters on product properties (particle size, PDI, f-potential, loading degree and encapsulation efficiency) were explored.For each optimization run, 4.0 mL of the NP dispersion was prepared by the microfluidic method and purified using dialysis and syringe filtering as detailed above. Effects of formulation and process parameters on particle size distribution and f-potential The optimization results with respect to d H , PDI and f-potential and particle morphology are shown in Figs.2-4.As seen in Fig. 2A, increasing the amount of PCL-b-PEG-COOH from 0 to 30 % reduced strongly the mean d H (from 91.0 ± 0.5 nm to 65.9 ± 1.0 nm) and PDI (from 0.174 ± 0.005 to 0.141 ± 0.005).This can be explained by the electrostatic repulsion of the negatively charged hydrophilic blocks and its effect on block polymer self-assembly.In order to maximize the distance between ionized PEG end groups, structures of higher surface curvature, and thus, lower PEG tethering density per unit area are favored, which results in small spheres as opposed to larger particles, worm-like or bilayer structures [59].Indeed, examination of samples (Runs 16 and 20, Table S2) by Cryo-TEM confirmed that the addition of COOH-functionalized polymer reduced the amount of worm-like and lamellar structures and favored a spherical morphology (Fig. 
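Because each 2.0 mL sample withdrawn from the 400 mL release medium is replaced with fresh medium, the cumulative release is usually computed with a correction for the drug removed in earlier samples. The sketch below applies this common volume-replacement correction; it is an assumed bookkeeping step, not necessarily the exact calculation used in the study.

```python
# Cumulative release (% of dose) with correction for drug withdrawn in earlier
# samples; concentrations are assumed to be in ug/mL from the HPLC assay.
import numpy as np

def cumulative_release(conc_ug_per_ml, v_medium_ml=400.0, v_sample_ml=2.0,
                       dose_ug=25.0):
    conc = np.asarray(conc_ug_per_ml, dtype=float)
    released = np.zeros_like(conc)
    for n, c in enumerate(conc):
        removed_earlier = v_sample_ml * conc[:n].sum()   # drug taken out in prior samples
        released[n] = c * v_medium_ml + removed_earlier
    return 100.0 * released / dose_ug

# Example with illustrative concentrations measured at 0.5, 1, 2 and 4 h:
print(cumulative_release([0.005, 0.010, 0.020, 0.035]))
```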
3).Corresponding lowmagnification micrographs are available in Figure S4 for a better overview of the obtained morphologies.The results indicate that a more homogeneous morphology distribution was obtained at 30 % of PCL-b-PEG-COOH. Fig. 2B shows that also the salt concentration of the aqueous phase had a significant effect on the particle size distribution, but did not affect the f-potential.For example, when PBS (pH 7) was used as the aqueous phase, larger particles (89.0 ± 0.4 nm at PDI 0.163 ± 0.006) were obtained as opposed to when using water (71.9 ± 0.3 nm at PDI 0.126 ± 0.007).This salt effect can be explained by shielding of the deprotonated À COOH-groups by the salt ions [59], which reduces the effect of electrostatic repulsion of the corona-forming chains, as discussed above.The effect of aqueous phase pH was small within the narrow pH range studied.Increasing the polymer concentration appeared to slightly reduce the mean d H , with no significant effect on PDI or fpotential (Fig. 2C).This can be explained by an increased supersaturation level of polymer, leading to formation of larger amounts of stable nuclei [25]. Increasing drug feed (Fig. 2D) caused a slight increase in d H (from 69.9 ± 0.3 nm at 0 % to 73.6 ± 0.5 nm at 25 % drug feed), which can be explained by increased drug loading per particle, whereas no significant effects on PDI or f-potential were observed.Within the studied range, water fraction during nanoprecipitation had no significant effects on d H , PDI or f-potential (Fig. 2E).The size distribution data is consistent with our previous findings using the bulk nanoprecipitation method, showing that within values of approximately 55 to 87.5 %, water fraction does not significantly affect the particle size distribution for this polymer [56]. Fluid flow rate had a significant effect on d H and PDI (Fig. 2F), with larger values obtained at low Re (e.g., d H = 85.2 ± 0.2 nm and PDI = 0.189 ± 0.004 at 2 Re) and smaller values obtained at high Re (e.g., d H = 62.7 ± 0.5 nm and PDI = 0.135 ± 0.013 at 140 Re).The flow rate region of 30-150 Re seemed stable in terms d H , PDI and fpotential.Qualitatively, these observations regarding flow rate are consistent with the current understanding of the nanoprecipitation of diblock copolymers, using the diffusion-limited coalescence model.Low flow rates inside a micromixer result in slow mixing of the solvent and antisolvent phases.If the corresponding mixing time is larger than the characteristic time associated with the coalescence process, the process will be dominated by particle growth through the addition of individual polymer chains into existing nuclei, resulting in large average particle size [25,28].Conversely, if mixing times are low compared to the characteristic time, high supersaturation is achieved rapidly throughout the sample and the formation of nuclei and cluster-to-cluster aggregation are the dominant mechanisms for particle formation [25].As the nuclei are stabilized at certain aggregation number by steric effect of the hydrated PEG chains, a population of small, homogeneous NPs is obtained at high supersaturation values (at high Re) [60].When mixing time is below the characteristic time, particle size will not decrease further with increased mixing speed, which can be seen here as plateauing of particle size and PDI at above $ 30 Re in Fig. 2F and later in section 3.5, and this behavior is consistent with earlier reports on microfluidic nanoprecipitation [61]. 
Next, we studied the effect of nanoprecipitation temperature in more detail, by performing three complete repetitions of the experiment to account for batch-to-batch variation.The effect of temperature on particle size distribution is summarized in Fig. 4. Increasing the process temperature from + 20 to + 65 °C increased the average hydrodynamic diameter and reduced the PDI significantly.Increasing the nanoprecipitation temperature improves mixing of the fluids by reduction of viscosity and increase in diffusivity, which is expected to reduce PDI. An increase in PEG-b-PCL particle size as a function of nanoprecipitation temperature was reported by Zhou et al. when THF was used as the organic solvent and it was shown to correspond with the formation of vesicles [20].They hypothesized that evaporation (boiling) of the organic solvent assists in the self-assembly process through bubble formation.In the present work, boiling of acetone was detected within the micromixer at 50 °C and 65 °C, which may indeed modify the flow patterns inside the micromixer and the resulting microbubbles may influence copolymer self-assembly.However, no major differences were seen in the particle morphol-ogy as a function of nanoprecipitation temperature by Cryo-TEM, as a mixture of spheres and worm-like structures was seen in samples prepared at + 22 °C and + 65 °C (Fig. S5, Runs 25 and 28 in Table S2). Interestingly, substantial batch-to-batch variation was observed in sorafenib encapsulation even though particle size, PDI and f-potential data remained very consistent between parallel batches (Fig. S6 C-D).Poor encapsulation was also clearly visible as the formation of SFB precipitate inside the dialysis bag during the solvent removal process.In light of this variation, the loading degree and encapsulation efficiency data corresponding to the data points in Fig. 2 were discarded. Effects of different solvent removal methods As discussed, substantial batch-to-batch variation was observed in sorafenib encapsulation when using the microfluidic process with dialysis-based solvent removal.Formation of crystalline SFB precipitate was evident from visual examination of several of the dialyzed samples and corresponded with low encapsulation efficiency values.We hypothesized that this inconsistent loading may be due to premature drug release during the solvent removal process due to different rates of solvent removal between the methods.To test this hypothesis, several trial runs were performed using different sample collection and solvent removal techniques. For each experimental run, 4.0 mL of sample was prepared by the microfluidic method, using the default parameter values in Table S1.The sample was collected either into an empty container (non-dilutive) or into 20 mL of ion-exchange water (dilutive), in order to see if an increase in water-to-acetone ratio could retard the hypothesized premature drug release, and solvent removal was started immediately.Three different methods were used for solvent removal: dialysis, TFF or evaporation (as detailed in Methods).After solvent removal, regardless of the method used, samples were filtered using 0.45 lm syringe filters (Cellulose acetate, VWR).Three full repetitions of the experiment were performed.The data is presented in Fig. 5. Important differences can be seen between samples prepared using different purification methods.Firstly, the evaporation method yielded significantly smaller particles than the other methods (Fig. 
5A).No major differences were observed in f-potential or PDI values (Fig. 5B,C).As expected from the earlier experiments, substantial variation in drug loading was seen in the dialyzed samples, with drug loading varying between 11.7 % and 0.8 % between parallel batches (Fig. 5D).Solvent removal by evaporation resulted in systematically very low final loading degrees (between 0.7 and 0.9 %).No clear differences between dilutive and non-dilutive sample collection could be observed.Interestingly, drug loading remained the most consistent (between 7.1 and 10.4 %) from batch-to-batch when the TFF method was applied. We hypothesize that these differences in final loading degree between samples prepared with different solvent removal methods are due to different rates of solvent removal.After nanoprecipitation of drug-loaded NPs, the dispersing solution will be supersaturated with drug.If the drug itself starts to precipitate out from the solution as crystals, supersaturation level will be reduced and drug diffusion will be able occur from NPs into solution more rapidly due to the higher concentration gradient.Additionally, if this precipitation of crystalline drug starts when solvent concentration remains elevated, the drug solubility and the rate of drug diffusion will be high, and thus, the drug will be able to diffuse out from the NP rapidly. For example, in the case of the evaporation method, the samples were placed in open glass vials with magnetic stirring at room temperature.In such a setup, the rate of solvent removal is expected to be slow compared to TFF.Similarly, the equilibration of a dialysis system takes several hours (as shown also later in section 3.7) and the entire dialysis process lasted 48 h, whereas for TFF, the entire process lasted $ 1 h.Furthermore, dialysis method offers poor control over the rate of solvent exchange, as the rate may depend on multiple factors, such as stirring efficiency, tightness of the seal of the dialysis bag, and bag surface area.Therefore, it is conceivable that the high final loading degrees obtained with TFF based purification were enabled by rapid solvent removal, which prevented fast premature release of SFB from the NPs.However, further investigation would be needed to confirm whether drug precipitation as crystals started only during the solvent removal process or if a population of nanocrystals formed already during the microfluidic step.Regardless of the underlying mechanism, using TFF-based purification was useful in consistently obtaining a high final SFB loading degree. Effects of the formulation and microfluidic process parameters: TFF-based solvent removal experiments Selected optimization experiments were repeated, implementing the TFF method to reduce batch-to-batch variability in drug loading.Nanoparticles were prepared as previously described, with the exception that 8 mL of dispersion was collected from the micromixer outlet into 40 mL of ion-exchange water and the solvent was removed by tangential flow filtration using the protocol shown in the Methods.A full list of nanoprecipitation parameters is available in Table S3.Finally, the samples were filtered using 0.45 lm syringe filters, similarly to all previous experiments.Data obtained using the TFF-based process are summarized in Figs. 
6 and 7.It is worth noting that only a single batch per treatment condition was prepared.It should also be mentioned that because the manufactured batch size was small compared to the maximum capacity of the TFF apparatus, loss of nanoparticles unavoidably occurred in the filtration cassette and tubing, which limited the maximum attainable NPs yield.Furthermore, it should be understood that the calculated encapsulation efficiency is also affected by this limited yield and thus slightly higher EE and yield values are expected to be obtained with larger batch sizes. Similarly to the dialysis-based method, increasing the nanoprecipitation temperature reduced PDI significantly (from 0.160 ± 0. 007 to 0.121 ± 0.006), whereas the effects of temperature on d H and f-potential were negligible (Fig. 6A).Loading degree was significantly improved by increasing the nanoprecipitation temperature and encapsulation efficiency and NPs yield also showed a small positive trend with increasing temperature (Fig. 7A). As previously seen with the dialysis-based process, fluid flow rate had a strong effect on d H and PDI, indicating that larger and less homogeneous particles are obtained at low Re (Fig. 6B).Furthermore, low Re resulted in minimal yield (13.7 ± 4.9 %) and encapsulation efficiency (7.7 ± 0.1 %) values (Fig. 7B), suggesting that due to poor mixing, significant part of the polymer was not able to self-assemble into nanoparticles, but may have rather formed larger aggregates or precipitate that was removed during the syringe filtration step.This hypothesis is supported by the significantly higher d H and PDI at low Re.These observations are expected for nanoprecipitation in a micromixer device, as faster flow improves the mixing efficiency (see 3.2 for discussion).The maximum flow rate studied (Re = 140) corresponded to 387.5 mL/h total device output. As expected, f-potential of the NPs decreased linearly with the amount of added PCL-b-PEG-COOH (Fig. 6C).Increasing the amount of PCL-b-PEG-COOH also reduced strongly the d H and PDI, which can be explained by electrostatic repulsion of the corona-forming blocks, as described above.Increasing the drug feed did not markedly affect d H , PDI or f-potential (Fig. 6D).Loading of sorafenib increased linearly with increasing drug feed up to a LD value of 15.9 ± 1.0 % at 30 % drug feed, and EE (%) and NPs yield remained remarkably constant with increasing drug feed values.This suggest that even higher loading values could be obtained by the present method by increasing the drug feed further. Drug solubilization Solubility values obtained for SFB in phosphate and acetate buffers at different pH with and without FBS supplementation are shown in Figure S8.For example, SFB solubility in PBS at pH 7.4 was determined to be 7.9 ± 0.9 lg/L.Since a 1.0 mg/mL dispersion of NPs at 15.9 % loading corresponds to 159000 lg/L drug, the achieved drug loading indicates a $ 20000-fold improvement in aqueous solubility.For comparison, previous work by Letchford et al. on PEG-b-PCL micelles reported a 1068-fold and 130100-fold solubility improvements for paclitaxel and curcumin, respectively [62]. Stability of NPs in water over storage Results from the stability study are shown in Fig. 8.The average d H and PDI remained constant for the 4-month study period at both + 7 °C and + 25 °C, indicating high colloidal stability of the PEG-b-PCL NPs as an aqueous dispersion (Fig. 
8A-B).Colloidal stability of these particles is expected due to their small size and the steric stabilization effect of the PEGylated surface.However, as seen in Fig. 8C, SFB content of the NPs decreased rapidly upon storage, with a two-week drug loss of approximately 48 % at + 7 °C and 79 % at + 25 °C.This behavior is likely due to the small size of the NPs and the ability of this small hydrophobic drug to diffuse in the polymer matrix.Light microscopy investigation of the centrifuged stability samples revealed a large population drug crystals in the pellet (Fig. S7). Because particle size remained practically unchanged, it is possible to evaluate changes in particle concentration by scattering intensity, which was accomplished by plotting the derived count rate values calculated by the Zetasizer software (Fig. 8D).The data shows an initial drop in particle concentration, which correlates well with the reduction in SFB content at each temperature, followed by a stable plateau with no further reduction detected within the 4-month experiment.This suggests that no significant degradation of PCL occurred during the study, as the degradation would be expected to reduce particle size and concentration. Collectively, these results suggest that the particles did not significantly agglomerate or degrade during 4 months of storage in ion-exhange water at + 7 °C and + 25 °C.However, the majority of the loaded drug released within weeks of storage, suggesting that the loading of sorafenib was thermodynamically unstable and likely controlled by diffusion in the polymer matrix.Lyophilization of the formulation could be considered to improve storage stability [63]. Sorafenib release rate in vitro Release rate of sorafenib from the optimized formulation, prepared with the TFF-based method, was analyzed in PBS pH 5.5 and pH 7.4 supplemented with 10 % FBS.To allow rational design of the experiment, the solubility of SFB in the release media was first determined.The obtained solubility data in PBS and acetate buffers shown in Figure S8 demonstrate that solubility was increased significantly by the addition of 10 % FBS into the dissolution medium.For example, the drug solubility at PBS pH 7.4 was determined to be 7.9 ± 0.9 lg/L without FBS and 321 ± 22 lg/L with FBS.For dissolution and drug release testing, sink conditions should be ensured, which can be achieved by having at least 3-10 times larger dissolution volume than the saturation volume [57,64].To meet this condition, initial amount of drug inside the dialysis bag was adjusted to 25 ± 2 lg for all experiments, corresponding to one third of SFB solubility in 300 mL of PBS pH 7.4 + 10 % FBS.Further increases in dissolution volume would result in analytical difficulties due to extremely low final concentrations in medium. Results of the release rate experiments are shown in Fig. 
9.The data demonstrates clearly that sorafenib was released from the PEG-b-PCL NPs in a sustained manner compared to the solution control, with 13.6 ± 9.0 % released in 8 h and 25.3 ± 22.7 % released in 24 h from the NPs at pH 7.4.The drug release rate was not significantly dependent on the pH-values tested.High standard deviations of the release values are likely due to losses in the diethyl ether extraction process.Precipitation of protein at the waterorganic interface rendered the efficient separation of the two phases difficult, causing variation in extraction efficiency.Release from the control samples (drug solution) stabilized at $ 70 %, which was likely caused by limited yield of the extraction step, since the solubility analyses confirmed that sink conditions were obeyed.Furthermore, the analysis of dialysis bags contents at 72 h (Figure S9) showed that a complete release (over 99 %) of SFB was achieved for the control samples, which suggests that the limited release values in Fig. 9 were mainly due to losses during the extraction step and not due to limited solubility in release medium. Binding of SFB to the dialysis membrane was quantified by acetone extraction, in a similar separate experiment, details of which are available in Supplementary Information, to be 0.34 ± 0.13 lg, which is 1.4 % of the total amount of drug in the dialysis system of the present experiment.This indicates, that at very low release percentages, binding of SFB may constitute a significant source of error.This corresponds with the small delay in the initial rise of release values, which can also be seen when looking at the early time points in Fig. 9 and in more detail in Figure S10.Another factor contribution to this delay may be the finite time required for SFB to diffuse inside the membrane to establish the concentration gradient and to reach the outside surface. It is also noteworthy that there was a significant overall time delay associated with the dialysis bag method, as full equilibration of the system for the control samples took several hours (Fig. 9), which is not expected for the actual in vivo application scenario of intravenous injection of a drug solution, where fast mixing will take place due to blood flow.Even though the dialysis bag method is the most widely utilized release rate analysis technique for nanoparticulate systems [57], it has several drawbacks.Besides the diffusional barrier caused by the dialysis membrane, another important consideration is that the environment the sample experiences is markedly different in the dialysis bag compared to the human bloodstream, as the inside and outside compartments remain separated and no mechanical mixing occurs inside the dialysis bag. These inherent limitations of the method may cause difficulties in accurate prediction of release in vivo based on in vitro data. Therefore, further development of the analytical method should be considered and other methodologies with higher in vitroin vivo correlation should be explored, as well as using different release media, such as blood plasma.Nevertheless, taking the limitations into account, the method showed clearly that the NPs released the drug in a sustained manner over several days, which is desirable for reducing the drug-related side effects caused by fluctuations in free drug concentration and minimizing the required frequency of injections. 
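As a quick numerical companion to the sink-condition reasoning above, the snippet below checks the ratio between the amount of drug the release medium could dissolve at saturation and the amount loaded into the dialysis bag, using the solubility, volume and dose values quoted in the text; the factor-of-3 threshold is the lower end of the 3-10x rule of thumb cited above.

```python
def sink_ratio(solubility_ug_per_l, volume_l, dose_ug):
    """How many times over the loaded dose the medium could dissolve at saturation."""
    capacity_ug = solubility_ug_per_l * volume_l
    return capacity_ug / dose_ug

# Values quoted in the text: SFB solubility in PBS pH 7.4 + 10 % FBS (321 ug/L),
# 300 mL of release medium, ~25 ug of drug inside the dialysis bag.
ratio = sink_ratio(solubility_ug_per_l=321.0, volume_l=0.300, dose_ug=25.0)
print(f"saturation capacity / loaded dose = {ratio:.1f}")  # ~3.9, sink conditions hold
assert ratio >= 3.0
```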
Conclusions

In this work, a microfluidic process was developed and optimized for the controlled preparation of PEG-b-PCL NPs loaded with sorafenib free base, intended for i.v. administration in cancer therapy applications. The highly hydrophobic drug was successfully solubilized, with up to 16 ± 1 % loading degree at 54 ± 1 % encapsulation efficiency, corresponding to a ≈20000-fold increase in solubility compared to the free drug in PBS. The optimized formulation exhibited spherical morphology, small particle size (d_H < 80 nm) and a uniform size distribution (PDI < 0.2). Overall, the developed formulation demonstrated sustained drug release over several days in vitro, and the NPs showed high colloidal stability in water.

We showed that particle size distribution, morphology, ζ-potential and sorafenib loading could be effectively controlled by adjusting the formulation and process parameters. Notably, the addition of carboxylic acid functionalized polymer was useful in driving the morphology distribution towards solid spherical particles, reducing the PDI and improving the drug loading. In addition, the process temperature and flow rate had the largest effects on formulation properties within the ranges explored. Increasing the nanoprecipitation temperature resulted in increased drug loading, increased mean particle size and reduced PDI. An optimal region of fluid flow rate was identified to maximize process yield and minimize PDI.

Interestingly, the choice of organic solvent removal method had a strong effect on the final encapsulation efficiency. This effect was attributed to different rates of organic solvent removal from the dispersion. When combined with tangential flow filtration, the microfluidic process allowed precise control over NP properties and showed high yield and batch-to-batch reproducibility. The optimized process was also fast, as the complete production of a development batch, from preparation of the initial solutions to final syringe filtration, was possible in < 3 h. In conclusion, we have presented a feasible and well-controlled process for the preparation of sorafenib-loaded PEG-b-PCL nanoparticles, potentially applicable also to different drug-polymer combinations.

Notes. V. Känkänen and V. Balasubramanian are employees at Bayer Oy (Finland). The other authors declare no conflict of interest.

CRediT authorship contribution statement

× 10⁴ g/mol at Ð = 1.26 for PEG-b-PCL-Me and 1.23 × 10⁴ g/mol at Ð = 1.28 for PEG-b-PCL-COOH. By comparing the integrated areas of the PEO protons (3.6 ppm) and PCL protons (4.1 ppm) in the NMR spectra, the weight fraction of PEO was calculated to be v_PEO = 0.162 for PCL-b-PEG-Me and v_PEO = 0.166 for PCL-b-PEG-COOH. Neither FTIR nor ¹H NMR was sensitive enough to verify the presence of the singular carboxylic end group of PCL-b-PEG-COOH. DSC data showed a melting event at +54.4 °C for PCL-b-PEG-Me and +54.8 °C for PCL-b-PEG-COOH; the slightly higher melting point of PCL-b-PEG-COOH may be explained by its higher molecular weight. All results were in good agreement with supplier batch data, and indicate that no significant hydrolysis of the polymers had occurred during storage.

Fig. 2. Effects of the formulation and process parameters on average hydrodynamic diameter (d_H), polydispersity index (PDI) and zeta (ζ)-potential (ZP) at pH 7.4. Results are from single batches per condition, each analyzed in triplicate.
Fig. 4. Effect of nanoprecipitation temperature on average hydrodynamic diameter and polydispersity index. The data are from three complete repetitions of the experiment. Average values ± SD are shown.

Fig. 5. Effects of different NP collection methods (dilutive and non-dilutive) and acetone removal methods (evaporation, dialysis and tangential flow filtration) on the hydrodynamic diameter (A), polydispersity index (B), ζ-potential (C) and SFB loading degree (D) of purified NPs. Average values ± SD (n = 3) are shown; three parallel batches were prepared per condition.

Fig. 8. Stability data of the aqueous dispersion of sorafenib-loaded PEG-b-PCL NPs. Average values ± SD from three repeat measurements on a single sample are shown per time point.

Fig. 9. SFB release profiles of PEG-b-PCL NPs in 1X PBS at pH 7.4 and pH 5.5 supplemented with 10 % FBS. Sorafenib solutions in PEG400:water (1:1) are included as control samples to illustrate the diffusion delay of the dialysis bag method. Average values ± SD from three complete repetitions of the experiment are shown.
Zn-Salphen Acrylic Films Powered by Aggregation-Induced Enhanced Emission for Sensing Applications Zn(II) complexes possess attractive characteristics for supramolecular chemistry, catalysis, and optoelectronic applications, while Zn-Salphen counterparts are also suitable as chemical sensors, although limited by solution-based to date. In this study, we report the synthesis of new polymers from methyl methacrylate, n-butyl acrylate, and a non-symmetrical Zn-Salphen complex. We show that this low-fluorescent complex exhibits aggregation-induced emission enhancement (AIEE) properties and that, the incorporation of AIEE complexes into a polymeric matrix make it possible to achieve fluorescent films with enhanced fluorescence suitable for sensing applications. As a proof of concept, these films could detect acetic acid, showing a decrease of up to 73% in the original fluorescence. Host/guest studies showed a subtle disruption of the emission in aggregates upon treatment with anion guests. These results indicate that an interaction between the guest and Zn-Salphen complex may occur, stabilizing or destabilizing the complex and causing a concomitant increase or decrease in emission. Supplementary Information The online version contains supplementary material available at 10.1007/s10895-023-03399-6. Tetra-coordination plays a critical role in the structure of the complex, providing a highly conjugated structure assisted by the phenyl ring bridging the iminophenyl groups.The Zn center provides rigidity and a coordination site for extra ligands, attracting attention in catalysis [13,14].In this sense, the geometry also facilitates intermolecular Lewis acid-base interaction [15,16].The unsaturated coordination sphere of the metal can be occupied by Lewis bases or other Zn-Salphen units by Zn•••O interaction [17,18], leading to self-assembly structures that are highly attractive in the supramolecular chemistry field [19,20].In addition, to study the behavior of metal complexes in a polymeric architecture, Zn-Salphen has been polymerized in different ways, from polycondensation between diamines and bissalicylaldehydes to metal-catalyzed cross-coupling [21].In many cases, highly ordered structures have been obtained.This approach is suitable, for example, for introducing binding sites for catalytic applications or generating chiral materials with tunable optical properties [22,23]. 
The photoluminescent properties of Zn(II) complexes have been extensively studied [2,4,23,24].The diversity of ligands is undoubtedly responsible for the differences in photophysical behavior.In particular, Zn-Salphen complexes present interesting luminescent properties, such as fluorescence and electroluminescence for applications in ion sensing [2], OLEDs [3], etc.It is well known that the photoluminescence of these complexes is generally due to transitions from the highly delocalized π system in the organic framework [25].Spectroscopic studies in solution have demonstrated the consequences of aggregates in solution, which in most cases produces a decrease in the fluorescence of many molecules.However, the structure plays a determinant role in the path of deactivation after aggregate formation [26,27].In many cases, for highly conjugated π systems, π-π stacking is involved in fluorescence quenching.It is important to emphasize that different interactions between aggregates are possible thanks to the planar geometry and the axial coordination with the metal center.These interactions are responsible, in other cases, for the enhancement of emission [24,28]. Aggregation-induced enhanced emission (AIEE) is a phenomenon of great interest in the field of fluorescent molecules, derived from aggregation-induced emission (AIE) [29].Contrary to most fluorophores, whose emission is quenched by concentration, the increase in fluorescence promoted by aggregation is less common in molecules.This phenomenon provides an important tool for multiple applications in the solid state [30].The mechanism of this phenomenon has been well studied, and the increase or turnon of the emission has been assigned in some cases to the restricted intramolecular motions (RIM) in molecules whose structure is capable of dissipating the energy of the excited state in solution, e.g., by rotation of phenyl or heterocyclic rings.However, in aggregates, these same rotations are sterically hindered. 
AIEE has also been reported in Zn complexes [24,28] and macrocyclic disalphens [31].However, it has never been reported in mononuclear non-symmetrical Zn-Salphen complexes.Furthermore, the copolymerization of complexes with acrylic monomers to generate thin films to study host-guest interactions or use as sensors in the solid state seems to be a reasonable proof of concept.Recently, we have studied the incorporation of metal-Salphen complexes (Ni, Cu, and Zn) into a polymer matrix by radical polymerization with acrylic monomers to generate materials with binding sites available to coordinate anions [32][33][34].It was found that the Zn-Salphen complex acts as a recognition unit for different guests, producing differentiated electrochemical responses against fluoride or acetate, depending on the nature of the guest and the supramolecular environment afforded synergistically by an acrylic polymer chain.However, to date, there are no reports of solid-state fluorescent sensors based on acrylic matrices and Zn-Salphen receptors.This is a significant gap in the literature, as such sensors would be highly desirable for the detection of metabolites of biological relevance in non-communicable diseases (NCDs), such as acetic acid.Acetic acid is a weak organic acid that is naturally produced in the body.It is also found in a variety of foods and beverages, such as vinegar, wine, and beer.Acetic acid can also be produced by bacteria that live in the mouth and gut.In NCDs, such as diabetes, cancer, and heart disease, the levels of acetic acid in the body can be altered.For example, people with diabetes often have higher levels of acetic acid in their blood.This is because the body produces more acetic acid when it breaks down glucose for energy.Therefore, the detection of acetic acid in NCDs can be used as a biomarker for these diseases.The development of solid-state fluorescent sensors based on acrylic matrices and Zn-Salphen receptors would be a significant advance in the field of biosensing.Such sensors would be portable, easy to use, and sensitive to low levels of acetic acid.They could be used to diagnose NCDs at an early stage, which would improve patient outcomes. 
Synthesis and Characterization Complex 1 was prepared as described elsewhere by the condensation of (E)-2-(((2-aminophenyl)imino)(phenyl)methyl)phenol (ketimine) and 2-hydroxy-3-allylbenzaldehyde and in situ coordination to Zn(OAc) 2 •2H 2 O; the identity of the final product was confirmed by 1 H-NMR and FTIR against the literature [34].The polymers were synthesized by chemically initiated bulk polymerization using azobisisobutyronitrile (AIBN) (Scheme 1).Polymerization proceeded smoothly at 80 °C, yielding the polymers p-1A (50:50 MMA:n-BuA) and p-1B (40:60 MMA:n-BuA), in which complex 1 was present in 2% of the total mass and the acrylic components were at 98%.The monomer proportions used in this study were selected using a wider full factorial design of experiments carried out as a pre-screening process.The goal of such a preliminary screening of systems was to obtain a balance of stiffness, flexibility, and cohesiveness in films prepared from each formulation under a casting from solution approach, so as to discard those specimens with a suboptimal performance.In this regard, the monomers methyl methacrylate (MMA) and n-butyl acrylate (n-BuA) were varied in weight ratios ranging from 30:70 to 60:40 (i.e.30:70, 40:60, 50:50, 60:40) so as to complete the rest of the composition apart from 1 (98%).The ratios MMA:n-BuA that afforded unviable films were those at 30:70 and 60:40, which were then discarded and are not further discussed in this work. Polymers were characterized by gel permeation chromatography (GPC).Analysis showed high molecular weight ( M w ) values for both polymers, p-1A (4.0 × 10 5 g mol −1 ) and p-1B (3.3 × 10 5 g mol −1 ), and a small decrease in dispersity ( M w /M n ) from 2.17 to 2.0 as the amount of MMA increased.Degrees of polymerization (DP) of 1665 ( X n ) and 3622 ( X w ) for p-1A and 1488 ( X n ) and 2985 ( X w ) for p-1B were esti- mated.Estimates of the average number of Zn-Salphen units ranged between 7.4-16.1 and 6.6-13.2 for p-1A and p-1B, respectively.Interestingly, the polymers showed good stability up to 342 °C in the case of p-1A and 318 °C for p-1B (5% weight loss).Differential scanning calorimetry (DSC) showed glass transition temperatures of 21.3 °C for p-1A and 34.3 °C for p-1B. Polymers possessed suitable characteristics to form films easily by simple solvent evaporation.The solutions were prepared by dissolving 70 mg of the respective polymer in 1.2 mL of filtrated THF.The solutions were then rapidly spread onto a dust-free glass microscope slide (7.6 cm × 2.6 cm).The glass slide was left in a Petri dish until the solvent evaporated, and then dried at 80° C in a preheated oven for variable time.Finally, the membranes were immersed in de-ionized water and separated by pulling out from the glass slide.The thickness of the films was (89 ± 11) μm and (77 ± 14) μm for p-1A and p-1B, respectively. Photophysical Properties of Zn-Salphen Complex The absorption spectra of complex 1 were recorded in THF, CHCl 3 , and mixtures of THF:H 2 O (Fig. 1).As observed in coordinating solvents such as THF, two bands were identified, the absorption maxima at 300-304 nm, and an intense band at 398 nm.On the other hand, absorption of 1 in the less coordinative solvent CHCl 3 results in a decrease in the absorption and a hypsochromic shift of the band from 398 to 382 nm, and a shoulder at ca. 430 nm.Absorption bands at 300 nm and 398 nm are assigned to the π-π* and n-π* intraligand transitions, respectively [9]. 
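Returning briefly to the copolymer characterization above, the degree-of-polymerization and per-chain complex estimates follow from simple ratios of the GPC molar masses to the average repeat-unit mass and to the molar mass of the polymerizable complex, weighted by the 2% complex mass fraction. A hedged sketch of that arithmetic is shown below; the repeat-unit masses are standard literature values for MMA and n-BuA, while the molar mass used for complex 1 is a placeholder, since it is not restated in this section.

```python
def chain_estimates(m_chain, w_complex, m_complex, monomer_masses, monomer_weight_fracs):
    """Estimate the degree of polymerization and the number of complex units per chain.

    m_chain              : average chain molar mass from GPC (Mn or Mw) [g/mol]
    w_complex            : mass fraction of Zn-Salphen complex in the feed (e.g. 0.02)
    m_complex            : molar mass of the polymerizable complex [g/mol] (assumed)
    monomer_masses       : molar masses of the acrylic monomers [g/mol]
    monomer_weight_fracs : their weight fractions within the acrylic part of the feed
    """
    # average molar mass of an acrylic repeat unit for a given weight composition
    m_repeat = 1.0 / sum(w / m for w, m in zip(monomer_weight_fracs, monomer_masses))
    dp = m_chain * (1.0 - w_complex) / m_repeat   # acrylic repeat units per chain
    n_complex = m_chain * w_complex / m_complex   # complex units per chain
    return dp, n_complex

# p-1A-like example: Mw = 4.0e5 g/mol, 50:50 MMA:n-BuA by weight, 2 % complex.
# MMA = 100.12 g/mol, n-BuA = 128.17 g/mol; 600 g/mol for the complex is a placeholder.
print(chain_estimates(4.0e5, 0.02, 600.0, (100.12, 128.17), (0.5, 0.5)))
```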
The trend of Zn-Salphen complexes to form aggregates in less coordinative solvents or at higher concentrations has been previously studied [18][19][20][21][22][23]31].Intermolecular Lewis acid-base interactions, e.g., between the Zn center in one complex and O of another complex, have been proposed.Concomitant hypsochromic (H aggregates) or bathochromic (J aggregates) shift of fluorescent emission has been observed depending on the relative orientation between aggregates.Thus, the decrease in the absorption of the band at 398 nm suggests interaction by aggregates, and considering the blue-shifted band measured in chloroform, it can be hypothesized that H aggregates make a major contribution to the spectrum.Then, the spectrum of 1 in aggregates, induced by THF/water mixture, should be consistent with the formation of that kind of aggregates, which is the case for this complex, judging by the decrease in absorbance and hypsochromic shift of the band at the same zone. The Zn-Salphen complex used in this work was found to be low emissive in THF solution, although in chloroform it displays an increase in fluorescence of almost 3 times (Fig. 2).The fluorescent emission λ max suffers minimal change, passing from 524 nm in THF to 523 nm in the less coordinative solvent chloroform, evidencing a poor solvatochromic effect.In this sense, many studies on Zn(II) complexes have pointed toward dimeric aggregates [9], intraligand charge transfer, or π-π stacking as the responsible mechanism for the fluorescence quenching [22].In any case, it is evident that unlike those previous Zn complexes, less coordinative solvent promotes the formation of aggregates that are significantly more fluorescent than the de-aggregated form of complex 1. AIEE Behavior of Complex 1 An aggregation-induced emission enhancement (AIEE) phenomenon was interestingly observed in the complex 1 in the solvent system THF/water (Fig. 3).These AIEE studies showed that increasing the fractional water content in a THF solution of complex 1 promotes the formation of aggregates, which in turn increase the fluorescence intensity.This enhancement in the fluorescence was observed in a first step upon adding 10% of water, then, a plateau was reached up to 60%.After 70% of water content, a slight increase was found again.Finally, the largest enhancement was measured at 80% of water content, and after 90% an increase in the original emission of about 18 times was observed. The wavelength of the emission maxima drifted at different water contents.For example, between 0 and 60%, it oscillated from 522 to 526 nm.A stronger hypsochromic shift was observed at higher water contents.For example, at 70%, the λ max (519 nm) is 5 nm shifted compared to complex 1 in THF (λ max = 524 nm).Although stronger changes are observed from 80% (λ max = 507 nm) and 90% (λ max = 511 nm) of water content with 17 nm and 13 nm, respectively, in these solutions the effect on the emission maxima and intensity of the aggregates is more pronounced due to their high presence in the solution. 
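A convenient way to summarize AIEE data of the kind described above is to report the enhancement factor I/I0 at each water fraction, taking the pure-THF solution as the reference. A minimal sketch is given below; the intensity values in the example are made-up placeholders shaped like the trend in the text, not the measured data of Fig. 3.

```python
def aiee_enhancement(intensities_by_fw):
    """Relative emission enhancement I/I0 versus water fraction fw.

    intensities_by_fw: dict mapping water fraction (0-1) to peak or integrated
    fluorescence intensity; fw = 0 (pure THF) is taken as the reference I0.
    """
    i0 = intensities_by_fw[0.0]
    return {fw: i / i0 for fw, i in sorted(intensities_by_fw.items())}

# Placeholder numbers only: plateau at intermediate fw, strongest enhancement at 80-90 %.
example = {0.0: 1.0, 0.1: 2.1, 0.5: 2.3, 0.7: 3.0, 0.8: 16.5, 0.9: 18.0}
for fw, factor in aiee_enhancement(example).items():
    print(f"fw = {fw:.1f}: I/I0 = {factor:.1f}")
```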
In accordance with similar studies in highly concentrated solutions [27], the changes in the UV-vis spectra and the behavior in fluorescence suggest that once the complex starts to aggregate due to insolubility, the intermolecular coordination (Zn•••O) and sterically demanding phenyl ring enable the formation of various aggregate species.Therefore, the fluorescent emission is principally related to the oligomeric aggregates.In addition, this result suggests that π-π stacking interaction is not sufficiently strong to deactivate the excited state upon aggregation.Hence, there is a mechanism for the enhancement of fluorescent emission dependent on the aggregate formation, as evidenced by the hypsochromic shift of the fluorescent and absorption maxima.This led us to consider the restricted intramolecular motion (RIM) [29] of the phenyl group attached to the imine as the driving force of the fluorescence enhancement.The results obtained in the AIEE experiment also suggest that the integration of complex 1 into a polymeric matrix would open the door to aggregation-induced enhanced emission in polymer-based optical sensors in either solution or the solid state, thus facilitating schemes of low-cost portable detection of suitable analytes. Photophysical Properties of Polymer UV-visible spectra of the polymers were obtained in films and solutions (Fig. 4).The spectra of p-1A and p-1B obtained in THF show two principal bands, one with a maximum (λ max ) at 300 nm and a low-energy absorption band at 400 nm.The spectra in films show an increasing absorption from approximately 480 nm to 290 nm and an apparent band at 382 nm and 386 nm for p-1A and p-1B, respectively.This indicates that the local maximum at 400 nm undergoes a hypsochromic shift (14-18 nm) when moving from solution in THF to film.The UV-visible spectra of the polymers and complex 1 in THF remain practically unchanged, which shows that there is no interaction by aggregation.Analogous spectra were observed for the polymers in chloroform and films, suggesting that in the solid state, interactions by aggregates like those found in non-coordinating solvents can also occur.Emission spectra of the polymers were recorded in solution and film (Fig. 5).The polymers in solution showed an intensity quite similar to that displayed by complex 1.Once again, chloroform promoted the increase in fluorescence, likely due to aggregates.According to previous observations for complex 1 in solution, polymer p-1A has a fluorescent emission centered at 510 nm in THF, while in chloroform it increases to 512 nm.A strong shift of 16 nm was observed in film, with the emission maxima centered at 526 nm.On the other hand, p-1B showed a modest shift of the λ max from THF (524 nm) to chloroform (522 nm), and finally, a small bathochromic shift of 1 nm was observed in film (λ max = 525 nm). Photoluminescence in the solid state exhibits an intense enhancement of as great as 38 times in the case of p-1A and 65 times for p-1B, compared to their solutions in THF.Such a significant increase in fluorescence is consistent if we consider the aggregation-induced emission enhancement shown by complex 1.In this sense, it can be argued that restricted intramolecular motion (RIM) of the Zn-Salphen unit in polymers could enhance the fluorescence in the solid state, given the low Zn-Salphen units (7.4-16.1)relative to the total chain size (1665-3622) for p-1A as an example. 
Sensing Properties

The supramolecular recognition of a copolymerized Zn-Salphen scaffold has been previously studied using non-kinetically monitored steady-state electrochemical techniques against different guests [34]. Such work showed that the binding of anions to the Zn-Salphen complex resulted in a change in the polymer's charge, which could be measured using potentiometry. On the other hand, interaction of Lewis bases with the metal center is known to produce de-aggregation of self-assembled Zn-Salphen complexes [19,26]. This phenomenon is due to the fact that Lewis bases can donate electrons to the metal center, which weakens the pre-existing intermolecular metal-ligand bonds (M‧‧‧O) and leads to the dissociation of the assemblies. Since the fluorescent response is a characteristic of the aggregation state of the complex and, as shown in an earlier section, copolymer films p-1A and p-1B maintain a photophysical emission trend compatible with the behavior shown by aggregated molecular counterparts (Fig. 5), it is reasonable to infer that this phenomenon could be exploited in polymer films for optical detectors operating in a turn-off fashion. In such a device, the addition of a suitable Lewis-basic guest would cause the pre-existing Zn-Salphen conglomerate domains to de-aggregate, which would result in a negative change in the complex's fluorescent emission at a local level. This change in fluorescence could be used to detect the presence or absence of a suitable analyte in a simplified monitoring format that does not require electrochemical monitoring schemes. However, the capacity of certain guests to spontaneously displace proton interchange equilibria, such as those with Brønsted-Lowry weak base behavior, remains unclear. It is unknown whether this capacity would drive an eventual host-guest response positively or negatively.

To address these questions, we evaluated the optical sensing properties of p-1A and p-1B films in the solid state. We immersed the films in 10 mM solutions of selected guests, including fluoride (F−), chloride (Cl−), bromide (Br−), acetate (OAc−), and thiocyanate (SCN−). The fluorescence emission spectra of the films were measured at different times for one hour (Figs. 6 and 7 and Supplementary Figs. S1-S10). We also tested the protonated counterpart of one of the earlier guests, acetic acid (AcOH), as a surrogate for a metabolically relevant derivative. This further clarified the remaining open questions.

Fig. 4 Normalized absorption spectra of Zn-Salphen polymers p-1A and p-1B in THF (dashed bold black), chloroform (dashed light gray) and film (solid bold). Concentration of 1 mg/mL of polymer at 2% nominal content of complex 1

Fig. 5 Fluorescent emission (λ_exc = 365 nm) of p-1A (red) and p-1B (blue) in THF (dashed bold black), chloroform (dashed light gray) and film immersed in water (solid bold). Concentration of polymer in solution of 0.5 mg/mL with 2% of nominal content of complex 1 in the polymer. Inset images: fluorescence of p-1B dissolved in chloroform (right) and cast as a film (left) upon irradiation with a 365 nm lamp
In general, small changes in emission were observed for the anions throughout the time of each experiment. For example, in the case of film p-1A, a slight increase in fluorescence was observed against fluoride after 10 min, followed by a drop of about 15% of the total emission (Figs. 6a and 7). Thiocyanate showed a similar trend, where an overall increase of almost 10% was achieved after 20 min, followed by a return to lower intensities in a semi-asymptotic fashion (Fig. S1). Chloride and bromide produced an enhancement of 3% and 8%, respectively (Figs. S2 and S3) after 10 to 20 min, whereas, on the contrary, acetate was the anion with the highest turn-off response (Fig. S4), producing a quench of 24% of the total fluorescence after 60 min. On the other hand, film p-1B displayed a modest response, although the effect was the inverse of that shown by the earlier counterpart. For this film, chloride, bromide, and thiocyanate were quenchers, with quenching efficiencies of 1%, 9%, and 1%, respectively (Figs. S5, S6 and S7). Fluoride and acetate showed a fluorescence enhancement of only 3% and 5%, respectively, after approximately 20 min (Figs. S8 and S9). This was followed by a return to more conservative values over the longer term. However, the results also suggest that the response and its kinetics are related to the composition of the films. De-aggregation apparently occurs more easily for MMA:n-BuA ratios of 50:50 than 40:60. Therefore, acetate is the most promising candidate to show a proof-of-concept turn-off behavior in film p-1A. We also proposed using AcOH, the protonated counterpart of this anion, to explore whether its incorporation would promote a differential change in emission in both compositions, following a similar trend of larger and faster responses in p-1A than in p-1B. In this regard, a significant difference was observed in the presence of 10 mM acetic acid. Film p-1A showed a remarkable case, with a 21% quenching of the total fluorescence at 10 min and a final quenching of 73% at 1 h (Figs. 6b and 7). Film p-1B displayed a similar behavior, with a more conservative 49% quenching (Figs. 7 and S10). Further experiments revealed that the Limits of Detection (LODs) of the films against stepwise increased concentrations of AcOH also depended on composition: p-1A achieved a LOD of 1.02 × 10⁻³ M, while the corresponding LOD for p-1B is given in Figs. S11 and S12. In this sense, the quantum yields (%Φ) of the systems (Fig. S13) offered slightly different values: %Φ for p-1A was 0.13, while %Φ for p-1B was 0.15. Both values were determined in THF solutions, but it remains unclear how these values relate to the overall behavior in the solid state. In addition, competition experiments were performed on the films. In these experiments, pristine specimens were first exposed to the anions individually, and then exposed to AcOH (Fig. S14). This was done to determine whether or not AcOH could displace the former in the solid state and thus yield a response in a trend compatible with the earlier results. The results showed that the systems mainly responded to the protonated counterpart of acetate.
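The quenching efficiencies quoted above are simply 1 − I/I0 expressed as a percentage, and the LODs were obtained from calibration against stepwise additions of AcOH. The sketch below reproduces that arithmetic; note that the 3σ/slope criterion used for the LOD is a common convention assumed here for illustration, and the calibration numbers are placeholders rather than the data of Figs. S11-S12.

```python
import numpy as np

def quenching_efficiency(i0, i):
    """Percent loss of fluorescence relative to the pristine film."""
    return 100.0 * (1.0 - i / i0)

def lod_3sigma(concentrations, responses, blank_sd):
    """Limit of detection from a linear calibration: LOD = 3 * sigma_blank / slope."""
    slope, _ = np.polyfit(concentrations, responses, 1)
    return 3.0 * blank_sd / abs(slope)

print(quenching_efficiency(i0=100.0, i=27.0))        # 73 %, as for p-1A with AcOH
# Placeholder calibration data (response = I0 - I against AcOH concentration).
c = np.array([0.0, 0.5e-3, 1.0e-3, 2.0e-3, 4.0e-3])
r = np.array([0.0, 4.0, 8.5, 16.0, 33.0])
print(lod_3sigma(c, r, blank_sd=2.8))                # on the order of 1e-3 M
```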
These results suggest that the interaction between anions and the Zn-Salphen complex can produce either a quenching or an enhancement of the fluorescent emission, depending on several possible reasons that may occur individually or collectively.These reasons include: 1) the capacity of each guest to interact with the binding sites of the Zn-Salphen complex; 2) the ability of each guest to disrupt interchain aggregation that occurs at the domains containing local assemblies of Zn-Salphen; 3) the ability of the films to effectively internalize each guest toward the Zn-Salphen units and to produce host-guest association events in a differential manner, depending on the internal supramolecular environment derived from composition.Given this scenario, it is reasonable to infer that the strongly marked quenching produced by acetic acid is more likely due to a facilitated migration of the neutral guest, in combination with proton exchange pathways that facilitate de-aggregation at the Zn-Salphen domains.However, further studies are needed to either confirm or refute this possibility.For example, such studies could use metal-Salphen counterparts that either remain fluorescent in solid state films via AIEE mechanisms or do not proceed under emission schemes as the latter one, and, simultaneously, exhibit association against guests in a similar manner.However, these studies would fall outside the scope of the present work and are thus not included in the study. Conclusions The fluorescence evaluation of the non-symmetrical Zn-Salphen complex 1 was reported for the first time.In a coordinating solvent, the complex exhibited very low fluorescence.However, in a less coordinating solvent, the emission was significantly enhanced due to the aggregated form of the complex.This magnification of the emission (18 times) in the aggregated state induced by the change in the solubility of the medium is indicative of AIEE behavior of molecule 1, which is likely due to restricted intramolecular rotation of the phenyl ring attached to the imine group.This finding opens the door for further investigation into the modification of the substituents in this position.Furthermore, the synthesis of new co-polymers was accomplished by radical polymerization of complex 1 and acrylic monomers MMA/ n-BuA at different compositions (i.e.30:70, 40:60, 50:50, 60:40) through a full factorial design of experiments, from which, functional formulations (MMA/n-BuA: 50:50 and 40:60) were selected for further evaluations.The new polymers p-1A (50:50 MMA:n-BuA) and p-1B (40:60 MMA:n-BuA) thus prepared possess excellent physical properties, and their respective films were also easily prepared.Fluorescent measurements of both polymers in solutions of coordinating and non-coordinating solvents showed no significant difference from the isolated complex 1.However, a large increase of up to 65 times in the photoluminescent emission of the films was found in comparison to their respective solution-based counterparts.The enhanced fluorescence in films was further explored to measure host-guest interactions.The results showed small changes in the emission of the polymers after treatment with guests.The difference in behavior between the guests is hypothesized to be dependent on the capacity of the anion to interact strongly with either the polymer or the Zn-Salphen unit, coordinate the complex, and generate de-aggregation.Finally, a remarkable decrease in fluorescence of more than 73% and 49% for p-1A and p-1B, respectively, in the presence of 
acetic acid was found for both films.This opens the possibility for the films to be used as portable, optical and binary (yes/no: turn-off/turnon) sensing platforms that eliminated the need for expensive measuring devices. Materials and Methods All starting compounds and solvents were purchased from Sigma-Aldrich and used without further purification, unless otherwise stated.Synthesis, purification and spectroscopical confirmation of 1 was carried out according to procedures reported earlier by our group [34].TGA characterization was carried out under air atmosphere at 10 °C min −1 on a Perkin Elmer TGA400 apparatus from 30 °C to 500 °C.DSC data was measured under N 2 with an increase of 10 °C min −1 on Mettler Toledo DSC1 calorimeter at Unidad de Servicios de Apoyo a la Investigación y a la Industria (USAII) of the School of Chemistry-UNAM; conditioning cycles: 1) r.t. to 130 °C, 2) isothermal at 130 °C for 10 min, 3) 130 °C to − 100 °C, 4) isothermal at − 100 °C for 10 min; measurement cycle: 5) − 100 °C to 250 °C.Gel permeation chromatography (GPC) determinations were made on an Agilent Technologies 1290 infinity UHPLC with a Q-TOF detector 6530 DUAL AJ ESI using chromatography-grade tetrahydrofuran as solvent at Unidad de Servicios para la Industria Petrolera (USIP) of the School of Chemistry-UNAM.All nuclear magnetic resonance (NMR) measurements were carried out on a Magritek Spinsolve 80 MHz spectrometer at room temperature, and chemical shifts are given in parts per million vs TMS.UV-Vis absorption measurements were performed in a Dynamica Halo XB-10 UV-Vis apparatus.All fluorescent measurements were performed in a Hinotek F96pro fluorescence spectrophotometer apparatus with an emission filter (λ exc = 365 nm). Synthesis and Characterization of Copolymers p-1A and p-1B A mixture of Zn-Salphen complex 1, MMA and n-BuA was homogenized.Then, 200 μL of THF, and azobisisobutyronitrile (AIBN) (10 mg/g total mass ) was added to the monomer mixture.The resulting solution was degassed by bubbling argon into the mixture.The reaction was sonicated at 75 °C in a Sonorex Digitec ultrasonic bath (Bandelin electronic GmbH & Co. KG, Berlin, Germany) during 30 min, then ultrasonic stir was turn off and the reaction was kept at 80 °C during 30 min. Spectroscopic and Fluorescence Measurements Study with Zn-Salphen complex 1. Measurements of the UV-Vis absorption and fluorescent spectra in solution were achieved at final concentration of 5.04 × 10 -5 M in THF, chloroform or the mixture THF:H 2 O. Spectra of polymers p-1A and p-1B in solution were obtained from 0.5 g/mL solutions of the respective polymer in THF or CHCl 3 .Film spectra were obtained by direct measure of absorption or fluorescent emission using special settings for solid state specimens. AIEE study of complex 1.A 1.01 × 10 -3 M stock solution of 1 in THF was prepared.Then, 2 mL of a solution with a final concentration of 5.04 × 10 -5 M was prepared, by adding 100 μL of the stock solution and the corresponding fraction of THF, followed by a dropwise addition of the corresponding fraction of water under stirring.Immediately, the fluorescence spectrum was acquired in the range between 390-700 nm. 
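For the AIEE titration described above, each 2 mL sample combines a fixed 100 μL aliquot of the THF stock with complementary volumes of THF and water. A small helper for planning those volumes is sketched below, using the stock concentration and aliquot size given in the text; it is only a convenience script, not part of the reported procedure.

```python
def aiee_series(water_fracs, total_ml=2.0, stock_ml=0.100, stock_mM=1.01):
    """Volumes of THF and water needed for each target water fraction.

    The 100 uL aliquot of 1.01 mM stock in 2 mL gives ~5.0e-5 M final
    concentration, close to the value quoted in the text.
    """
    plan = []
    for fw in water_fracs:
        water_ml = total_ml * fw
        thf_ml = total_ml - water_ml - stock_ml   # the stock aliquot is itself THF
        if thf_ml < 0:
            raise ValueError(f"water fraction {fw} leaves no room for the THF make-up")
        final_M = stock_mM * 1e-3 * stock_ml / total_ml
        plan.append((fw, round(thf_ml, 3), round(water_ml, 3), final_M))
    return plan

for row in aiee_series([0.0, 0.1, 0.5, 0.8, 0.9]):
    print(row)
```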
Fluorescence sensing of films p-1A and p-1B.Solutions of the corresponding guest at concentration 0.01 M were prepared using deionized water (> 18.2 MΩ cm) and the respective sodium or potassium salts: SCN − , OAc − , Cl − and Br − from potassium, F − from sodium.In a typical experiment, polymer strip film (ca.1.0 cm × 2.5 cm) was fixed inside the quartz cuvette using the setting for solid state specimens, then 3 mL of the corresponding analyte (anion guest or acetic acid) was poured inside the cuvette.Then, fluorescent emission was repeatedly measured every 10 min from 390 to 700 nm. Quantum yield determinations were carried out under compatible conditions, similar to those described for Fig. 4; for solubility reasons, they were done in THF in the case of the polymer samples, while methanol was employed for the reference Rhodamine B. These experiments were done using 365 nm as excitation wavelength and fluorescent integration was achieved for the ranges 430 nm to 680 nm for the reference and between 530 and 680 nm for the polymer samples to avoid biased analyses due to tailing trends. Fig. 1 Fig. 1 Normalized absorbance of complex 1 in chloroform and THF upon addition of fractional volumes of water up to 2:3 (THF:H 2 O).Final concentration of 5.04 × 10 -5 M of 1 in all solutions Fig. 3 Fig. 3 Normalized fluorescence (λ exc = 365 nm) of complex 1 upon addition of fractional amounts of water.Final concentration of each solution was 5.04 × 10 -5 M. Inset images: fluorescence of 1 in THF solution (left) and forming aggregates (right) upon irradiation with a 365 nm lamp
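The quantum yields mentioned in the methods above were obtained relative to Rhodamine B. The standard single-point comparison, Φ = Φ_ref · (I/I_ref) · (A_ref/A) · (n/n_ref)², is sketched below; the reference quantum yield and the example intensities and absorbances are assumptions for illustration, not values reported in this work.

```python
def relative_quantum_yield(i_sample, a_sample, n_sample,
                           i_ref, a_ref, n_ref, phi_ref):
    """Single-point relative quantum yield.

    i_* : integrated emission intensities (same excitation wavelength)
    a_* : absorbances at the excitation wavelength (ideally < 0.1)
    n_* : refractive indices of the solvents
    phi_ref : literature quantum yield of the reference dye (assumed here)
    """
    return phi_ref * (i_sample / i_ref) * (a_ref / a_sample) * (n_sample / n_ref) ** 2

# Illustrative numbers only: polymer in THF (n = 1.407) vs Rhodamine B in
# methanol (n = 1.329); phi_ref = 0.70 is an assumed literature value.
phi = relative_quantum_yield(i_sample=1.2e6, a_sample=0.08, n_sample=1.407,
                             i_ref=5.5e6, a_ref=0.09, n_ref=1.329, phi_ref=0.70)
print(f"relative quantum yield ~ {phi:.3f}")
```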
Connected power domination in graphs

The study of power domination in graphs arises from the problem of placing a minimum number of measurement devices in an electrical network while monitoring the entire network. A power dominating set of a graph is a set of vertices from which every vertex in the graph can be observed, following a set of rules for power system monitoring. In this paper, we study the problem of finding a minimum power dominating set which is connected; the cardinality of such a set is called the connected power domination number of the graph. We show that the connected power domination number of a graph is NP-hard to compute in general, but can be computed in linear time in cactus graphs and block graphs. We also give various structural results about connected power domination, including a cut vertex decomposition and a characterization of the effects of various vertex and edge operations on the connected power domination number. Finally, we present novel integer programming formulations for power domination, connected power domination, and power propagation time, and give computational results.

Introduction

Electrical power companies must constantly monitor their electrical networks in order to detect and respond to failures in the networks. To this end, they place devices called Phase Measurement Units (PMUs) at select locations in the system. A PMU can directly measure the currents and phase angles of all transmission lines incident to its location. Moreover, physical laws governing electrical circuits (e.g. Kirchhoff's circuit laws) can be leveraged to gain information about parts of the network which are not directly observed. Due to the high cost of PMUs, it is a problem of interest to find the smallest number of PMUs (and their locations) from which an entire network can be observed. This PMU placement problem has been explored extensively in the electrical engineering literature; see [6,7,16,35,39,41,44,48], and the bibliographies therein for various placement strategies and computational results.

Haynes et al. [32] formulated the PMU placement problem as a dynamic graph coloring problem, where vertices represent electric nodes and edges represent connections via transmission lines. In this model, a set of initially colored vertices in a graph (corresponding to locations of PMUs) causes other vertices to become colored (i.e. to be observed by the PMUs); the goal is to find the smallest set of initially colored vertices which causes all other vertices to become colored. More precisely, let G = (V, E) be a graph and let S ⊂ V be a set of initially colored vertices. In the first timestep (the domination step), every vertex in the closed neighborhood of S becomes colored. In each subsequent timestep (the zero forcing steps), any colored vertex with exactly one uncolored neighbor forces that neighbor to become colored. The set S is a power dominating set of G if, by repeatedly applying this color change rule, every vertex of G eventually becomes colored; the minimum cardinality of a power dominating set of G is the power domination number of G, denoted γ_P(G). A power dominating set S for which the induced subgraph G[S] is connected is a connected power dominating set, and the minimum cardinality of such a set is the connected power domination number of G, denoted γ_P,c(G).

Preliminaries

Throughout the paper, G = (V, E) will denote a simple undirected graph. Two vertices v, w ∈ V are adjacent, or neighbors, if {v, w} ∈ E. If v is adjacent to w, we write v ∼ w; otherwise, we write v ≁ w. The neighborhood of v ∈ V is the set of all vertices which are adjacent to v, denoted N(v; G); the degree of v ∈ V is defined as d(v; G) = |N(v; G)|. The closed neighborhood of v ∈ V is the set N(v; G) ∪ {v}, denoted N[v; G]. The dependence of these parameters on G can be omitted when it is clear from the context. The closed neighborhood of a set S ⊂ V is the set N[S; G] = ∪_{v∈S} N[v; G]. Given S ⊂ V, the induced subgraph G[S] is the subgraph of G whose vertex set is S and whose edge set consists of all edges of G which have both endpoints in S. The number of connected components of G will be denoted by c(G); an isomorphism between graphs G_1 and G_2 will be denoted by G_1 ≅ G_2. A leaf, or pendant, is a vertex with degree 1. A cut vertex is a vertex which, when removed, increases the number of connected components in G.
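To make the color change process above concrete, the following sketch checks whether a given vertex set is a (connected) power dominating set by applying the domination step and then the zero forcing rule until no more forces are possible, and a small brute-force search then recovers γ_P,c for toy graphs. This is illustrative code only, not the algorithms or integer programming formulations developed in this paper.

```python
from itertools import combinations
import networkx as nx

def is_power_dominating(G, S):
    """Apply the domination step, then the zero forcing rule exhaustively."""
    colored = set(S)
    for v in S:                                   # domination step: color N[S]
        colored.update(G.neighbors(v))
    changed = True
    while changed:                                # zero forcing steps
        changed = False
        for v in list(colored):
            uncolored = [u for u in G.neighbors(v) if u not in colored]
            if len(uncolored) == 1:               # exactly one uncolored neighbor
                colored.add(uncolored[0])
                changed = True
    return len(colored) == G.number_of_nodes()

def connected_power_domination_number(G):
    """Smallest connected power dominating set, by brute force (tiny graphs only)."""
    V = list(G.nodes())
    for k in range(1, len(V) + 1):
        for S in combinations(V, k):
            if nx.is_connected(G.subgraph(S)) and is_power_dominating(G, S):
                return k, set(S)
    return None

print(connected_power_domination_number(nx.path_graph(6)))  # a single vertex suffices
print(connected_power_domination_number(nx.star_graph(5)))  # the center alone suffices
```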
A cut edge is an edge which, when removed, increases the number of components of G. A biconnected component, or block, of G is a maximal subgraph of G which has no cut vertices. The disjoint union of sets A and B will be denoted A ∪̇ B. A chronological list of forces F associated with a power dominating or zero forcing set S of a graph G is a sequence of forces applied to color V(G) in the order they are applied. A forcing chain for a chronological list of forces is a maximal sequence of vertices (v_1, . . . , v_k) such that the force v_i → v_{i+1} is in F for 1 ≤ i ≤ k − 1. Each forcing chain produces a distinct path in G, one of whose endpoints is in S; we will say this endpoint initiates the forcing chain.

We also recall some terminology and notation from [15] which will be used in the sequel. Let G = (V, E) ≇ P_n be a graph and v be a vertex of degree at least 3. A pendant path attached to v is a maximal set P ⊂ V such that G[P] is a connected component of G − v which is a path, one of whose ends is adjacent to v in G. The neighbor of v in P will be called the base of the path, and p(v) will denote the number of pendant paths attached to v ∈ V. We will also say that p(u) = 1 if u is a cut vertex which belongs to a pendant path. Similarly, a pendant tree attached to v is a set T ⊂ V composed of the vertices of a connected component of G − v which is a tree, and which has a single vertex adjacent to v in G. Finally, for a connected graph G = (V, E) ≇ P_n, define the sets R_1(G), R_2(G), and R_3(G) as in [15], and let M(G) = R_2(G) ∪ R_3(G). When there is no scope for confusion, the dependence on G will be omitted. Note that the sets R_1, R_2, and R_3 partition the set of cut vertices of G. For convenience, we will say that M(P_n) = ∅. For other graph theoretic terminology and definitions, we refer the reader to [11].

Structural results

We first present two technical lemmas about vertices which are contained in every connected power dominating set, and vertices which are not contained in any minimum connected power dominating set.

Lemma 1. Let G be a connected graph and let R be a connected power dominating set of G. Then M(G) ⊆ R.

Proof. Let v be a cut vertex of G which is in R_3(G). If two or more components of G − v contain vertices of R, then since R is connected and since any path between two vertices from different components of G − v also contains v, R must contain v. Now suppose all vertices of R are contained in a single component G_1 of G − v. If v is not contained in R, then no vertex outside G_1 ∪ {v} can be dominated or forced at any timestep, since v will have at least two uncolored neighbors in the other components of G − v; this is a contradiction. Thus, every vertex of R_3 must be in R. Now let v be a cut vertex of G which is in R_2(G). If both components of G − v contain vertices of R, then by the same argument as above, R must contain v. Now suppose that all vertices of R are contained in a single component G_1 of G − v. If v is not contained in R, then since the other component of G − v is not a pendant path of G, it cannot be forced by a single forcing chain passing through v, a contradiction. Thus, every vertex of R_2 must be in R. Since M(G) = R_2(G) ∪ R_3(G), the claim follows.

Lemma 2. Let G be a graph different from a path. Then, no minimum connected power dominating set of G contains a leaf of G.

Proof. Suppose for contradiction that there is a minimum connected power dominating set S of G which contains a leaf ℓ of G. Since G is not a path, S must contain another vertex u besides ℓ. Since S is connected, it must include the neighbor v of ℓ, since every path between u and ℓ passes through v.
However, S \ {ℓ} is also a connected power dominating set of G, since v can dominate ℓ in the first timestep, and removing ℓ does not disconnect G[S]. This contradicts the minimality of S.

Cut vertex decomposition

A trivial block is a block consisting of two vertices, both of which belong to the same pendant path. A nontrivial block is a block B which is not trivial, together with all pendant paths attached to vertices in B. In this section, we present a technique for computing the connected power domination numbers of graphs with cut vertices in terms of the connected power domination numbers of their nontrivial blocks. We first introduce several definitions and lemmas which will be used in the main result of this section; some of these are adapted from [12,32].

Definition 1. Let G = (V, E) be a graph and let X ⊆ V. A set S ⊆ V(G) is a connected power dominating set of G subject to X if S is a connected power dominating set of G and X ⊆ S. The size of a minimum connected power dominating set subject to X is denoted γ_P,c(G; X).

The following observation is well-known in the power domination literature. It is often stated in terms of leaves, but remains valid for pendant paths.

Observation 2. For any graph G, there exists a minimum power dominating set of G that contains every vertex that is incident to two or more leaves.

Given a graph G = (V, E) and a set X ⊆ V, define ℓ_r(G, X) as the graph obtained by attaching r leaves to each vertex in X. The following result establishes a relationship between γ_P,c(G; X) and ℓ_3(G, X).

Proposition 1. For any graph G = (V, E) and any set X ⊆ V, a set S is a minimum connected power dominating set of G subject to X if and only if S is a minimum connected power dominating set of ℓ_3(G, X).

Proof. Let S be a minimum connected power dominating set of G subject to X. Then, S can dominate all the added leaves in ℓ_3(G, X) in the first timestep, and then force the rest of the vertices of V(G) as in G. Hence S is a connected power dominating set of ℓ_3(G, X). Suppose there is a minimum connected power dominating set S′ of ℓ_3(G, X) with |S′| < |S|. By Lemma 1, S′ contains all vertices in X. Moreover, since S′ is minimum, it does not contain any of the added leaves of ℓ_3(G, X). Thus, S′ is a connected power dominating set of G subject to X, a contradiction to S being minimum. The other direction follows analogously.

Let µ(v) denote the number of nontrivial blocks a vertex v belongs to.

Theorem 1. Let G be a connected graph different from a path, let B_1, . . . , B_k be the nontrivial blocks of G, and for 1 ≤ i ≤ k let A_i = M(G) ∩ B_i. Then

γ_P,c(G) = ∑_{i=1}^{k} γ_P,c(B_i; A_i) − ∑_{v ∈ M(G)} (µ(v) − 1).    (1)

Proof. Let S_i be an arbitrary minimum connected power dominating set of B_i subject to A_i. By Proposition 1, S_i is also a minimum connected power dominating set of ℓ_3(B_i, A_i). We claim that S := ∪_{i=1}^{k} S_i is a minimum connected power dominating set of G. Let u and v be two arbitrary vertices in S. Without loss of generality, suppose u and v are not in M(G); the case when one or both of u and v are in M(G) is handled analogously. If u and v belong to the same nontrivial block B_i, then since S contains S_i, and S_i is a connected power dominating set containing u and v, there is a path between u and v in G[S]. Otherwise, let B_{i_0} and B_{i_p} respectively be the blocks containing u and v. Let B_{i_0}, a_{i_1}, B_{i_1}, . . . , a_{i_{p−1}}, B_{i_{p−1}}, a_{i_p}, B_{i_p} be the path between B_{i_0} and B_{i_p} in the block tree of G, where a_{i_1}, . . . , a_{i_p} are the cut vertices; note that by definition, each of these vertices is in M(G). Then since S contains S_{i_0}, and S_{i_0} is a connected power dominating set containing u and a_{i_1}, there is a path between u and a_{i_1} in G[S].
Let µ(v) denote the number of nontrivial blocks a vertex v belongs to.

Proof. Let S_i be an arbitrary minimum connected power dominating set of B_i subject to A_i. By Proposition 1, S_i is also a minimum connected power dominating set of ⊕_3(B_i, A_i). We claim that S := ∪_{i=1}^{k} S_i is a minimum connected power dominating set of G. Let u and v be two arbitrary vertices in S. Without loss of generality, suppose u and v are not in M(G); the case when one or both of u and v are in M(G) is handled analogously. If u and v belong to the same nontrivial block B_i, then since S contains S_i, and S_i is a connected power dominating set containing u and v, there is a path between u and v in G[S]. Otherwise, let B_{i_0} and B_{i_p} respectively be the blocks containing u and v. Let B_{i_0}, a_{i_1}, B_{i_1}, . . . , a_{i_{p−1}}, B_{i_{p−1}}, a_{i_p}, B_{i_p} be the path between B_{i_0} and B_{i_p} in the block tree of G, where a_{i_1}, . . . , a_{i_p} are the cut vertices; note that by definition, each of these vertices is in M(G). Then since S contains S_{i_0}, and S_{i_0} is a connected power dominating set containing u and a_{i_1}, there is a path between u and a_{i_1} in G[S]. Likewise, there is a path between a_{i_j} and a_{i_{j+1}} for 1 ≤ j ≤ p − 1, and there is a path between a_{i_p} and v. Thus, in all cases, there is a path between u and v in G[S], so S is a connected set. Moreover, S is a power dominating set, since by construction each S_i can dominate the corresponding nontrivial block of G (because no uncolored vertex within a nontrivial block has uncolored neighbors outside that block). Finally, it remains to be shown that S is minimum. Suppose S′ is a connected power dominating set of G with |S′| < |S|. Thus, there is some i such that |S′ ∩ B_i| < |S ∩ B_i| = |S_i|. However, by Lemma 1, S′ contains all vertices in M(G), and hence S′ ∩ B_i must contain A_i; this contradicts the minimality of S_i. It follows that S is a minimum connected power dominating set of G. Thus, γ_P,c(G) = |S| is as claimed in (1).

Vertex and edge operations

In this section, we explore the effects of various vertex and edge operations on the connected power domination number. The effects of operations such as deletion and contraction of vertices and edges have been studied for other parameters as well. For example, the power domination spread of a vertex v and edge e is defined as γ_P(G; v) = γ_P(G) − γ_P(G − v) and γ_P(G; e) = γ_P(G) − γ_P(G − e), respectively. The analogously defined rank spread r, zero forcing spread z, path spread p, and connected forcing spread z_c have also been defined as parameters which respectively describe the change in the minimum rank, zero forcing number, path cover number, and connected forcing number of a graph when a vertex v or edge e is deleted. In particular, bounds on these spreads have been established in [26,33], [8,43], [8,9,46], and [15], respectively. We now define the connected power domination spread of a vertex v and edge e as γ_P,c(G; v) = γ_P,c(G) − γ_P,c(G − v) and γ_P,c(G; e) = γ_P,c(G) − γ_P,c(G − e), respectively. Since a disconnected graph cannot have a connected power dominating set, in the definitions above we will restrict v to be a non-cut vertex and e to be a non-cut edge. We now show that the connected power domination spread of a vertex or edge can be arbitrarily large. From Proposition 2, it follows that the connected power domination number can be arbitrarily increased or decreased with the addition or deletion of a single vertex or edge. Furthermore, the connected power domination number is not monotone with respect to either operation. We next show that the connected power domination number can be arbitrarily increased or decreased with the contraction of an edge, and it can be arbitrarily increased, but not decreased, with the subdivision of an edge. Given a graph G and edge e in G, let G : e denote the graph obtained by subdividing e in G.

Figure 1: Deleting a vertex or edge, and contracting an edge, can change the connected power domination number arbitrarily. Subdividing an edge can arbitrarily increase, but not decrease, the connected power domination number.

Proposition 3. For any integer c > 0, there exist graphs G_1, G_2, and G_3 and edges e_1 ∈ G_1, e_2 ∈ G_2, and e_3 ∈ G_3 such that contracting e_1 increases the connected power domination number of G_1 by at least c, contracting e_2 decreases the connected power domination number of G_2 by at least c, and subdividing e_3 increases the connected power domination number of G_3 by at least c.

Proof. Let C = (V, E) be a cycle on 2c + 1 vertices, let uv be an edge of C, and let x be a vertex of C at maximum distance from u and v. See Figure 1, right, for an illustration.
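Claims of this kind can be verified on small examples by exhaustive search. The sketch below computes γ_P,c by trying candidate sets in increasing size; it assumes networkx and re-implements the propagation check shown earlier so that it runs on its own. It is an illustration only, not code from the paper.

```python
import itertools
import networkx as nx

def power_dominates(G, S):
    colored = set(S)
    for v in S:
        colored.update(G.neighbors(v))
    changed = True
    while changed:
        changed = False
        for v in list(colored):
            unc = [u for u in G.neighbors(v) if u not in colored]
            if len(unc) == 1:
                colored.add(unc[0])
                changed = True
    return len(colored) == G.number_of_nodes()

def gamma_pc(G):
    """Brute-force connected power domination number of a connected graph G."""
    nodes = list(G.nodes())
    for k in range(1, len(nodes) + 1):
        for S in itertools.combinations(nodes, k):
            if nx.is_connected(G.subgraph(S)) and power_dominates(G, S):
                return k
    return len(nodes)

# Small sanity checks: a path and a cycle both have connected power domination number 1.
print(gamma_pc(nx.path_graph(6)), gamma_pc(nx.cycle_graph(6)))   # 1 1
```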
Proposition 4. Let G be a connected graph and uv be an edge in G. Then γ_P,c(G : uv) ≥ γ_P,c(G), and this bound is tight.

Proof. Let w be the new vertex introduced after subdividing edge uv in G : uv. Let R be a minimum connected power dominating set of G : uv and fix a chronological list of forces associated with R. We will show that R or R\{w} is also a connected power dominating set of G. Suppose first that neither u nor v is contained in R. Since R is connected, w cannot be in R; thus, R remains connected in G. Moreover, in G : uv, w is either forced by u or v, say u, and v is forced either by w or by some other vertex. The same chronological list of forces remains valid in G, except that if w forces v in G : uv, u forces v in G. Thus, R is a connected power dominating set of G. Next, suppose that exactly one of u and v, say u, is contained in R. Then R remains connected in G, u can dominate v in G, and all other forces associated with R in G : uv remain valid in G. Thus, R is a connected power dominating set in G. Finally, suppose that both u and v are contained in R. If w is also contained in R, then R\{w} is a connected set in G, u can dominate v in G, and all other forces associated with R in G : uv remain valid in G. Similarly, if w is not contained in R, then R remains connected in G, and all forces associated with R in G : uv remain valid in G. It follows that γ_P,c(G) ≤ γ_P,c(G : uv). This bound holds with equality, e.g., when any edge of a path P_n is subdivided.

NP-completeness of connected power domination

In this section, we show that computing the connected power domination number of a graph is NP-complete. To begin, we state the decision version of this problem.

PROBLEM: Connected power domination (CPD)
INSTANCE: A simple undirected graph G = (V, E) and a positive integer k ≤ |V|.
QUESTION: Does G contain a connected power dominating set S of size at most k?

Theorem 2. CPD is NP-complete.

Proof. Given a graph G = (V, E) and a set S ⊂ V, clearly it can be verified in polynomial time that S is a power dominating set, that G[S] is connected, and that |S| ≤ k. Thus, CPD is in NP. For our reduction, we will use the problem of zero forcing, which was proved to be NP-complete in [2,50]. The decision version of zero forcing is stated below.

PROBLEM: Zero forcing (ZF)
INSTANCE: A simple undirected graph G = (V, E) and a positive integer k ≤ |V|.
QUESTION: Does G contain a zero forcing set S of size at most k?

Clearly, G′ can be constructed from G in polynomial time, so f is a polynomial transformation. See Figure 2 for an illustration of G and G′.

Figure 2: Obtaining G′ from G.

We will now prove the correctness of f. Suppose I = ⟨G, k⟩ is a 'yes' instance of ZF, i.e., that G = (V, E) has a zero forcing set S = {v_i : i ∈ J} where J ⊂ {1, . . . , n} is some index set of size at most k. We claim that S′ := {p_{i,n} : i ∈ J} ∪ {v*} is a connected power dominating set of G′. To see why, first note that since v* is adjacent to every vertex in S′\{v*}, G′[S′] is connected. Next, note that v* can dominate all vertices in {p_{i,n}, q_{i,n} : 1 ≤ i ≤ n} ∪ {ℓ_1, ℓ_2} at the first timestep, and the vertices in {p_{i,n} : i ∈ J} can dominate the vertices in {p_{i,n−1} : i ∈ J} ∪ {w_i : i ∈ J} in the first timestep. Then, the vertices in {q_{i,j}, u_i : 1 ≤ j ≤ n − 1, 1 ≤ i ≤ n} can be colored by forcing chains initiated at q_{i,n}, 1 ≤ i ≤ n. Moreover, the vertices in {p_{i,j} : 1 ≤ j ≤ n − 1, i ∈ J} can be colored by forcing chains initiated at p_{i,n}, i ∈ J. When all these vertices are colored, the vertices in {u_i : i ∈ J} will each have a single uncolored neighbor, and hence will be able to force the vertices in {v_i : i ∈ J}. At that point, since {v_i : i ∈ J} is a zero forcing set of G and since all neighbors of vertices in V(G) are colored, all vertices in V(G) can get forced.
Finally, the remaining vertices of G′ can be forced as well, so S′ is a connected power dominating set of G′ of size at most k + 1, and f(I) is a 'yes' instance of CPD.

Conversely, suppose f(I) = ⟨G′, k + 1⟩ is a 'yes' instance of CPD, i.e., that G′ has a connected power dominating set of size at most k + 1. Let S′ be a minimum connected power dominating set of G′. By Lemma 1, v* is in S′. By Lemma 2, no leaf of G′ is in S′; thus, none of ℓ_1, ℓ_2, and w_i can be in S′, since the shortest path between v* and any of these vertices passes through more than n + 1 vertices, v* is in S′, S′ is connected, and |S′| ≤ k + 1 ≤ n + 1. If S′ contains a vertex q_{i*,j*} for some 1 ≤ i*, j* ≤ n, then it must also contain all vertices in {q_{i*,j} : j* < j ≤ n}. Note that q_{i*,j*} can dominate or force only vertices in the path G′[{q_{i*,j} : 1 ≤ j ≤ n} ∪ {u_{i*}}]. However, since v* dominates q_{i*,n} and q_{i*,n} can initiate a forcing chain which colors all vertices in this path, we may assume that S′ contains no such vertices. By the same argument as above, the vertices in S′ dominate or force the corresponding vertices of V(G′)\V(G). At the timestep when all these vertices get colored, no vertex in V(G′)\V(G) can perform a force; the only way a vertex p_{i,j}, 1 ≤ i ≤ n, i ∉ J, 1 ≤ j ≤ n − 1 can get colored is for v_i to get colored and then for u_i to initiate a forcing chain containing p_{i,j}. Moreover, the only way a vertex in V(G) can get colored is if it is forced by some other vertex in V(G). It follows that {v_i : i ∈ J} must be a zero forcing set of G, since otherwise some v_i ∈ V(G) will never be colored, contradicting that S′ is a connected power dominating set. Thus, if f(I) is a 'yes' instance of CPD, then I is a 'yes' instance of ZF.

Characterizations of γ_P,c for specific graphs

While the connected power domination number is NP-hard to compute in general, in this section we show that the connected power domination numbers of cactus and block graphs can be computed efficiently.

Trees

We begin with a closed formula for the connected power domination number of trees, and characterize the trees whose connected power domination number equals their power domination number.

Theorem 3. Let T be a tree. If T is a path, then γ_P,c(T) = 1; otherwise, γ_P,c(T) = |M(T)|. Moreover, a minimum connected power dominating set of T can be found in O(n) time.

Proof. By definition, M(T) = ∅ if and only if T is a path; in this case, γ_P,c(T) = 1 and any vertex of the path is a minimum connected power dominating set. Assume henceforth that T is not a path; we will show that M is a minimum connected power dominating set of T. All vertices of T with degree at least 3 are in R_3, and all vertices of T which have degree 2 and do not belong to pendant paths are in R_2. Thus, any vertex of T which is not initially colored belongs to some pendant path, and all other vertices of that pendant path are also initially uncolored. Since deleting all vertices of a pendant path does not disconnect the graph, M is a connected set. Next, let v be a vertex to which a pendant path is attached. Since v is in R_3 and therefore in M, v can dominate all its neighbors in the first timestep; then, the base of each pendant path can initiate a forcing chain which forces the whole path. Thus, M is a connected power dominating set of T, so γ_P,c(T) ≤ |M|. Moreover, by Lemma 1, every minimum connected power dominating set of T contains M, so γ_P,c(T) ≥ |M|, and hence γ_P,c(T) = |M|. The vertices in R_1(T) can be found in O(n) time (e.g. by starting from the degree 1 vertices of the graph and applying depth-first-search until a vertex of degree at least 3 is reached). Thus, the set M(T) = V(T)\R_1(T) can also be found in O(n) time.
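The O(n) procedure just sketched, walking in from each leaf until a vertex of degree at least 3 is reached and keeping everything that was not visited, can be written down directly. The following sketch assumes networkx and a tree given as an nx.Graph; it is an illustration of that idea, not code from the paper.

```python
import networkx as nx

def tree_connected_pd_set(T):
    """For a tree T, return a minimum connected power dominating set:
    a single vertex if T is a path, otherwise the vertices that do not
    lie on pendant paths (the set M(T) of Theorem 3)."""
    if all(d <= 2 for _, d in T.degree()):            # T is a path
        return {next(iter(T.nodes()))}
    on_pendant_path = set()
    for leaf in [v for v, d in T.degree() if d == 1]:
        v, prev = leaf, None
        while T.degree(v) < 3:                        # walk inward along the pendant path
            on_pendant_path.add(v)
            nxt = [u for u in T.neighbors(v) if u != prev]
            prev, v = v, nxt[0]
    return set(T.nodes()) - on_pendant_path

# Example: a "spider" with three legs of length 2 attached to vertex 0.
T = nx.Graph([(0, 1), (1, 2), (0, 3), (3, 4), (0, 5), (5, 6)])
print(tree_connected_pd_set(T))   # {0}
```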
Since the set M(G) is uniquely determined, we have the following corollary to Theorem 3.

Corollary 1. Let T be a tree different from a path. Then M(T) is the unique minimum connected power dominating set of T.

Proposition 5. Let T be a tree. Then γ_P,c(T) = γ_P(T) if and only if 1) T is a path, or 2) every vertex of T of degree 2 belongs to a pendant path and every vertex of T of degree at least 3 has at least two pendant paths attached to it.

Proof. It is easy to see that if T is a path, γ_P,c(T) = γ_P(T); thus, we will assume henceforth that T is not a path. Let T be a tree such that γ_P,c(T) = γ_P(T), and let S be a minimum connected power dominating set of T. If T has a vertex v with d(v) = 2 which is not in a pendant path, v must be in R_2 and therefore in S. However, S\{v} is clearly also a (non-connected) power dominating set since any neighbor of v can force v, so γ_P(T) < γ_P,c(T). Similarly, if T has a vertex v with d(v) ≥ 3 which is adjacent to 1 or 0 pendant paths, we claim that S\{v} is a power dominating set. Indeed, since v has at least 3 neighbors, at most one of which belongs to a pendant path, v has a neighbor in R_2 or R_3, and can be dominated by that neighbor. Moreover, the pendant path attached to v (if it exists) can be forced by v in later timesteps since all other neighbors of v are colored. Thus, it follows that γ_P(T) < γ_P,c(T). Now suppose T is a (non-path) tree which satisfies condition 2) in Proposition 5. Since all degree 2 vertices of T belong to pendant paths, R_2(T) = ∅. Thus, by Theorem 3, M(T) = R_3(T) is a minimum connected power dominating set of T. Moreover, since every vertex of degree at least 3 has at least two pendant paths attached to it, by Observation 2, there exists a minimum power dominating set which contains every vertex in R_3. Thus, γ_P(T) ≥ |R_3| = γ_P,c(T) ≥ γ_P(T), so γ_P,c(T) = γ_P(T).

Block graphs

We now extend the result of Theorem 3 to block graphs. A block graph is a graph whose biconnected components are cliques.

Theorem 4. Let G be a connected block graph. If M(G) = ∅, then γ_P,c(G) = 1; otherwise, M(G) is a minimum connected power dominating set of G, and γ_P,c(G) = |M(G)|.

Proof. If M(G) = ∅, then either G ≅ P_n, or G ≇ P_n and all cut vertices of G are in R_1(G). In the latter case, G consists of a single clique K of size at least 3, and at most one pendant path attached to each vertex of K. Then, a single vertex of K is a (connected) power dominating set of G, since it can dominate all vertices of K in the first timestep, and then any pendant paths attached to vertices of K can be forced by their respective bases. Thus, if M(G) = ∅, γ_P,c(G) = 1. Now suppose that M(G) ≠ ∅; we claim that M(G) is a connected power dominating set of G. Note that in a block graph, the shortest path between two vertices is unique; in particular, the shortest path between two cut vertices in M(G) is entirely composed of other cut vertices in M(G). Thus, M(G) is a connected set. If v belongs to a maximal clique K of G of size 2 which is part of a pendant tree T, then v is either in M(G), or is in a pendant path and gets forced by M(G) ∩ T by Theorem 3. If v belongs to a maximal clique K of G of size 2 which is not part of a pendant tree, then v is in R_2(G) (and hence in M(G)) and is therefore initially colored. Now suppose v belongs to a clique K of G of size at least 3. Since M(G) ≠ ∅, every clique of G of size at least 3 contains at least one vertex of M(G). Thus, a vertex in M(G) ∩ K can dominate v in the first timestep. Since v gets colored in every case, M(G) is a connected power dominating set of G. By Lemma 1, every connected power dominating set of G contains M(G), so M(G) is a minimum connected power dominating set of G.

Cactus graphs

In this section, we give a linear-time algorithm for finding a minimum connected power dominating set of a cactus graph. A cactus graph is a graph in which any two cycles have at most one common vertex; equivalently, it is a graph whose biconnected components are cycles or cut edges. We first establish some results which are applicable to arbitrary graphs containing a cycle block.
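Both characterizations are driven by the biconnected component structure of the graph: cliques for block graphs, and cycles or cut edges for cactus graphs. As a concrete illustration, the two classes can be recognized as follows; this sketch assumes networkx and is not taken from the paper.

```python
import networkx as nx

def is_block_graph(G):
    """Every biconnected component induces a clique."""
    for comp in nx.biconnected_components(G):
        H = G.subgraph(comp)
        k = len(comp)
        if H.number_of_edges() != k * (k - 1) // 2:
            return False
    return True

def is_cactus(G):
    """Every biconnected component is a single edge or a cycle
    (a 2-connected graph on k >= 3 vertices with exactly k edges is a cycle)."""
    for comp in nx.biconnected_components(G):
        H = G.subgraph(comp)
        k = len(comp)
        if k > 2 and H.number_of_edges() != k:
            return False
    return True

G = nx.cycle_graph(5)
G.add_edge(0, 5); G.add_edge(5, 6)        # a cycle with a pendant path attached
print(is_block_graph(G), is_cactus(G))    # False True
```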
Let G be a connected graph and C be the vertex set of a block of G such that G[C] is a cycle. Given vertices u and v of C, let (u → v) be the set of vertices of C (given a plane embedding) encountered while traveling counterclockwise from u to v, not including u and v. We will refer to (u → v) as a segment of C. Note that a segment of the form (u → u) is also well-defined.

Observation 3. Let G = (V, E) be a connected graph and C be the vertex set of a block of G such that G[C] is a cycle. Then, any set R ⊂ V such that G[R] is connected can exclude at most one segment of C. In particular, a connected power dominating set of a graph G with cycle block C can exclude at most one segment of C.

Lemma 3. Let G = (V, E) be a connected graph and C be the vertex set of a block of G such that G[C] is a cycle. A segment of C can be excluded from a connected power dominating set of G if and only if it contains either no cut vertices, or one cut vertex in R_1(G), or two adjacent cut vertices in R_1(G).

Proof. We will first show that if a segment of C is excluded from a connected power dominating set of G, then it contains either no cut vertices, or one cut vertex in R_1(G), or two adjacent cut vertices in R_1(G). Let R be an arbitrary connected power dominating set of G and (u → v) be a segment of C not contained in R. By Lemma 1, M ⊂ R, so each vertex in (u → v) is either a non-cut vertex, or a cut vertex in R_1(G); in the latter case, the entire pendant path attached to the vertex is also not in R since otherwise R could not be connected. Suppose (u → v) contains three distinct cut vertices, p, q, and r, lying on C in this counterclockwise order. Every path from a vertex of R to a vertex in (p → r) passes through p or r. However, once p and r are dominated or forced by some forcing chains starting outside (u → v), each of p and r will have two uncolored neighbors and will not be able to force another vertex. Thus, the vertices in (p → r) cannot be forced; note that (p → r) ≠ ∅ since q ∈ (p → r). This contradicts R being a power dominating set, so (u → v) can contain at most two cut vertices. Similarly, if (u → v) contains two cut vertices p and r which are not adjacent, then the vertices in (p → r) cannot be forced. Thus, if (u → v) contains two cut vertices, they must be adjacent. Now let (u → v) be any segment of C which contains either no cut vertices, or one cut vertex in R_1(G), or two adjacent cut vertices in R_1(G). We claim that the set S obtained by removing (u → v) and all pendant paths attached to (u → v) from V is a connected power dominating set of G. Indeed, deleting the vertices in (u → v) from G together with all pendant paths attached to (u → v) does not disconnect G, so S is a connected set. Moreover, note that by definition, a segment cannot include all vertices of C; thus, the remaining vertices (or vertex) in C outside (u → v) can initiate two forcing chains which color (u → v) and the pendant paths attached to (u → v). Thus, (u → v) can be excluded from a connected power dominating set of G.

Let G = (V, E) be a connected graph and C be the vertex set of a block of G such that G[C] is a cycle. We will say a segment (u → v) of C is feasible if it can be excluded from a connected power dominating set of G. We will denote by s(C) the maximum size of a feasible segment of C. More precisely, in view of Lemma 3, s(C) can be defined as follows. Let {p_1, . . . , p_k} be the set of cut vertices in C in counterclockwise order. For j ∈ {0, 1, 2}, if k ≤ j, let S_j(C) = ∅; otherwise, define S_j(C) as follows, with i read modulo k: S_0(C) consists of the segments (p_i → p_{i+1}), 1 ≤ i ≤ k; S_1(C) consists of the segments (p_{i−1} → p_{i+1}) such that p_i ∈ R_1(G); and S_2(C) consists of the segments (p_{i−1} → p_{i+2}) such that p_i and p_{i+1} are adjacent and both belong to R_1(G). Let S(C) = S_0(C) ∪ S_1(C) ∪ S_2(C) and if k > 0, define s(C) = max_{S ∈ S(C)} {|S|}.
Note that S_0(C) contains all maximal (with respect to inclusion) feasible segments which contain no cut vertices, S_1(C) contains all maximal feasible segments which contain one cut vertex in R_1, and S_2(C) contains all maximal feasible segments which contain two adjacent cut vertices in R_1. Thus, by Lemma 3, S(C) contains a feasible segment of maximum size. See Figure 3 for an illustration.

Theorem 5. Let G = (V, E) be a cactus graph, C_1, . . . , C_k be the vertex sets of the cycles of G, and P_1, . . . , P_ℓ be the vertex sets of the pendant paths of G. Then γ_P,c(G) = 1 if G is a path or a cycle, and otherwise γ_P,c(G) = n − Σ_{i=1}^{ℓ} |P_i| − Σ_{i=1}^{k} s(C_i). (2) Moreover, γ_P,c(G) can be computed in O(n) time.

Proof. If G ≅ C_n or G ≅ P_n, clearly γ_P,c(G) = 1; thus, assume henceforth that G ≇ C_n and G ≇ P_n. If G is a tree different from a path, by Theorem 3, γ_P,c(G) = |M(G)| = n − Σ_{i=1}^{ℓ} |P_i|, and (2) holds. If G is not a tree nor a cycle, then each cycle of G has at least one cut vertex, so s(C_i) is well-defined for all i. For 1 ≤ i ≤ k, let S_i be a feasible segment of C_i such that |S_i| = s(C_i). We claim that R* := V \ (∪_{i=1}^{ℓ} P_i ∪ ∪_{i=1}^{k} S_i) is a minimum connected power dominating set of G. Clearly, deleting all pendant paths from G does not disconnect it; also, by Lemma 3, deleting feasible segments from G also does not disconnect G (given that all pendant paths attached to them are also deleted). Thus, R* is a connected set. By Lemma 3, for 1 ≤ i ≤ k, S_i and any pendant paths attached to S_i can be forced by the vertices in C_i\S_i. Moreover, all other pendant paths are attached to vertices in R* which can dominate their bases and thus force the entire paths. Thus, R* is a power dominating set. Now suppose there is a minimum connected power dominating set R′ of G with |R′| < |R*|. The vertices in V can be partitioned into M, ∪_{i=1}^{k} C_i \ M, and ∪_{i=1}^{ℓ} P_i. By Lemma 1, R′ contains all vertices in M; the rest of R′ consists of vertices in ∪_{i=1}^{k} C_i \ M and ∪_{i=1}^{ℓ} P_i. By Observation 3, for 1 ≤ i ≤ k, any vertices of C_i \ M not contained in R′ must form a single segment. Suppose first that R′ includes some vertices of ∪_{i=1}^{ℓ} P_i. The pendant paths containing these vertices cannot be attached to vertices of segments excluded from R′, since then R′ would be disconnected. Thus, these pendant paths are attached to vertices in R′. However, the bases of these pendant paths can be dominated by the vertices in R′ to which they are attached, and then the entire pendant paths can be forced. Moreover, removing all vertices in pendant paths from R′ cannot disconnect the set. Thus, R′ \ ∪_{i=1}^{ℓ} P_i is a smaller connected power dominating set than R′, contradicting the minimality of R′. It follows that R′ does not contain vertices of ∪_{i=1}^{ℓ} P_i. However, since |R′| < |R*|, it follows by the pigeonhole principle that there is some i ∈ {1, . . . , k} for which the segment (u_i → v_i) of C_i excluded from R′ satisfies |(u_i → v_i)| > s(C_i), a contradiction. Thus, R* is a minimum connected power dominating set of G. To verify that the time needed to find γ_P,c(G) is linear in the order of the graph, first note that the set of cut vertices in G, and hence the vertices in M, C_1, . . . , C_k, and P_1, . . . , P_ℓ, can be found in linear time (cf. [49]). Then, the sets of cut vertices in each cycle of G can also be found in linear time, and each of the sets of segments S_0(C_i), S_1(C_i), and S_2(C_i) can be found in linear time for 1 ≤ i ≤ k. Since each collection of sets contains O(n) elements, the set of values s(C_1), . . . , s(C_k), and hence γ_P,c(G), can be found in linear time.
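To illustrate the segment bookkeeping used in the proof, the sketch below computes, for one cycle block given in cyclic order, the sizes of the maximal segments that contain no cut vertices (the family S_0(C)). Handling the R_1 cases behind S_1(C) and S_2(C) would additionally require the pendant-path information, so this is only the first part of the computation, and the function name is ours.

```python
def s0_segment_sizes(cycle_order, cut_vertices):
    """Sizes of maximal runs of consecutive non-cut vertices on the cycle,
    i.e., the sizes of the segments in S_0(C)."""
    cut_positions = [i for i, v in enumerate(cycle_order) if v in cut_vertices]
    if not cut_positions:
        return []                      # no cut vertices on this cycle
    n, sizes = len(cycle_order), []
    for a, b in zip(cut_positions, cut_positions[1:] + [cut_positions[0] + n]):
        sizes.append(b - a - 1)        # vertices strictly between consecutive cut vertices
    return sizes

# A 6-cycle whose vertices 0 and 3 are cut vertices of the larger graph:
print(s0_segment_sizes([0, 1, 2, 3, 4, 5], {0, 3}))   # [2, 2]
```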
6 Integer programming models for γ_P(G) and γ_P,c(G)

In this section, we propose novel integer programming formulations for power domination and connected power domination, and apply them to several power network test cases. The proposed models can also be used to compute the power propagation time of G, i.e., the minimum number of timesteps required to power dominate a graph by a minimum power dominating set. More precisely, for a graph G and a power dominating set S ⊂ V(G), define S[1], S[2], . . . to be the sets of vertices colored after each successive timestep; ppt(G) is then the smallest number of timesteps in which some minimum power dominating set colors all of V(G). A related concept is the ℓ-round power domination number, defined to be the minimum number of vertices needed to power dominate G in power propagation time at most ℓ. The ℓ-round power domination number has been studied in [2,3,28,37]; it is NP-hard to compute even on planar graphs. Our proposed integer programming model can also be used to compute the ℓ-round power domination number of a graph.

Model formulation

Given a graph G = (V, E), we first transform G into a directed graph over the same vertex set, with edges (u, v) and (v, u) for each edge {u, v} of G. For a directed edge e and vertex v, we use the notation e ∈ δ−(v) to indicate that v is the head of e. For each v ∈ V, let s_v ∈ {0, 1} be a decision variable such that s_v = 1 if v is selected to be in the power dominating set, and s_v = 0 otherwise. Let T be the maximum number of propagation timesteps allowed; note that any graph can be colored in at most n timesteps by any power dominating set. For each v ∈ V, let x_v ∈ {0, 1, . . . , T} be an integer variable that indicates the timestep in which v becomes colored. Lastly, for each directed edge e, let y_e = 1 if the tail of e power dominates the head of e, and y_e = 0 otherwise. With this notation, the power domination problem can be formulated as in Model 1.

Theorem 6. For any graph G, when T = n, the optimal value of Model 1 equals γ_P(G).

Proof. Let S be a power dominating set of G = (V, E). Then, for each v ∈ V, either v is in S and thus s_v = 1, or v is colored by some other vertex u of G, i.e., v is the head of some edge uv with y_{uv} = 1. Hence, constraint (3) is satisfied. Let x_v be the timestep at which vertex v gets colored. Since T is the maximum difference between the timesteps in which any two vertices are colored, for any edge e with y_e = 0, constraints (4) and (5) are satisfied. Moreover, note that a vertex which is not in S cannot color another vertex until all-but-one of its neighbors are colored. Thus, for any edge e = (u, v) such that y_e = 1, u must be colored before v, and therefore x_u < x_v. Additionally, x_w < x_v for all neighbors w of u, unless u ∈ S (if u ∈ S, all neighbors of u get colored, regardless of how many neighbors there are). Therefore, constraints (4) and (5) are satisfied, so all constraints are valid for an arbitrary power dominating set S. Now, let (x, y, s) be a feasible solution to Model 1, and let S = {v : s_v = 1, v ∈ V}. Let F be the set of paths formed by the edges for which y_e = 1. For an edge e = (u, v) such that y_e = 1 and s_u = 0, by constraints (4) and (5) there must be some integer x_v ∈ {0, . . . , T} such that x_w + 1 ≤ x_v for w ∈ N(u)\{v}. By interpreting x_w as the timestep in which vertex w is forced, it follows that there must exist some timestep x_v ∈ {0, . . . , T} such that u and all its neighbors except v have been forced in previous timesteps. Thus, u can force v in timestep x_v. Thus, the paths formed by the edges with y_e = 1 are forcing chains of G.
Additionally, by constraint (3) every vertex v is either in S, or has an edge e = (u, v) such that y_e = 1, and thus must get colored at some timestep. Therefore, S is a power dominating set of G and F is a set of forcing chains associated with S. Additionally, since |S| is minimized by the objective of Model 1, S is a minimum power dominating set. The following two corollaries easily follow from the proof of Theorem 6.

Corollary 2. For any graph G, when T = ℓ, the optimal value of Model 1 equals the ℓ-round power domination number.

Corollary 3. For any graph G, ppt(G) can be found by re-running Model 1 O(log n) times, using binary search to find the smallest value of T for which the output is the same as for T = n.

Another integer programming formulation for power domination and ℓ-round power domination is given by Aazami [3]. Fan and Watson [27] have also explored integer programming formulations for power domination, as well as connected power domination. However, their formulations model a slightly different problem than the one typically considered in the power domination literature. In particular, the model proposed in [27], shown in Model 2, has decision variables x_i ∈ {0, 1}, i ∈ V, for containment in a power dominating set, and p_{ij} ∈ {0, 1}, i, j ∈ V, for whether vertex i can power dominate vertex j. Further, a_{i,j} is the (i, j)-th element of the adjacency matrix of the graph, and Z_i = 1 for i ∈ V if i ∈ V_Z, where V_Z is a set of zero-injection buses. Zero-injection buses are a set of nodes that can perform the zero forcing propagation in the graph. Therefore, in the model of Fan and Watson, the set of zero-injection buses is assumed to be known a priori. If the set V_Z is not known, the model becomes nonlinear, and if it is assumed that all vertices are in V_Z, the optimal solution to the model is x = 0.

Model 2: Power domination (Fan and Watson)

In order to solve the connected power dominating set problem, additional constraints must be added to ensure connectivity of the power dominating set. There are multiple methods for ensuring the connectivity of a set of vertices, such as the standard minimum spanning tree with sub-tour elimination, Miller-Tucker-Zemlin (MTZ) constraints [42], Martin constraints [40], single-commodity flow constraints [25], and multi-commodity flow constraints [25]. Fan and Watson [27] explored the effectiveness of various connectivity constraints applied to their formulation of power domination, and found that MTZ constraints offered the best computational results. Thus, we apply the MTZ constraints to Model 1 in order to solve the connected power domination problem. We used an implementation of these constraints following the method proposed in [45], with a modification as explained in [24].
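For readers who want to experiment, the following sketch sets up a small mixed-integer program in the spirit of the formulation described in the proof of Theorem 6: a selection variable per vertex, a timestep variable per vertex, and a force variable per directed edge, coupled by big-M timing constraints. It uses the PuLP library with the open-source CBC solver; the constraints of the authors' Model 1 may differ in detail, and connectivity constraints (e.g., MTZ) would still have to be added to obtain the connected variant.

```python
import itertools
import networkx as nx
import pulp

def power_domination_ilp(G):
    """Minimum power dominating set via an ILP sketched after Model 1."""
    n = G.number_of_nodes()
    T = n                     # propagation horizon
    M = T + 1                 # big-M for the timing constraints
    arcs = [(u, v) for u, v in itertools.permutations(G.nodes(), 2) if G.has_edge(u, v)]

    prob = pulp.LpProblem("power_domination", pulp.LpMinimize)
    s = {v: pulp.LpVariable(f"s_{v}", cat="Binary") for v in G.nodes()}
    x = {v: pulp.LpVariable(f"x_{v}", lowBound=0, upBound=T, cat="Integer") for v in G.nodes()}
    y = {e: pulp.LpVariable(f"y_{e[0]}_{e[1]}", cat="Binary") for e in arcs}

    prob += pulp.lpSum(s.values())                        # minimize |S|

    for v in G.nodes():                                   # coverage: selected or forced by a neighbor
        prob += s[v] + pulp.lpSum(y[(u, v)] for u in G.neighbors(v)) >= 1

    for (u, v) in arcs:                                   # timing along a force u -> v
        prob += x[u] + 1 <= x[v] + M * (1 - y[(u, v)])
        for w in G.neighbors(u):                          # zero forcing rule, relaxed if u is selected
            if w != v:
                prob += x[w] + 1 <= x[v] + M * (1 - y[(u, v)]) + M * s[u]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {v for v in G.nodes() if s[v].value() > 0.5}

print(power_domination_ilp(nx.path_graph(6)))   # a single vertex
```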
Computational Results

We computed the power domination numbers, connected power domination numbers, and power propagation times of six power graphs from a standard electrical network benchmark dataset [1]. Multiple edges and loops were removed from the graph instances. All integer programming formulations were implemented in Julia 0.6.0 using Gurobi 7.5.2; experiments were run on a 2014 MacBook Pro with a 2.6 GHz Intel Core i5, and 8 GB of 1600 MHz DDR3 RAM. The optimality gap in Gurobi was left as default, with a time limit of 7200 seconds. Computational results are summarized in Tables 1 and 2. Table 1 lists the order, size, power domination number, connected power domination number, and the associated runtimes for each graph in the dataset. T is taken to be equal to |V(G)| for each graph G, in order to ensure the feasibility of the integer program. As can be seen from Table 1, the connected power domination number is greater than the power domination number for the six test graphs. Additionally, for a given T, it is generally faster to compute the power domination number than the connected power domination number. Similar results have been observed in other related problems. For example, while both domination and connected domination are NP-complete [30], the latter is generally harder to solve exactly. This disparity has been attributed to the non-locality of the connected domination problem, since exact algorithms are often unable to capture global properties like connectivity [29]. In some contrast, computational experiments in [13] have shown that algorithms for connected zero forcing are slightly faster than algorithms for zero forcing. Thus, in this aspect, power domination seems to behave more like domination than like zero forcing. In most test cases, the runtimes obtained using Model 1 are lower than the runtimes reported in [27] obtained using Model 2. Computational results for the integer programming model for power domination proposed in [3] are not available for comparison.

Table 1: Computational results for finding minimum power dominating sets using Model 1, and minimum connected power dominating sets using Model 1 with MTZ constraints. Each run was completed with T = |V(G)|, with a time limit of 7200 seconds. For IEEE Bus 300, the reported γ_P,c(G) is the best feasible solution at the point of timeout.

Note that constraints (4) and (5) in Model 1 are disjunctive constraints of big-M form. Thus, in general, the model is expected to perform better if there is a small upper bound on the number of steps required to power dominate G. We explored this further by re-running the models for different values of T ranging between n and ppt(G). Table 2 lists the power propagation time T_p = ppt(G), the value of T associated with the minimum runtime, denoted T_b, the runtimes associated with these values of T, and the average runtime over all T, for power domination and connected power domination. From Table 2, it can be seen that although the best runtimes are not achieved by T_p, they are usually achieved by relatively small values of T (relative to the order of the graph), as expected from the big-M constraints.

Conclusion

In this paper, we presented several structural, algorithmic, and computational results on connected power domination. We explored properties of vertices which are contained in every connected power dominating set, and the effects of certain vertex and edge operations on the connected power domination number. We also gave a formula for computing the connected power domination number of a graph in terms of the connected power domination numbers of its biconnected components. We established the NP-completeness of connected power domination, but showed efficient algorithms for block graphs and cactus graphs. Finally, we gave integer programming models for computing the power domination number, connected power domination number, and power propagation time of a graph, and reported computational results. One direction for future work could focus on refining Theorem 2 and generalizing Theorems 4 and 5.
For example, the power domination problem is NP-complete even for bipartite graphs, chordal graphs, planar graphs, and split graphs [31,32,38]; on the other hand, efficient algorithms are available for interval graphs [38], graphs of bounded treewidth [4,34], and several other families. It would be interesting to determine whether connected power domination is NP-complete or polynomially solvable for these classes of graphs. Another problem of interest is to generalize Theorem 1 to separating sets of larger size. Such theoretical refinements could also be leveraged in general-purpose solution approaches, such as the integer programming models given in Section 6.
2017-12-06T19:42:42.000Z
2017-12-06T00:00:00.000
{ "year": 2017, "sha1": "d45a5a9f1f9c0d7fdc3195cdf56470ea63753990", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1712.02388", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d45a5a9f1f9c0d7fdc3195cdf56470ea63753990", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
261197559
pes2o/s2orc
v3-fos-license
Long-Term Periodic and Conditional Survival Trends in Prostate, Testicular, and Penile Cancers in the Nordic Countries, Marking Timing of Improvements Simple Summary Male cancers include common prostate cancer (PC) and the much rarer testicular (TC) and penile cancers. Recent survival data for these cancers are relatively good, but long-term studies are rare. To analyzed relative survival in these cancers, we used the NORDCAN database with information from Denmark, Finland, Norway, and Sweden over a 50-year period (1971–2020). Survival improved early for TC, and 5-year survival reached 90% after 1985. Towards the end of the follow-up, TC patients who had survived the 1st year survived the next 4 years with a comparable probability to the background population. For PC, 90% survival was reached after 2000. For penile cancer, 5-year survival never reached 90%, and the improvements in survival were modest at best. As conclusions, more than 90% of the patients diagnosed with PC and TC are alive 5 years later compared to men in general. For penile cancer, mortality is higher, and early symptoms should be discussed with the doctor. Abstract Survival studies are important tools for cancer control, but long-term survival data on high-quality cancer registries are lacking for all cancers, including prostate (PC), testicular (TC), and penile cancers. Using generalized additive models and data from the NORDCAN database, we analyzed 1- and 5-year relative survival for these cancers in Denmark (DK), Finland (FI), Norway (NO), and Sweden (SE) over a 50-year period (1971–2020). We additionally estimated conditional 5/1-year survival for patients who survived the 1st year after diagnosis. Survival improved early for TC, and 5-year survival reached 90% between 1985 (SE) and 2000 (FI). Towards the end of the follow-up, the TC patients who had survived the 1st year survived the next 4 years with comparable probability to the background population. For PC, the 90% landmark was reached between 2000 (FI) and after 2010 (DK). For penile cancer, 5-year survival never reached the 90% landmark, and the improvements in survival were modest at best. For TC, early mortality requires attention, whereas late mortality should be tackled for PC. For penile cancer, the relatively high early mortality may suggest delays in diagnosis and would require more public awareness and encouragement of patients to seek medical opinion. In FI, TC and penile cancer patients showed roughly double risk of dying compared to the other Nordic countries, which warrants further study and clinical attention. 
Introduction

Global survival in many types of cancers has developed favorably over the last decades [1][2][3]. From the Nordic countries with their long-term traditions of cancer registration, survival data are available over a half century, confirming the long-term success in cancer control, which, however, varies by cancer type [4]. Although early diagnosis and treatment are considered key determinants of survival, there are numerous factors that directly or indirectly influence survival, including patient care, age, and comorbidities [5]. Interpretation of survival data may be complicated even if changes in incidence can be excluded as a contributing factor [6]. If survival improves shortly after the introduction of a novel therapy, the causal relationship is likely. Examples of undisputed therapeutic gains were the introduction of cisplatin-based therapy for testicular cancer (TC) and imatinib for chronic myeloid leukemia [7,8]. Metastatic cancers confer poor survival, which should be seen as a low 1-year survival. The likelihood of metastatic spread is low in tumors diagnosed early, and thus an alert population, a well-functioning health care system, and improvements in imaging techniques facilitate early detection, which should be seen as improving 1-year survival [4,9]. Even 5-year survival should consequently increase, but 5-year survival alone is not able to point out the time of the improvement without data on 1-year survival [9,10]. Conditional 5/1-year survival describes the survival experience until year 5 in those who survived year 1 and indicates death rates between years 1 and 5 after diagnosis [11]. Another related measure is the difference between 1- and 5-year survival estimates, which is small for cancers of good survival [11,12].

We will assess periodic relative survival in male cancers from Denmark (DK), Finland (FI), Norway (NO), and Sweden (SE) from 1971 to 2020. Risk factors for these cancers are best known for penile cancer, for which human papilloma virus (HPV) infection is the major cause [13]. The origins of TC are thought to lie in the embryonic period, and endocrine disruptive chemicals are assumed to be important risk factors [14]. For prostate cancer (PC), smoking is a risk factor associated with aggressive presentation and poor outcome [15]. In the Nordic countries, patient access to health care is guaranteed with minimal costs, which is an important condition for true population-level survival studies, allowing assessment of "real-world" survival experience. Current survival in these cancers is known to range from excellent in TC and PC (5-year survival >90%) to moderate in penile cancer (70%), but long-term survival trends are less known [4]. In addition to the standard 1- and 5-year survival, we show data for conditional 5/1-year survival and differences between 1- and 5-year survival. The purpose of the present study is to characterize long-term trends in country-specific survival, estimated as breakpoints in trends and as annual survival changes, which are discussed in terms of the therapeutic and diagnostic landscape and incidence changes [6]. As background to survival analysis, we show concurrent incidence and mortality data for these countries [6].
Methods

The data were obtained from the NORDCAN database 2.0, originating from the Nordic cancer registries [16,17]. The Nordic cancer registries are population-based with practically complete coverage of cancers and no loss to follow-up until death, end of follow-up, or emigration. The NORDCAN database was accessed at the International Agency for Research on Cancer (IARC)'s website in fall of 2022/winter 2023 (https://nordcan.iarc.fr/en) [18]. Using the NORDCAN tools, we accessed data on incidence, mortality, and 1- and 5-year survival, for which the follow-up was extended until death, emigration, loss of follow-up, or the end of 2020. Incidence and mortality data were age-standardized for the world standard population. For incidence and mortality data, the starting date was 1961 (the earliest available for all countries). Survival data for relative survival were available from 1971 onwards, and the analysis was based on the cohort survival method for the first nine 5-year periods and a hybrid analysis combining period and cohort survival in the last period 2016-2020 [19,20]. Age-standardized relative survival was estimated using the Pohar Perme estimator [11]. Age-standardization was performed by weighting individual observations using external weights, as defined on the IARC website. Age groups 0 to 89 were considered. The DK, FI, NO, and SE life tables were used to calculate the expected survival. As the age distribution in any cancer differed, age adjustment for each was specifically performed using the reference age distribution in each population defined by the International Cancer Survival Standards (ICSSs), with weights for specific age groups (https://nordcan.iarc.fr/en). Incidence and mortality data were obtained from NORDCAN; the starting year 1960 was selected as the first year of data from all countries. It preceded the starting date of survival data but was considered important because it provides unique national incidence data from as early as the 1960s.

Relative 1- and 5-year survival compared survival in cancer patients to the age-adjusted population survival, and 5/1-year survival similarly compared survival between years 1 and 5 after diagnosis. The survival difference between 1- and 5-year relative survival was calculated as 1-year survival % minus 5-year survival %.

For statistical modelling and data visualizations, R statistical software (https://www.r-project.org, accessed in winter 2023) was used in the RStudio environment (https://posit.co/) [21]. Relative survival trends (NORDCAN 5-year periodic %) were generated using Gaussian generalized additive models (GAM) with thin plate regression splines in a Bayesian framework [21]. Methods for the estimation of the conditional relative survival are described elsewhere [21]. Changes in survival trends were estimated through annual % changes and through "breakpoints", which marked times when annual changes in survival could be defined with at least 95% plausibility. These are described in the legends to the figures, and the detailed estimation methods are available in the above paper [21].
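As a small numerical illustration of the derived measures defined above (the exact estimator details are in the linked repository; the values below are made-up, not data from the study), conditional 5/1-year survival can be obtained as the ratio of the 5-year to the 1-year relative survival, and the survival difference as their difference in percentage points.

```python
def conditional_5_over_1(rs1_percent, rs5_percent):
    """Conditional 5/1-year relative survival (%), i.e., survival to year 5
    given survival through year 1, assuming it is the ratio of the two."""
    return 100.0 * (rs5_percent / rs1_percent)

def survival_difference(rs1_percent, rs5_percent):
    """1-year survival % minus 5-year survival %, as defined in the Methods."""
    return rs1_percent - rs5_percent

# Hypothetical example: 1-year relative survival 96%, 5-year 90%.
print(round(conditional_5_over_1(96.0, 90.0), 1))    # 93.8
print(survival_difference(96.0, 90.0))                # 6.0
```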
Time trends of 1- and 5-year relative survival (in %; obtained from NORDCAN for each of the 5-year periods) were modelled using Gaussian generalized additive models (GAM) with thin plate splines (5 knots) and identity links. Models were run in the Bayesian framework using the "brms" R package [22,23], which employed "Stan" software for probabilistic sampling [24]. Separate models were used for different cancers and 1- and 5-year survival. The GAM models included the effect of the country and a country-specific non-linear effect of time (timepoint = middle year of each 5-year period) as predictors, allowing estimation of the relative survival across a continuous time scale despite the discrete distribution of data points. As the input data (estimates of the 1- and 5-year survival in each of the 5-year periods) were variably uncertain, standard errors for each data point (obtained from confidence intervals shown in the NORDCAN database) were included in the model. See https://github.com/filip-tichanek/nord_male for commented R code.

Results

The total numbers of PC, TC, and penile cancers in the Nordic countries in the 50-year period are shown in Table 1. Considering population sizes, the low number of TC cases in FI and the high number of TC cases in DK and NO is noticeable. The median ages of onset are at around 70 years for PC and penile cancer and, for TC, 34 years in FI and 38 years in DK and SE. Data on incidence and mortality in TC, PC, and penile cancer are presented in Figure 1.

Relative survival in the male cancers in DK is shown in Figure 2. For TC, the curves for 1- and 5-year relative survival met each other at around 2005, with the consequence that 5/1-year survival reached close to 100% (Figure 2a). For PC, 1-year survival modestly increased through the follow-up period. The curves for 5- and 5/1-year survival run in parallel, first declined until 1985, turned to a steep increase, which culminated in 2010, and a modest decline followed (Figure 2b). Survival of penile cancer showed a steady increase for all three survival measures (Figure 2c).
In FI, the pattern of survival in TC was similar to the pattern in DK, although the survival in FI was generally lower, and 5-year survival remained below 1-year survival throughout (Figure 3a). Also, survival in PC followed the DK pattern but at a higher level; in FI, there was no initial decline, and the steep increase started 10 years earlier in FI compared to DK (Figure 3b). Survival in penile cancer showed a slight increase, statistically supported only for 5-year survival from 1961 to 1995 (Figure 3c).

In NO, survival patterns for TC and PC resembled those for FI, except that early survival in TC was better in NO than in FI (Figure 4a,b). Survival in penile cancer in NO did not show any clear trend, but all survival plots were higher than in FI (Figure 4c).

In SE, TC survival resembled the DK pattern, and the estimated 1- and 5-year survival curves reached each other before 2010 (Figure 5a). According to Supplementary Table S1, 5-year survival was even slightly higher than 1-year survival during the last 10 years. Survival trends for PC were similar to FI and NO (Figure 5b). Survival in penile cancer did not show any clear trend (Figure 5c).

Supplementary Table S1 points out differences between the countries. Survival in TC was lowest in FI for most of the periods studied: estimated 5-year relative survival in FI exceeded 90% around 2000, i.e., 15 years later than in SE (Figures 2-5). In the case of PC, there was a rapid increase in 5- and 5/1-year survival during the 1980s/1990s in all countries, but the increase started 10 years later for DK (Figures 2-5). In 2016-20, the DK 5-year survival had barely reached 90% compared to 95% for the other countries. In penile cancer, FI showed distinctly lower 5-year (and partially also conditional) survival compared to other Nordic countries: the starting level of 40% for FI was very low, and the final level of below 70% was distinctly below the other countries (Supplementary Table S1).
In Supplementary Table S2, we show the absolute survival differences between 1- and 5-year survival during the 50 years, during which time a large reduction was observed for TC and PC. For TC, the difference at the end of the follow-up was over 2% units in DK and FI and about 0% units in NO and SE. For PC, the remaining difference was about 5% units. For penile cancer, the difference declined in time for DK and FI but not for NO or SE.

Discussion

The use of three different survival metrics and their annual changes allows insight into the timing and the underlying factors boosting survival. The three male cancers displayed distinct distributions in these metrics. Survival improved early for TC, and 5-year survival reached 90% already in 1985 in SE, as the first country, and around 2000 in FI as the last country. For PC, reaching the 90% landmark took longer, as it was reached in FI shortly after 2000 but after 2010 in DK. For penile cancer, 5-year survival never reached the 90% landmark; DK came to 85% but FI remained at 70%. While the improvement in survival was overall similar for TC and PC between the Nordic countries, penile cancer survival improved only for DK and FI with low starting levels. We discuss the three cancers individually below.
TC was the first solid cancer for which high cure rates in a metastatic state were reached, which was mainly associated with the application of cisplatin-based chemotherapy. Clinical trials with cisplatin regimens were conducted in the 1970s, and these were adopted as the standard therapy for advanced TC around 1980 [8,25]. We can see from Figures 2-5 that 5-year survival in TC developed very well from 1971-75 onwards, and the slope for the 5-year survival curve started to bend down after 1980 (later in FI), probably indicating that the 90%+ cure rates with cisplatin therapy were slowly reached. Additionally, treatment and cure have been individually adapted to the patient's needs, and the last 5-year survival in the present study was well over 90% in all countries, reaching 98.8% in SE. In the present survival figures, we could see a remarkable time-dependent approaching of the curves for 1- and 5-year survival (in SE, 5-year survival was as high as 1-year survival during the last 10 years) and a concomitant approach of the 5/1-year conditional survival curve towards 100%. The implication is that TC patients had reached the 5-year survival level of the background population; no TC-related extra deaths occurred after year 1. With the achieved high survival rates, concerns have arisen about the long-term consequences of successful therapy, as second primary cancers and other medical conditions are recorded in TC survivors [26,27]. The situation is analogous to Hodgkin lymphoma, where well-designed chemotherapy achieved high cure rates, but mortality in second primary cancers called for a revision of the applied chemotherapy regimens [28,29].

For PC in all countries, 1-year survival approached 100%, implying that early deaths were rare. However, 5-year survival improved more slowly, and, finally, NO and SE reached almost 95% but DK only 90% (DK survival was significantly lower than that for the other countries). The 5/1-year survival closely followed the 5-year survival, indicating that most deaths occurred in the period past year 1. Opportunistic PSA testing, which started around 1990, introduced incidence changes that complicate the interpretation of the large increase in 5-year survival seen since 1990 [30,31]. Other studies suggested that PSA testing has led to earlier treatments and decreased PC mortality by around 30% compared to the pre-PSA era [32,33]. However, simultaneously, PSA testing has led to significant overtreatment. In DK, the PSA era apparently started later than in the other countries, and, curiously, 5-year survival appeared to decrease before the PSA surge. Another plausible explanation might be national differences in how elevated PSA values were interpreted and when they triggered prostate biopsies. The Nordic countries have national guidelines for diagnostics and treatment of PC, which are adjusted according to the European Society for Medical Oncology (ESMO) and European Association of Urology (EAU) guidelines but, for example, in SE, with some unique recommendations [34][35][36][37]. Generally, risk/stage-adapted therapies and diagnostics are recommended. Active surveillance is preferred in less aggressive PC, whereas more aggressive local cancers should typically be treated with prostatectomy or radiotherapy [34,35]. In more advanced states, androgen deprivation therapy and chemotherapy should be applied, with the addition of novel agents in castration-resistant cases [34,35].
Penile cancer is a rare cancer, and the treatment recommendations in a metastatic state may not be evidence-based [38]. The relatively poor 1-year survival suggests that the diagnosis is made late in the course of the disease. It is known that the prognosis is worsened if more than one inguinal lymph node is affected [38]. Reasons for delayed diagnosis may be prudishness and embarrassment about seeking a medical opinion, lack of urological resources, or simply poor knowledge of this rare but easily visible cancer [39]. Accordingly, living alone and in poor socio-economic conditions is linked to poor prognosis [39,40]. Men with unhealing wounds in the sulcus of the penis or growing tumors should seek an immediate medical opinion. Poor development in survival is shared by female HPV-associated cancers of the cervix, vagina, and vulva [41]. For penile cancer, surgical techniques have improved, and radiotherapy and chemotherapy have additionally been used [42]. In accordance with female HPV-related cancers, immunotherapy may be an option in metastatic penile cancer [43]. Nevertheless, the increase in survival is evidenced only for DK according to our results. The current 5-year survival for FI was only 68.6%, NO and SE around 77%, and DK 85.7%, indicating that FI penile cancer patients have double the risk of dying compared to their DK mates.

We tried to find an explanation for the poor survival in penile cancer in FI. Our present results showed that there was no large difference between the countries in the median diagnostic ages (FI 69 years, DK and NO 70 years, and SE 71 years). According to the Scandinavian Penile Cancer Group, the major difference in treatment was that care was centralized to only two hospitals in DK and SE, whereas NO and FI maintained a decentralized policy, and FI had as many as 20 surgical departments for penile cancer [44]. However, since 2016, treatment in FI has also been centralized to two hospitals. Disease biology may offer a clue to the poor FI survival (country of lowest incidence) and good DK survival (country of highest incidence). As HPV is the main risk factor of penile cancer, one may assume that DK cases are more often HPV-related than the FI cases. Survival in HPV-associated penile cancers is assumed to be better than in cancers not associated with HPV [45]. This is also the case in other HPV-associated cancers, such as oropharyngeal cancer [46].

While we observed differences in the development of survival for the male cancers, the only statistically significant differences for 5-year survival in the final period were for PC, for which DK survival was lower than that in the other countries, and for penile cancer, for which FI survival was lower than that for the best country, DK. For many solid cancers, recent developments in survival have been best for DK and NO and worst for FI among the Nordic countries, for reasons that may be related to health care funding and organization [4,35]. Why DK survival in PC was below the other countries requires further investigation; one contributing factor may lie in the DK treatment guidelines, which recommend antiandrogen monotherapy as primary hormonal therapy for locally advanced, non-metastatic prostate cancer, where curative therapy is not an option [47].
We also observed large incidence differences between the countries, including for penile cancer and TC. As for penile cancer, HPV is the main risk factor, and the high incidence in DK and low incidence in FI is probably related to the prevalence of HPV infections. Similar differences between DK and FI were seen for cervical cancer, another HPV-related cancer [41]. The national incidence differences in TC are not well understood, but the DK rates are known to be some of the highest in the world [48].

The limitations of the present study are the lack of pathological information on the cancers at diagnosis (particularly relevant for PC) and the lack of any treatment information. Another limitation of the NORDCAN data is that it is not possible to carry out age-specific survival analyses. According to the literature, diagnostic age is an important determinant of survival in PC, with elderly men being a disadvantaged group [49,50]. For penile cancer, data are sparse, but survival for old patients is worse than for young patients [51]. For TC, survival is better for seminoma than for nonseminoma, both of which are common subtypes, but nonseminoma is an earlier onset disease [52,53]. Histological data are not available in NORDCAN. The advantage of the NORDCAN data is its uniquely long follow-up time from high-quality cancer registries. It is not feasible to assume that comparable pathological data were available over 50 years, as it has turned out that even the closely collaborating Nordic cancer registries have difficulties in comparing data on tumor characteristics (stage), for example [54].

Conclusions

The three male cancers showed different survival histories in the Nordic countries. Survival rates increased constantly for TC and could reach a population level of survival after year 1 of diagnosis. For PC after 2000, mortality by year 1 was nil, but late mortality requires attention. For penile cancer, no or small survival improvements could be observed over the 50 years, whereas data from DK indicate that progress could be made. The relatively high early mortality may suggest delays in diagnosis, which may be due to medical and social factors in a "neglected" cancer. In FI, TC and penile cancer patients had a 2-fold risk of dying compared to their Nordic mates, which warrants clinical attention.

Figure 1. Incidence (a-c) and mortality (d-f) in male-associated cancers of following localizations: (a,d) testis, (b,e) prostate, and (c,f) penis. The figure was created in R using data from Nordcan. Lines were smoothed via cubic smoothing spline.

Figure 2. Relative 1-, 5/1- (conditional), and 5-year survival in Danish men with (a) testicular, (b) prostate, and (c) penile cancer. The vertical lines mark a significant change in the survival trends ("breaking points"), and the bottom curves show estimated annual changes in survival. The curves are solid if there is >95% plausibility of the growth or decline. Shadow areas indicate 95% credible intervals. All curves are color coded (see the insert).
Relative 1-, 5/1-(conditional), and 5-year survival in Danish men with (a) testicular, (b) prostate, and (c) penile cancer.The vertical lines mark a significant change in the survival trends ("breaking points"), and the bottom curves show estimated annual changes in survival.The curves are solid if there is >95% plausibility of the growth or decline.Shadow areas indicate 95% credible intervals.All curves are color coded (see the insert). Figure 3 . Figure 3. Relative 1-, 5/1-(conditional), and 5-year survival in Finnish men with (a) testicular, (b) prostate, and (c) penile cancer.The vertical lines mark a significant change in the survival trends ("breaking points"), and the bottom curves show estimated annual changes in survival.The curves are solid if there is >95% plausibility of the growth or decline.Shadow areas indicate 95% credible intervals.All curves are color coded (see the insert). Figure 3 . Figure 3. Relative 1-, 5/1-(conditional), and 5-year survival in Finnish men with (a) testicular, (b) prostate, and (c) penile cancer.The vertical lines mark a significant change in the survival trends ("breaking points"), and the bottom curves show estimated annual changes in survival.The curves are solid if there is >95% plausibility of the growth or decline.Shadow areas indicate 95% credible intervals.All curves are color coded (see the insert). Figure 4 . Figure 4. Relative 1-, 5/1-(conditional), and 5-year survival in Norwegian men with (a) testicular, (b) prostate, and (c) penile cancer.The vertical lines mark a significant change in the survival trends ("breaking points"), and the bottom curves show estimated annual changes in survival.The curves are solid if there is >95% plausibility of the growth or decline.Shadow areas indicate 95% credible intervals.All curves are color coded (see the insert). Figure 4 . Figure 4. Relative 1-, 5/1-(conditional), and 5-year survival in Norwegian men with (a) testicular, (b) prostate, and (c) penile cancer.The vertical lines mark a significant change in the survival trends ("breaking points"), and the bottom curves show estimated annual changes in survival.The curves are solid if there is >95% plausibility of the growth or decline.Shadow areas indicate 95% credible intervals.All curves are color coded (see the insert). Figure 5 . Figure 5. Relative 1-, 5/1-(conditional), and 5-year survival in Swedish men with (a) testicular, (b) prostate, and (c) penile cancer.The vertical lines mark a significant change in the survival trends ("breaking points"), and the bottom curves show estimated annual changes in survival.The curves are solid if there is >95% plausibility of the growth or decline.Shadow areas indicate 95% credible intervals.All curves are color coded (see the insert). Figure 5 . Figure 5. Relative 1-, 5/1-(conditional), and 5-year survival in Swedish men with (a) testicular, (b) prostate, and (c) penile cancer.The vertical lines mark a significant change in the survival trends ("breaking points"), and the bottom curves show estimated annual changes in survival.The curves are solid if there is >95% plausibility of the growth or decline.Shadow areas indicate 95% credible intervals.All curves are color coded (see the insert). Table 1 . Case numbers of male cancers in the Nordic countries 1971-2020 and their estimated median ages at onset in 2011-2020.
2023-08-27T15:10:46.726Z
2023-08-25T00:00:00.000
{ "year": 2023, "sha1": "8f2d8800924c4fb9d5f824548abe2a4023ec143d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/15/17/4261/pdf?version=1692960580", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "00678dcaa2f33735c473cb353dfb707b10889180", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
18218125
pes2o/s2orc
v3-fos-license
Finite element simulation of textile materials at the fiber scale A general approach to simulate the mechanical behavior of textile materials by taking into account all their constitutive elementary fibers and the contacts between them is presented in this paper. A finite element code, based on an implicit solver, is developed to model samples of woven fabrics as assemblies of beams submitted to large displacements and developing contact-friction interactions between themselves. Special attention is paid to the detection and modeling of the numerous contacts occurring in this kind of structure. Robust models and efficient algorithms make it possible to consider samples made of a few hundred fibers. The unknown initial configuration of the woven fabric is obtained by simulation according to the chosen weaving pattern, providing useful information on fiber trajectories and tow cross-section shapes. Various loading cases can then be applied to the textile sample, in the presence of an elastic matrix, so as to identify its mechanical behavior under different loadings. Results of simulations performed on coated fabric samples are given to illustrate the approach.

INTRODUCTION Because the global behavior of textile composites depends on local phenomena occurring within the fabric at the scale of tows or fibers, a good understanding of the mechanisms taking place at the scale of fibers is important. The nonlinear behavior of dry fabrics or fabric-reinforced composites often originates in phenomena such as transverse compression of yarns or tows, sliding between tows or fibers, or locking between fibers. Predicting these phenomena is very hard without focusing the analysis at the scale of fibers. A textile composite is made of components at different levels. At a mesoscopic level, it can be seen as an assembly of yarns or tows, coupled with an elastic matrix. Modeling the structure at this mesoscopic level requires the ability to identify the global behavior of tows or yarns. Because it involves nonlinear couplings between loads and strains in different directions due to interactions between fibers, the behavior of yarns is complex to identify. Going down to the scale of fibers makes it possible to bypass the identification of intermediate models at upper levels and to base the global model only on the mechanical properties of the fibers and matrix and on the contact-friction interactions between fibers. Some similar approaches can be found in the literature. At the scale of fibers, for the computation of the initial configuration of braided structures or 3D interlock woven fabrics, digital elements have been used by Miao et al. [1]. Since these digital elements have neither bending nor torsional stiffness, fibers must be tightened to find a solution. Finckh [2] proposed to simulate the weaving process and to apply dynamic loading cases using an explicit solver. Other approaches tackle the problem at the scale of yarns, representing yarns by beams or 3D models and studying interactions between them [3,4,5]. The simulation approach developed in this paper has been previously applied to other kinds of entangled media [6,7]. To introduce the global approach, a first section is dedicated to the beam model employed to represent fibers. The contact modeling, including geometrical and mechanical aspects, is then described. In a third section, the way the initial configuration is computed according to the weaving pattern is explained.
After presenting how an automatically meshed elastic matrix can be added and coupled to the woven fabric, results of different loading tests on dry fabrics and composites, for both plain weave and twill weave, are given.

Introduction Beam models are usually based on the assumption that their cross-sections remain rigid. The motion of these cross-sections can therefore be described by two kinematical fields and six degrees of freedom: one translation of the center of the section, and one rotation. However, these usual models have two main disadvantages: the handling of large rotations requires complex formulations, and they cannot account for deformation of the cross-sections. To overcome these difficulties, we use a richer model described by nine degrees of freedom.

Enriched kinematical beam model The enriched kinematical beam model is based on a Taylor expansion of the placement of the particles of the beam with respect to the line of centroids. If we denote z = (z1, z2, z3) a material particle defined in the reference configuration (see Fig. 1), the placement x(z) of this particle can be expressed as follows: x(z1, z2, z3) = x(0, 0, z3) + z1 ∂1x(0, 0, z3) + z2 ∂2x(0, 0, z3) + o(z1, z2). [1] Denoting respectively x0, g1 and g2 the three kinematical vectors introduced as the first terms of the above expansion, we assume as kinematical model that the placement of any material particle of the beam is expressed as a function of these vectors in the following way: x(z1, z2, z3) = x0(z3) + z1 g1(z3) + z2 g2(z3). [2] With this model, three vectors are used to describe any cross-section of the beam: one vector giving the position of its center, x0, and two other vectors, g1 and g2, called section vectors, which determine the orientation of the section. Corresponding to this model for the placement, we assume that the displacement u(z) of any particle is also expressed by means of three corresponding vectors. Since no conditions are set on the two section vectors, neither in norm nor in angle, plane deformations of the cross-section can be considered by the beam model, in addition to the usual deformations (stretching, torsion, bending and shear).

Strains and constitutive law for the beam model With the above kinematical beam model, because it is able to reproduce plane deformations of the cross-section, no term of the Green-Lagrange strain tensor is constrained to be zero, and fully 3D effects, such as Poisson's effect (strains in transverse directions induced by stretching), can be captured. Because the strain tensor is fully three-dimensional, a usual 3D elastic constitutive law, taking into account strains in all directions, can be used to model the behavior of each beam.

Introduction Contact modeling is a crucial point since contact-friction interactions are what make the behavior of textile structures specific. The main difficulties are related to the fact that contact areas can be numerous and are constantly evolving, as contacts may appear or disappear anywhere and at any time.

Geometrical detection of contact Since we are working in a large displacements framework, and because of the rearrangement of fibers during the simulation, the process of contact detection needs to be regularly repeated during the calculation. For this reason, this process must be efficient and have a low CPU cost. Contact configurations between fibers can be very different depending on their relative positions.
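As a minimal illustration of the enriched kinematics of equation [2] before turning to the detection method, the sketch below (Python/NumPy, with made-up numerical values; this is not the authors' code) evaluates the placement of a particle of a cross-section from the nine degrees of freedom, and shows that a change in the norm or angle of the section vectors produces an in-plane deformation of the section.

```python
import numpy as np

# Minimal sketch (not the authors' code): one cross-section of the enriched
# beam model is described by a center position x0 and two section vectors
# g1, g2, each with 3 components -> 9 degrees of freedom in total.
def placement(x0, g1, g2, z1, z2):
    """Position of the material particle (z1, z2) of a cross-section,
    following x(z1, z2, z3) = x0(z3) + z1*g1(z3) + z2*g2(z3)."""
    return x0 + z1 * g1 + z2 * g2

# Reference (undeformed) section: unit, orthogonal section vectors.
x0_ref = np.array([0.0, 0.0, 0.0])
g1_ref = np.array([1.0, 0.0, 0.0])
g2_ref = np.array([0.0, 1.0, 0.0])

# Deformed section: the section vectors may change in norm and angle,
# so in-plane deformation of the cross-section is allowed.
x0_def = np.array([0.1, 0.0, 0.0])
g1_def = np.array([0.95, 0.05, 0.0])   # shortened and slightly rotated
g2_def = np.array([0.10, 1.08, 0.0])   # stretched and sheared

z1, z2 = 0.5, -0.25   # particle coordinates in the section plane
u = placement(x0_def, g1_def, g2_def, z1, z2) - placement(x0_ref, g1_ref, g2_ref, z1, z2)
print("displacement of the particle:", u)
```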
Very localized and reduced contact areas can be observed at crossings between fibers from two different tows, but there can also be long and continuous contact lines between two neighboring fibers of the same tow. To address these different configurations, we propose a general method to handle contact, based on the generation of contact elements, made of pairs of material particles attached to the surface of fibers, and based on geometrical criteria. Whereas in usual methods contact interactions are determined by fixing a first point on one of the structures and searching for a target point on the opposite geometry, generally using a direction normal to one surface, we propose to define contact elements from an intermediate geometry. This geometry has to be generated for each region where two parts of fibers are close enough and where contact is likely to occur. This way of searching for contact from a third geometry offers a symmetric treatment of the contacting structures. The first step in creating contact elements is to determine, in the global collection of beams, a set of proximity zones (see Fig. 2), defined as pairs of parts of the lines supporting the beam meshes which are close to each other. The determination of proximity zones is performed, for each couple of beams in the assembly, by distributing test points on one of the lines and calculating the nearest point on the opposite line. The intermediate geometry is then defined, for each proximity zone, as the average between the two close parts of lines. Contact elements are generated from this intermediate geometry, using planes orthogonal to this geometry to determine the couples of beam cross-sections that are candidates for contact. Material particles are then accurately located on the border of these cross-sections.

Definition and generation of contact elements We define a contact element Ec(ij)(xG) as a pair of material particles, z(i) and z(j), belonging to two different beams (i) and (j), and predicted to enter into contact at a given location xG of the intermediate geometry: x(z(i)) = x(z(j)) = xG. [4] The equality between positions in the above equation must be understood in the sense of a prediction. The discretization of the contact problem is performed by distributing, on all intermediate geometries in the structure, series of discrete locations where contact elements are created. The fact that the construction of contact elements depends on the geometry of the fibers, and therefore on the displacement solution of the problem, introduces an additional nonlinearity in the formulation, and requires the determination of contact elements to be repeated as the solution evolves.

Addition of a normal contact direction In order to formulate linearized contact conditions, a normal direction, denoted N(Ec(ij)), is associated with each contact element Ec(ij). This direction sets the direction along which the distance between the interacting particles is measured. This contact direction is generally taken as the direction joining the two centers of the cross-sections involved in the contact. Once again, such a definition introduces a nonlinearity in the global formulation since the normal contact direction depends on the solution itself.

Kinematical contact condition In order to prevent interpenetration between beams, kinematical conditions are set for all contact elements. They aim at imposing that the distance between the particles of a contact element, measured along the normal contact direction, remains positive.
This distance, defined as the gap, is expressed for each contact element as follows: gap(Ec(ij)) = (x(z(i)) − x(z(j)), N(Ec(ij))) ≥ 0. [5]

3.5 Constitutive laws for contact and friction A constitutive law for normal contact is considered in order to link the normal reactions to the gaps measured at contact elements. This constitutive law has the form of a penalty law, and is regularized by a quadratic part for very small penetrations in order to stabilize the contact algorithm (see Fig. 6). The penalty coefficient is adjusted for each proximity zone in order to limit the maximum penetration in each zone. As far as friction is concerned, we use a Coulomb law including a small reversible elastic displacement before sliding occurs.

Global algorithm to solve the nonlinear problem To solve the problem for each loading increment, nested loops are introduced to iterate on the different nonlinearities. For each increment, in a first-level loop, iterations are made on the process of contact determination to generate contact elements. Then, these contact elements being fixed, iterations are made at a second level on the normal contact directions. The third-level loop is dedicated to iterations of a Newton-Raphson algorithm on all other nonlinearities (contact status, friction direction, finite strains). Because of these three embedded loops, the total number of iterations may be high if a good rate of convergence is not obtained for the Newton-Raphson algorithm. Model characteristics need to be carefully adjusted for this purpose.

Way of determining the initial configuration One important task of the simulation is to determine the unknown initial geometry of the woven structure. Instead of simulating step by step the actual weaving process, which would require a large amount of CPU time, the initial configuration is computed by starting from an arbitrary flat configuration where crossing tows interpenetrate each other (Fig. 8), and gradually moving fibers until the tows are moved apart from each other at crossings. For each crossing, the weaving pattern provides a superimposition order which indicates which tow should be above the other. The goal of the transient stage of computation of the initial configuration is to make fibers from different tows fulfil this superimposition order at crossings. To achieve this, while standard normal contact directions (see 3.4.1) are considered between fibers of the same tow, between fibers belonging to different tows we take as normal contact direction a vertical direction oriented according to the superimposition order defined at the crossing. In this way, fibers of crossing tows are moved step by step on top of each other. Once the tows no longer interpenetrate, standard normal contact directions are considered for all contact elements to find the equilibrium configuration.

Computation of the initial configuration for a plain weave and a twill weave sample In the following, all applications are made for the same initial arrangement of 12 glass fiber tows (6 tows in both the fill and warp directions), described in Table 1. Different tows are employed in the fill and warp directions so as to produce unbalanced fabrics. Two weaving patterns, corresponding respectively to a plain weave and a twill weave, are applied to this initial arrangement made of 408 fibers (Fig. 8). During the process of determination of the initial geometry, around 80 000 contact elements are generated. Figure 8. Initial configuration of tows before weaving.
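A minimal sketch of this superimposition-driven choice of normal directions is given below (Python/NumPy; the function signature and data structures are illustrative assumptions, not the authors' implementation). Contacts between fibers of the same tow keep the standard center-to-center direction, while contacts between fibers of different tows use a vertical direction whose sign follows the superimposition order of the crossing.

```python
import numpy as np

EZ = np.array([0.0, 0.0, 1.0])  # vertical direction used during the transient stage

def normal_direction(center_i, center_j, tow_i, tow_j, crossing_above, transient):
    """Choose the normal direction of a contact element.

    center_i, center_j : centers of the two cross-sections in contact
    tow_i, tow_j       : tow indices of the two fibers
    crossing_above     : dict mapping a pair of crossing tows to the tow that
                         must end up on top (from the weaving pattern)
    transient          : True during the initial-configuration stage
    """
    if transient and tow_i != tow_j:
        # Fibers of different tows: vertical direction, oriented so that the
        # tow that should be above is pushed upwards.
        top = crossing_above[frozenset((tow_i, tow_j))]
        return EZ if top == tow_i else -EZ
    # Same tow, or equilibrium stage: standard center-to-center direction.
    d = center_i - center_j
    return d / np.linalg.norm(d)

# Example: at the crossing of tows 0 (warp) and 6 (fill), tow 6 must be above.
crossing_above = {frozenset((0, 6)): 6}
n = normal_direction(np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.0, 0.0]),
                     0, 6, crossing_above, transient=True)
print(n)   # -> [0, 0, -1]: the fiber of tow 0 is pushed below the fiber of tow 6
```

Once the tows are separated, calling the same function with transient=False everywhere recovers the standard directions used for the equilibrium computation.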
The view of slices at some steps during the process of determination of the initial configuration (Fig. 9) shows how fibers of different tows are gradually moved apart. Figure 9. Slices at different steps during the process of determination of the initial configuration for the plain weave and the twill weave. Much information about the fabric geometry is obtained as a result of this initial simulation. The trajectories of both fibers and tows, the varying shapes of tow cross-sections and the local curvatures along fibers are very useful data which can be recovered from this first computation. Figure 10. Computed initial configurations for plain and twill weaves. Figure 11. Details of the computed initial configurations for plain and twill weaves.

ADDITION OF AN ELASTIC MATRIX In order to consider a composite sample, two layers of an elastic material are added on both sides of the fabric. The meshing of the layers is carried out automatically by the software. This mesh is relatively coarse compared to the size of the fibers, and slightly overlaps the external fibers of the tows. In this way, the meshes of the fibers and of the matrix are non-conforming. To ensure the mechanical coupling between the fabric and the matrix, junction elements are generated in the overlapping region between the matrix and the fibers. These elements couple pairs of material particles, one belonging to a fiber and the other belonging to the matrix, by means of an elastic spring whose stiffness is calculated as a function of the Young's modulus of the matrix.

Presentation Various loading cases can be simulated by applying different loads (displacements or forces) on the borders of the reconstructed samples of dry fabric or textile composite. Rigid bodies are introduced in order to globally drive sets of fiber ends or tow ends. The nonlinear effect at the start of the loading, which is stronger for the twill weave sample, can be mainly explained by a change in the fabric geometry due to a deformation of tows and a possible rearrangement of fibers in tows. This can be observed in Fig. 14, where slices of the twill weave at the beginning and at the end of the test show the changes in tow cross-section shapes and tow trajectories. Figure 14. Slices of the twill weave fabric before (above) and after (below) a 2% elongation test, showing deformation and rearrangement within tow cross-sections.

Shear test on the twill fabric composite A shear test is performed on the composite textile sample for the twill weave. The view of the deformed mesh is presented in Fig. 15. The shear deformation is possible until a kind of locking is reached when neighboring parallel tows come into contact. In the zoomed view in Fig. 16, the deformation of the tows can be observed. Figure 15. View of the textile composite sample submitted to a shear test. Figure 16. Slices at the beginning and at the end of the shear test showing changes in the shape of tow cross-sections.

Bending test on the twill fabric composite The last test conducted on the composite sample is a bending test, for which opposite rotations are imposed on two borders of the sample. A global view of the sample is presented in Fig. 17, and a corresponding slice in Fig. 18. This test shows the ability of the model to handle large displacements.

CONCLUSIONS A general approach for the simulation by the finite element method of the mechanical behavior of fabrics and fabric-reinforced composites has been presented.
This approach, formulated in a large displacements and finite strains framework, is based on the representation of all the fibers constituting woven structures by means of 3D beam models, and on taking into account the contact-friction interactions between fibers. The detection of the numerous contacts occurring in general assemblies of fibers is one of the key points of the model. Based on the determination, at a first level, of proximity zones between fibers, and on the construction of intermediate geometries to approximate the actual contact zones, the method automatically generates contact elements made of pairs of material particles. This method is general enough to apply to the many contact configurations encountered between fibers in woven materials. Since the process of determination of contact elements depends on the relative positions of the fibers, it has to be repeated during the solving of the problem for each loading step. Robust models and efficient algorithms are required to get a good convergence rate for the solving of the nonlinear problem. Thanks to these optimized models and algorithms, samples of woven fabrics involving a few hundred fibers and about 100 000 contact elements can be studied with a reasonable CPU time. The model is first applied to the determination of the unknown initial geometry, by making the fibers gradually fulfil the superimposition order at crossings defined by the weaving pattern. Meaningful information related to the yarn trajectories and shapes, and to the trajectories and curvatures of the fibers, is obtained as a result of this initial simulation. In order to create a sample of textile composite, an elastic matrix is added to the numerically manufactured fabric and is automatically meshed. Various loading cases can then be applied to the textile composite sample in order to identify its behavior under different kinds of loading. Thanks to the nonlinearities considered in the model, and to its ability to handle large loading increments, typical nonlinear effects of the complex behavior of textile materials can be reproduced for a wide range of loadings. The proposed simulation, based on the identification of very few parameters (mechanical properties of the fibers, definition of the tow arrangement and of the weaving pattern), appears to be a suitable tool to describe and understand the complex phenomena occurring at the scale of fibers, and to predict the complex global mechanical behavior of textile composite materials at the macroscopic scale, as well as very localized phenomena such as fiber breakage at the microscopic scale.
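To make the proximity-zone stage recalled above concrete, here is a minimal sketch (Python/NumPy); the function name, the data layout and the brute-force nearest-point search are illustrative assumptions rather than the authors' implementation, which would additionally need spatial sorting to remain efficient for hundreds of fibers.

```python
import numpy as np

def proximity_zones(line_a, line_b, threshold):
    """Return index pairs (i, j) of nodes of two beam center lines that are
    closer than `threshold`; contiguous pairs would then be grouped into
    proximity zones, and an intermediate geometry built as their average.

    line_a, line_b : (n, 3) and (m, 3) arrays of node coordinates
    """
    pairs = []
    for i, p in enumerate(line_a):
        d = np.linalg.norm(line_b - p, axis=1)   # distances to every node of line_b
        j = int(np.argmin(d))                    # nearest node on the opposite line
        if d[j] < threshold:
            pairs.append((i, j))
    return pairs

# Two nearly parallel fibers whose center lines come close in places
s = np.linspace(0.0, 10.0, 51)
fiber_1 = np.stack([s, np.zeros_like(s), np.zeros_like(s)], axis=1)
fiber_2 = np.stack([s, 0.3 + 0.2 * np.sin(s), np.zeros_like(s)], axis=1)

zones = proximity_zones(fiber_1, fiber_2, threshold=0.35)
print(f"{len(zones)} node pairs close enough to generate contact elements")
```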
2014-10-01T00:00:00.000Z
2009-12-07T00:00:00.000
{ "year": 2009, "sha1": "352380a5de5fc92e8a939723307b75a2de37f6e6", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ca8cdc9b2ba727d928436f0a1ae8c5eae73c727b", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
233207997
pes2o/s2orc
v3-fos-license
Identification and distribution of Brachyspira species in feces from finishing pigs in Argentina Background and Aim: Brachyspira are Gram-negative, aerotolerant spirochetes that colonize the large intestine of various species of domestic animals and humans. The aim of this study was to determine the presence and distribution of different species of Brachyspira presents in feces from finishing pigs in Argentina. Materials and Methods: Fecal samples (n=1550) were collected from finishing pigs in 53 farms of the most important swine production areas of Argentina, and Brachyspiras species were identified by bacteriological and molecular methods. Results: The regional prevalence of Brachyspira spp. was at the level of 75.5% (confidence interval 95%, 62.9-87.9), and it was lower among those farms with >1001 sows. One hundred and twenty-eight isolates of Brachyspira were properly identified and the species found were: Brachyspira hyodysenteriae, Brachyspira pilosicoli, Brachyspira innocens, and Brachyspira murdochii. B. hyodysenteriae and B. pilosicoli had low prevalence (1.9% and 7.5%, respectively), B. innocens was isolated from 34% of the farms and B. murdochii was found in 39.6%. Conclusion: The present study provides epidemiological data about herd prevalence of the different Brachyspira species in Argentina, showing that the prevalence figure seems to be higher than that reported in other countries. Introduction Brachyspira are Gram-negative, aerotolerant spirochetes that colonize the large intestine of various species of domestic animals and humans. Brachyspira hyodysenteriae has been recognized as the etiological agent of swine dysentery, a swine disease characterized by a severe muco-hemorrhagic colitis in growing pigs, and Brachyspira pilosicoli is responsible for a condition known as porcine colonic spirochetosis, which has a negative impact on pig production as a consequence of the muco-catarrhal colitis, green or brown diarrhea, and the poor performance of fattening pigs [1]. Other known species of the genus Brachyspira are, Brachyspira innocens, Brachyspira murdochii, and Brachyspira intermedia which are also found in pigs colon; these species are also responsible for mild colitis in pigs except B. innocens that is regarded as non-pathogenic. However, the previous reports on clinical cases and experimental studies [2,3] have found that one single species of Brachyspira or mixed infection, that is, various species of Brachyspira in a host could cause mild degrees of colitis. Other reports showed B. innocens, B. murdochii, and B. intermedia individually associated with pathological colitis [4], or in mixed infections with other infectious agents [5]. Recently, Brachyspira hampsonii has been recognized as a new species of Brachyspira that is pathogenic as it has been isolated from clinical cases of swine dysentery in Canada [6]. Isolation and identification of Brachyspira spp. by bacteriology are not a straightforward approach, because of their slow growth and the difficulty of getting isolates in pure culture. At present, different molecular methods are available for detection and identification of Brachyspira species from fecal samples [7,8], or from bacteriological cultures [9], but the protocols need to be updated continuously [10]. One of the most useful genes to assess the diagnosis of Brachyspira spp. is the NADH oxidase (nox), a relatively conserved gene amongst Brachyspira spp., which allows differentiation and proper identification of the different species [9]. 
In Latin America, only limited studies have been carried out concerning the presence of different species of Brachyspira. In Brazil, some regional studies identified B. hyodysenteriae and B. pilosicoli in pigs [11,12]. In Mexico, strongly and weakly hemolytic spirochetes were identified in 22% of pig farms [13]. In Argentina, although both pathogenic and non-pathogenic Brachyspira spp. have been identified by different diagnostic methods, very few epidemiological studies have been conducted, and thus little is known about the presence and distribution of the different Brachyspira species [14]. The aim of this study was to determine the presence and distribution of the different Brachyspira species present in feces from finishing pigs in Argentina.

Ethical approval The study was approved by "Comité de Ética de la Investigación" (RR 852/11) of the National University of Río Cuarto (UNRC). All animal procedures carried out in our study were performed in accordance with international regulations.

Study period and location In order to determine the prevalence of infection in the Pampean region of Argentina, the main pig-producing area in the country, 53 commercial farrow-to-finish pig farms were sampled between October 2011 and March 2013.

Farms, animals, and samples According to the 2012 Integrated System for Animal Health Management, provided by the National Animal Health, Food Safety and Quality Service of Argentina, at the time of the present study the number of confinement pig farms with more than 200 sows in the region was 322, with a total figure of 153,350 sows. Most farms (84%, 270 farms) were located in the provinces of Buenos Aires (37%), Cordoba (24%), Santa Fe (16%), and Entre Ríos (7%). The number of farms necessary to assess the prevalence of Brachyspira spp. positive farms in the region was calculated according to Thrusfield [15], considering an expected prevalence of 50%, with 95% confidence and a precision of 13%. Then, a stratified sampling method was designed according to the number of farms in each province. The number of fecal samples to be collected was n=30 so as to detect the presence of at least one positive sample for intestinal spirochaetes [15]. Samples were collected from pigs of 22 weeks of age, with or without diarrhea, taken directly from the rectum by manual stimulation, placed in polyethylene bags and kept at 4°C until reaching the laboratory for processing, which occurred no longer than 48 h after collection. A total of 1550 fecal samples were collected. A farm was considered positive when characteristic Brachyspira spp. growth on selective culture plates was confirmed by the observation of Gram-negative stained spirochaetes under optical microscopy.

Bacteriology and biochemical identification The fecal samples were plated onto Brachyspira selective medium (BSM) made up of Columbia Agar (Oxoid Ltd., Hants, UK) supplemented with 7.0% horse blood and the antibiotics colistin (25 mg/mL), vancomycin (25 mg/mL), and spectinomycin (400 mg/mL) (Rosco Diagnostica, Taastrup, Denmark).
Inoculated plates were incubated at 42°C for 7 days in anaerobic jars, using the AnaeroGen GasPak system (Oxoid Ltd., Hants, UK). After the incubation, smears were made from positive culture plates that showed the characteristic growth of Brachyspira (strongly or weakly hemolytic). Brachyspira isolation was confirmed by observation of Gram-negative stained spirochetes using light microscopy examination. Prime isolates were subcultured on BSM to obtain pure cultures. Brachyspira isolates were identified by biochemical testing (Rosco Diagnostica, Taastrup, Denmark) for the preliminary identification of species according to a method previously described [16]. Pure isolates were stored at −70°C for further purposes.

Identification by polymerase chain reaction (PCR) DNA from isolates was extracted using an organic extraction method (DNAzol, Invitrogen, USA). Two PCR approaches were used for species-specific identification of the Brachyspira isolates: (i) duplex PCR for the identification of B. pilosicoli and B. hyodysenteriae [7], and (ii) restriction fragment length polymorphism-PCR (RFLP-PCR) for any other Brachyspira species [9,17], with some modifications. For the RFLP-PCR, the products were digested with Dpn II and Scf I, and the fragments were separated on a 3% agarose gel by electrophoresis and stained with ethidium bromide. The standardization of the PCR testing was done using DNA from reference strains of B. hyodysenteriae, B. pilosicoli, B. murdochii, and B. innocens provided by Dr. Enrique Corona-Barrera (Mexico).

Identification by sequencing Sequencing was carried out particularly on isolates for which biochemical identification was not clear, so the species identification was done by nox gene PCR amplification followed by sequencing using primers as described previously [18]. Briefly, the PCR products were purified with the commercial kit Puriprep S (Inbio Highway, Tandil, Argentina), quantified (NanoDrop ND 1000, Thermo Fisher Scientific, USA), and sequenced (ABI PRISM® 3130xl Genetic Analyzer, Applied Biosystems, CA, USA). The sequences were edited using BioEdit, aligned with ClustalW and compared with the GenBank database using BLAST.

Statistical analysis Brachyspira prevalence was calculated by dividing the number of positive farms by the total number of sampled farms in a region or province. The prevalence of a particular Brachyspira species was calculated by dividing the number of farms positive for that particular species by the total number of sampled farms. Statistical analysis, including interquartile range (IQR), Pearson's Chi-squared test, confidence interval (CI), and standard deviation (SD), was carried out using Epidat software version 3.1 (Xunta de Galicia, OPS-WHO, Spain).

Results The number of herds necessary to estimate the prevalence was n=49. In addition, four farms of 150 sows located in the area of the study were also included as they sent samples to the diagnostic laboratory at the university, giving a total of 53 farms in the study. Farms with more than 200 sows represented 15.2% of the total number of farms, for which a figure of 40,585 sows was recorded. The mean herd size in this study was 500 sows (IQR=300-1000). The regional prevalence of herds infected with different Brachyspira species was 75.5% (CI 95%, 62.9-87.9); in other words, 40 of the 53 sampled farms were culture positive for Brachyspira. The highest occurrence of Brachyspira spp. was observed in the province of Córdoba. Many species of intestinal spirochetes were found in that region (Table-1).
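For readers who want to reproduce these headline figures, a minimal sketch follows (Python); the Thrusfield sample-size formula with a finite-population correction and a normal-approximation confidence interval are assumed here, so the computed interval differs slightly from the Epidat output reported above.

```python
import math

# Hypothetical re-calculation of two reported figures; the exact formulas used
# by Thrusfield and by Epidat 3.1 are assumed, so small discrepancies with the
# published values are expected.

# 1) Herds needed for an expected prevalence of 50%, 95% confidence, 13% precision,
#    with a finite-population correction for the 322 eligible farms.
z, p, d, N = 1.96, 0.50, 0.13, 322
n0 = z**2 * p * (1 - p) / d**2          # simple random sample size (~57)
n = n0 / (1 + n0 / N)                   # finite-population correction (~48.3 -> 49)
print("herds required:", math.ceil(n))

# 2) Farm-level prevalence and a normal-approximation (Wald) 95% CI for 40/53 farms.
pos, total = 40, 53
prev = pos / total
se = math.sqrt(prev * (1 - prev) / total)
lo, hi = prev - z * se, prev + z * se
print(f"prevalence: {100*prev:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")
```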
In the provinces of Buenos Aires, Córdoba and Santa Fe only B. pilosicoli was found, and in Córdoba Province, B. hyodysenteriae was also detected (Table-2). Farms were categorized in quartiles (C1-C4) according to the number of sows, and the proportion of positive herds for each quartile was calculated (Table-3). No statistical difference between herd size and presence of Brachyspira was found. The prevalence of Brachyspira spp. was lower (37%) into the upper quartile (C4), among the larger farms (>1001 sows) (Chi-square Pearson=7.36, p=0.0613). Characteristic growth of intestinal spirochetes on cultured plates was confirmed by Gramstained smears, followed by microscopy. Isolation of Brachyspira on bacteriological culture was achieved on 19% (290/1550) of the cultured samples. The mean of Brachyspira positive cultures per farm was 18.9% (SD=17.3, range 0-62.1), and the highest figure of positive culture was 50-62% observed in four farms with a herd size between 680 and 1200 sows. One hundred and 28 Brachyspira isolates were properly identified by the different testing procedures; the Brachyspira species found in this study were B. hyodysenteriae, B. pilosicoli, B. innocens, and B. murdochii (Table-2). Although samples (n=30) from eight farms were recorded as Brachyspira culture positive, these isolates were not possible to identify by the molecular techniques. It was also observed that more than one species of Brachyspira was found in 22.5% (9/40) of the farms. B. pilosicoli was isolated from farms of Buenos Aires (n=1), Córdoba (n=2), and Santa Fe (n=1). In some farms, B. pilosicoli was found alone in 16.7% (5/30) of the samples, whereas in other farms, this The most frequent Brachyspira species found were B. innocens and B. murdochii. B. innocens was found in 45% (24/53) of the farms and in ten farms, it was identified as the only Brachyspira species ranging from 3% to 30% of the positive samples, this species was also found in 8 (12.2%) of the farms together with B. hyodysenteriae (n=1), B. pilosicoli (n=2), and B. murdochii (n=8). The most frequent Brachyspira species found in this study was B. murdochii, as it was present in 53% (28/53) of the Brachyspira culture positive farms. It was also recorded that some of the farms (n=4) had mixed Brachyspira infections, as found in four different farms where at least one sample of each farm had more than one Brachyspira species. Discussion This study was conducted on pig farms located in the most important swine production areas of Argentina, where there was no previous data recorded on the prevalence of Brachyspira. Four additional farms located in neighboring provinces with small number of sows were also included in the study. Brachyspira was isolated from feces of pigs of all the provinces included in this study, which gave an isolation rate of 76% (40/53) of the samples taken from 53 farms with more than 200 sows. The prevalence of Brachyspira in pig farms has been reported in other countries. For instance, in 2000, it was found that 50% of the farms in Denmark were positive to isolation of intestinal spirochetes as Brachyspira was found in 79 pig farms; moreover, in that study, other porcine enteropathogens were also reported [19]. In Mexico, out of a total of 73 farms from pig production areas, 16 (21.9%) were culture positive for intestinal spirochetes showing strong and weak hemolysis on blood agar plates [13]. 
However, the prevalence of Brachyspira positive farms in Argentina seems to be higher than that of Denmark and Mexico, whereas, in a study in Poland including finishing pigs from 20 farms, the prevalence of Brachyspira spp. was higher than 85% [20]. In other studies, intestinal spirochetes have been found in 63.2% (24/38) of pig farms in the state of Rio Grande do Sul (Brazil) and a remarkable prevalence of 100% was found in 22 farms with no use of antibiotics in the feed [11]. The relationship between the use of in-feed antibiotics and Brachyspira prevalence has been suggested previously [21,22]. Thereby, the use of antibiotics in the feed may be related to the high isolation rate found in our study, particularly in smaller farms (<300 sows), where the use of antibiotics was less frequent (data not shown). The age of pigs at the time of sampling in this study may also influence the high prevalence found, as it has also been reported in a recent study [20]. Although no statistical association was found between herd size and isolation of Brachyspira spp., the prevalence of Brachyspira was lower in farms with >1001 sows (C4). That could be due to more effective biosecurity measures and strategic programs on the use of antibiotics coupled with more technology applied to production. In this study, a total of 290 samples were positive isolated and out of those, 128 isolates (44%) were properly identified at the species level (Tables-1 and 2). In some studies, bacteriological culture has shown higher sensitivity for detecting B. hyodysenteriae and B. pilosicoli as compared to PCR [8] or other available diagnostic tests [23]. In our study, samples were considered positive when the characteristic hemolysis on culture plates was seem and confirmed by observing Gram-negative spirochetes on microscopic preparations. However, for some samples, it was not possible to identify them fully at species level. Brachyspira is regarded as a fastidious microorganism, which is a feature that reduces the success of identification of prime isolates as the growth sometimes is so poor that its replication ends up in no further growth on subcultured plates, so the amount of growth might not be sufficient in the first instance. In other cases, it was not possible to get pure culture after repeated subculturing. Several studies have shown drawbacks on the identification of Brachyspira isolates. A study reported that out of 876 samples, only 67 intestinal spirochaetes isolates were obtained and just a few were identified but not so accurate due to variations on biochemical testing [13]. It has been pointed out that as there are known variations and discrepancies on the identification of Brachyspira species using biochemical tests, genotype identification is preferred and absolutely necessary for the proper identification of Brachyspira species [9]. Other studies have also reported issues on the identification of Brachyspira isolates by PCR, as there was trouble on 15% of the isolates in a study on Brachyspira from swine [4]. The issues on the identification of Brachyspira isolates could be due to genetic alterations on the bacterial genome, which may lead to no amplification of genetic material or no detection by the primers used in the PCR test. It is also known that more than one species of Brachyspira could be present in the same sample, so this might interfere with the punctual identification of the species by PCR techniques. 
The combination of bacteriological and molecular techniques has been used in several studies to address the difficulty of getting mixed results for the identification of samples at the species level [8], or to achieve proper detection of Brachyspira in mixed culture samples, or to identify a new species, such as Brachyspira suanatina which showed the same phenotype as B. hyodysenteriae [24]. Phenotypic variations are frequent in strains of B. hyodysenteriae; weakly hemolytic strains or negative indole strains may be present in a low percentage, as identified in the present study, which suggests that they must be confirmed by molecular techniques or sequencing [25][26][27]. But also, as Hampson et al. [28] have proposed, the diagnostic methodology needs to be reviewed regularly but should include both culture and molecular techniques. In our study, the combination of diagnostic testing allowed us the identification of more isolates at species level, which was more encouraging for our research. The novel Brachyspira species, B. hampsonii and B. suanatina, were not identified by sequencing among the strong hemolytic isolates in our study. This is an important outcome as those species have been recognized as emergent species in clinical cases of swine dysentery [6,24]. In our study, B. hyodysenteriae and B. pilosicoli were found in low prevalence (1.9% and 7.5%, respectively) at the Pampean region of Argentina, even lower than that described in other studies (8.3% and 16.6%, respectively) [14]. This may be related to the sampling strategy in the present study, where feces samples were taken from fattening pigs with and without diarrhea. In fact, isolation of B. hyodysenteriae from herds with clinical signs of swine dysentery was previously reported (data not shown). In other countries, B. hyodysenteriae was identified in 30.4% (24/79) of the isolates from pigs with diarrhea, while other Brachyspira species were isolated at a lower rate [4]. However, in a study from 600 samples of 20 farms with the previous infection history for B. hyodysenteriae, 24% of the samples were identified among this species, while 45% corresponded to B. innocens and a lower percentage for B. murdochii and B. pilosicoli, 13, 5, and 9.4%, respectively [20]. The prevalence of B. pilosicoli in our study is similar to that reported in Brazil, where the isolation rate of B. pilosicoli was 4.36% in 46 farms in the area of Minas Gerais during a survey on intestinal pathogens [29]. The previous findings in the region reported a higher prevalence (45.5%) from a total of 22 confined "wean to finish" and fattening farms that were on the no feed medication category [11]. A recent study in Poland also reported that the prevalence of herds infected with B. pilosicoli was 13.7% and with B. hyodysenteriae was 18.9% [30]. However, the differences in diagnostic strategies may also influence the results. The most prevalent species detected in our study were B. innocens and B. murdochii. B. innocens was isolated from 34% of the sampled farms, being the only species recovered from 19% of the total farms. These results are similar to those found in Denmark [19], where these species were found in 34.2% and 19%, respectively, also from herds with no clinical signs observed at the time of sampling. Another study also found those species as the most prevalent from animals with diarrhea [3], but other studies that collected sampled from pigs diarrhea found B. hyodysenteriae and B. pilosicoli as the most prevalent species [4]. 
As it has been mentioned above, the prevalence of B. murdochii was higher than that of B. innocens in this study; these results are consistent with previous studies where B. murdochii was the most prevalent among the weakly hemolytic Brachyspira species found in Austria [5]. Our findings show that more than one species of Brachyspira were found in nine farms, all the cases had in common the presence of B. innocens. In agreement with the previous studies [4,8,9], different species were simultaneously identified in a farm, but mixed Brachyspira species were found in only four fecal samples. It was previously proposed [1], that more than one species of Brachyspira could be found in a sample, of which some might not be pathogenic, so this makes the diagnosis of pathogenic Brachyspira more difficult. Several studies associate the presence of B. innocens and B. murdochii with mid catarrhal diarrhea and colitis in pigs with the presence of bugs in intestinal tissue samples [4]. Such enteric scenario has been reproduced experimentally [2]. In this context, the high prevalence of B. innocens and B. murdochii found in our study, as in other studies, emphasizes the need for further studies to determine the pathogenic potential of these spirochetes in healthy and diarrheic pigs, as it has been suggested previously [3]. To the best of our knowledge, this is the first study of herd prevalence and identification of the different Brachyspira species in the central region of Argentina, where most of the important pig production farms are located. The combination of different diagnostic tests allowed the identification of a larger number of isolates and this approach will help us to conduct further studies on pathogenicity, genotyping, and antibiotic susceptibility of Brachyspira species in Argentina. Conclusion The present study provides epidemiological data about herd prevalence of the different Brachyspira species in Argentina, showing that the prevalence figure seems to be higher than that reported in other countries. Authors' Contributions AC, JP, AA, and GZ designed the research work. AC, JP, MFL, and PC carried out the laboratory work. AC and PT analyzed the Bioinformatic data. AC and JP made the statistical analysis and drafted the manuscript. GDC, GZ, and EC helped in manuscript preparation. All authors have read and approved the final manuscript.
2021-04-12T05:47:43.507Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "5d6e2d4602db43170009b9a6b334f254c439794e", "oa_license": "CCBY", "oa_url": "http://www.veterinaryworld.org/Vol.14/March-2021/9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5d6e2d4602db43170009b9a6b334f254c439794e", "s2fieldsofstudy": [ "Medicine", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
132056731
pes2o/s2orc
v3-fos-license
Environmental, Behavioral Factors and Filariasis Incidence in Bintan District, Riau Islands Province The microfilaria rate of filariasis in Bintan District remains high, especially in Teluk Bintan, Teluk Sebong, and Sri Kuala Lobam Subdistricts. This study aimed to determine the relation of environmental risk factors (physical, biological, chemical, socio-cultural, economic) and behavioral factors with filariasis incidence. The study was an analytic observational study conducted from May to September 2015 using a case-control design, comprising 33 filariasis sufferers as cases and 65 non-filariasis sufferers as controls, selected by a cluster sampling technique. The study population was the people of Bintan District. The data obtained were then analyzed using chi-square and logistic regression tests. Results showed knowledge (p value = 0.045; OR = 1.365), wire-net use (p value = 0.048; OR = 1.381), stockyard (p value = 0.018; OR = 3.5), swamp (p value = 0.038; OR = 1.358), plantation/forest (p = 0.035; OR = 0.373) and mosquito-net use (p value = 0.036; OR = 1.417) to be related to filariasis incidence as risk factors. In conclusion, the variables most related to filariasis incidence in Bintan District are knowledge (OR = 6.154), mosquito-net use (OR = 3.861) and distance to swamp (OR = 3.668).

Introduction Filariasis, known as elephantiasis, remains one of the public health problems in the world, especially in Indonesia. In 2004, filariasis infected 120 million people in 83 countries, mainly in tropical and sub-tropical regions. It is estimated that one fifth of the world's inhabitants, or about 1.1 billion people, are at risk of getting filariasis infection. 1 A rapid survey of filariasis in 2000 reported that this disease had spread to all provinces in Indonesia, covering 231 regencies, 674 community health centers, and 1,533 villages, with around 6,500 chronic clinical cases (elephantiasis). Meanwhile, in 2004, the number of clinical cases, both acute and chronic, increased to 11,969. 2 In clinical filariasis, sufferers of filarial worm infection show symptoms that involve the lymph ducts and lymph glands, damage to the lymphatic system, and swelling of the hands, legs, glandula mamae and scrotum. Unlike malaria and hemorrhagic fever, filariasis can be transmitted by 23 mosquito species from the genera Anopheles, Culex, Mansonia, Aedes, and Armigeres. Therefore, this disease can circulate very rapidly. 3 The results of a study conducted by Ramadhani,4 showed a high microfilaria count and acute filarial morbidity rate (0.4%), as well as a high density of the microfilarial parasite Wuchereria bancrofti as one of the filariasis agents. This disease may lead to permanent physical disability, social stigma, and psychosocial barriers, as well as decreased work productivity of individuals, families and the community, leading to huge economic loss. 5 Filariasis is influenced by physical environmental conditions encompassing climate, geographical situation, geological structure, etc. Physical environmental factors are highly related to the breeding and resting places of the mosquito vectors. The environment of breeding places (swamp) with water plants and the existence of reservoir host animals (such as monkey, langur and cat) strongly affect the spread of filariasis caused by Brugia malayi of both the sub-periodic nocturna and non-nocturna types.
5 A publication of Srividya, et al, 6 titled "A Geostatistical Analysis of the Geographic Distribution of Lymphatic Filariasis Prevalence in Southern India" yielded a prevalence disparity between filariasis cases among people living in mountain and coastal areas. Bintan District, Batam City and Lingga District are filariasis-endemic areas in Riau Islands Province. 7 A finger blood survey was held in Tembeling Village and Bintan Buyu Village in 2012 in which the blood samples were examined in the laboratory of Environmental Health Technique (Balai Teknik Kesehatan Lingkungan/BTKL) of Batam. The examination found 53 microfilaria positive blood samples, for instance, 28 out of 318 samples in Tembeling Village (8.8%) and 25 out of 343 samples in Bintan Buyu Village (7.3%). Microfilaria rate in Bintan District by 8% meant that the district is categorized as filariasis-endemic area that should follow the eliminating program. Mass medication is conducted once a year for a period of five years. It aims to reduce the prevalence of microfilaria to less than 1%, and to improve the management of clinical cases, so the disease no longer becomes public health problem. In 2014, the number of cases increased to 66 in Teluk Bintan, Teluk Sebong and Sri Kuala Lobam Subdistricts, and were comprised of 45 males and 21 females. 7 The geographical condition of Bintan District constitutes of highlands, forest, plantation and swamp areas. Major occupations of the local people are farmer, plantation worker, fisherman and trader. There is a habit among the people to come and sit together in food stalls, especially during the night. 7 The high of fillariasis cases and the still unknown factors related to those cases were the main reasons for this study that aimed to find out relation between environmental and behavioral risk factors with filariasis incidence in Bintan District in 2015. Method This study was an analytic observational using casecontrol design. Location of study was in Bintan District as conducted from May to September 2015. Study population was 66 filariasis cases taken from the data of Bintan District Health Agency in 2014. Samples were all those population, and some inclusion criteria applied were receiving health services from the government and still living in Bintan District at the time of study. However, persons who refused to be interviewed were excluded from the sample list. Based on those criteria, 33 samples of filariasis cases were selected by following cluster random sampling technique. To fulfill the case control design, a number of non-filariasis sufferers were also involved in. By using a ratio of 1:2 between the case and the control, 65 people were selected as the control group. The condition in study location made the number of control hard to reach its optimal size, for instance, there was one case that only had one neighbor. The distance from respondents' houses to sub-variables of stockyard, swamp, bushe, seashore, plantation/forest (supportive and not supportive) were measured by using measuring-tape and stated in meter unit. The sub-variables of salinity (normal and high) and pH (high and low) were measured by using salinity-meter and universal indicator respectively. The sub-variables of wire-net, ceiling, and ditch/sewerage (meet and unmeet the requirements), mosquito-net (in-use and not in-use), and reservoir animals (present and not present) were observed by using check list. 
The sub-variables of income (high and low), gender (male and female), age (produc-tive and non-productive), education (high and low), occupation (employed and unemployed), knowledge (good and bad), attitude (good and bad), mosquito repellant (in-use and not in-use), clothes hanging (yes and no), night going-out (yes and no) were measured by using questionnaire. Filariasis cases as the dependent variable were obtained from documents of Bintan District Health Agency. Meanwhile, as the controls were neighbors whose houses nearest to the cases' houses and who were not suffered from filariasis. The data were then analyzed by using chi square test and logistic regression test at confidence level of 95%. Results Results of statistical analysis concluded that independent variables which had relation with filariasis incidence were knowledge (p value = 0.045; OR: 1.365), ventilation installed with wire-net (p value = 0.048; OR: 1.381), and distance between house and stockyard (p value = 0.018; OR: 3.500), swamp (p value = 0.038; OR: 1.358), and plantation/forest (p value = 0.035; OR: 0.373) as well as the use of mosquito-net (p value = 0.036; OR: 1.417). Meanwhile, sex, age, education, oc- cupation, the presence of ceiling and ditch/sewerage, the distance to bush and seashore, water pH, income, mosquito repellent use, the habit of night going-out and outfit hanging as well as presence of reservoir animals had no significant relation with filariasis incidence (all p value > 0.05). The complete results could be seen in Table 1. Table 2 presented multiple logistic regression to variables with chi square test results that were eligible (p value < 0.25). Based on the results, the most influential variables to the incidence of filariasis in Bintan District were knowledge (p value = 0.002; OR = 6.154); netting (p value = 0.016; OR = 3.861); and swamp (p value = 0.017; OR = 3.668). The equation is y = -3375 + 1.245 (education) +1.817 (knowledge) + 1.345 (gauze) + 1.300 (marshes) + 1.108 (gardens / forest) + 1.351 (bed nets) + 1.098 (night going-out). The equation was applied to predict the probability of suffering from filariasis disease incidence. Discussion In general, number of respondents with good knowledge was 61 persons (62.2%). Based on the statistical test, there was a relation between knowledge and the incidence of filariasis in Bintan District (p value = 0.045), and respondents with low knowledge were 1.365 times more likely to contact with filariasis. Based on results of the interviews, on average the respondents could answer the questions about filariasis. This was because the health officers often provided information to people living in high risk areas. The presence of some people who had low knowledge was because they lived in remote areas and had low education, so this condition was potential to influence their knowledge and understanding. Therefore, information sharing has to be raised continuously, but accompaniment by optimalizing roles of health cadres among the community has to be implemented as well. Results of this study was in line with Agustiantiningsih's, 8 study which stated that a relation was found between knowledge and preventive measures of filariasis (p value < 0.001). Preventive efforts could be applied by doing applicative yet simple elucidation activities which comprise of advices for avoiding contact with filariasis mosquito vectors by means of using mosquitonet, closing house ventilation with wire-net, and applying mosquito repellent. 
A study conducted by Uloli10 in Boneraya Subdistrict of Bone Bolango District found that education was related to filariasis incidence (p value = 0.042; OR = 2.004). A different result was reported by Ambar et al.11 in Pangku Tolole Village, where knowledge of filariasis (p value = 0.431) and of its prevention (p value = 0.159) showed no relation with incidence. In principle, varied and targeted education methods are needed for filariasis sufferers and the people living around them, for example video screenings in coffee shops or before and after community activities, and information broadcast through radio stations, designed to be inserted into popular programs such as music or news. Such efforts can hopefully reach communities in remote areas with information on the consequences of untreated filariasis, so that neighbors or family members of people with clinical symptoms of filariasis actively bring them to the nearest community health center.12 Good knowledge is expected to build a good attitude, so that individuals and the community are able to solve the health problems they face. People who maintain a poor attitude toward filariasis may do so because of a lack of knowledge and limited education, or because socialization about the disease and its preventive measures by health officers has been less than appropriate.13,14 This matters because attitudes are consolidated on the basis of positive behaviors, from which trust and belief in the benefits gained can grow.15
The proportion of respondents whose houses were far from swamps (not supportive) was 54.1% (53 persons). The statistical test showed that swamp was significantly related to filariasis incidence (p value = 0.038), with a 1.358 times higher risk of contracting filariasis. A study by Uloli10 stated that living near a swamp environment was correlated with filariasis incidence (p value = 0.017; OR = 3.563). A study by Jontari16 also concluded that there was a relation between settlements with a swamp nearby (< 500 meters) and filariasis incidence (p value = 0.008). These findings indicate that the transmission of filariasis is strongly influenced by the interaction between human behavior and a surrounding environment that can support infection.17
Wire netting installed over house ventilation aims to reduce the frequency of mosquito bites and therefore lowers the risk of contracting filariasis.18 The Ministry of Health states that the habit of using wire netting for protection against mosquitoes is highly needed, especially for people living in endemic areas or near swamps, plantations and rice fields where mosquito biting is very intense.19 A study by Sipayung et al.20 concluded that the biological environment around houses is related to lymphatic filariasis incidence in the endemic areas of Sarmi District (p value = 0.005). Upadhyayula's21 study in India found a relation between the presence of mosquito breeding places and filariasis incidence (p value = 0.002), and Ike's22 study identified a relation between the biological environment and filariasis incidence in Pekalongan District.
Lasbudi's23 study reported that mosquito density is higher in places with temperature, humidity and illumination suitable for mosquito growth and development, which increases the potential for filariasis transmission. Standing water raises the risk of infection because it allows the mosquito population to increase. The Ministry of Health notes that endemic locations for Brugia malayi are areas of forest and swamp along river courses, or water bodies full of aquatic plants.2 A study conducted by Ashari24 found a relation between the presence of aquatic plants and filariasis incidence, and also found that people living in houses surrounded by mosquito habitat were eight times more likely to contract filariasis. In addition, a study by Mulyono et al.25 concluded that standing water was a risk factor for filariasis, with a 4.14 times higher risk. Swamps are therefore closely tied to the bionomics of the mosquito vectors, since this type of environment serves as both breeding and resting habitat for the insects. Aquatic plants are breeding places for Mansonia mosquitoes, whose larvae and pupae breathe through the plants' roots beneath the water surface and through the floating stalks and leaves.26 Their need for high humidity drives mosquitoes to seek damp, wet places outside people's houses as resting sites during the day. Anopheles farauti, one of the filariasis vectors, enters houses only to take a blood meal and afterwards leaves to perch while its eggs mature; shady spots with trees are among its preferred outdoor resting places.27 Environmental management is therefore very important for controlling the mosquito vectors of disease. Immediate interventions are needed to reduce swamp areas, manage unoccupied yards and install mosquito traps, so that vector control can run more effectively.2 The government could also engage plantation companies through reinvestment and Corporate Social Responsibility (CSR) programs, for example by developing healthy houses, controlling and reducing swamps, and managing unoccupied yards that serve as mosquito breeding places, in order to diminish the biting intensity of filariasis-transmitting mosquitoes.
Based on the interviews and observations carried out, the proportion of respondents using a mosquito net was 68.4% (67 people). The test of relation showed that the habit of not using a mosquito net was significantly related to filariasis incidence (p value = 0.036), with a 1.417 times higher risk of contracting filariasis. Jontari's study concluded that sleeping without a mosquito net (p value = 0.029; OR = 1.170) was a risk factor for filariasis.16 A study conducted by Ambar11 identified a relation between prevention and self-protection by using a mosquito net or repellent and filariasis incidence (p value = 0.038); in that study, 61.25% of respondents owned a mosquito net and used it. The study by Juriastuti28 differed, finding no relation between mosquito-net use and filariasis incidence. The main effort in filariasis prevention is to avoid the bites of mosquito vectors, for example by using a mosquito net when sleeping, covering house ventilation with wire netting and applying mosquito repellent to the skin.3 Based on the multiple logistic regression test, four variables showed a significant relation to filariasis incidence, namely knowledge, mosquito-net use, swamp and going out at night.
Similar results were obtained in Febrianto's18 study, which concluded that the dominant factors for filariasis incidence were mosquito-net use and ceiling construction. Meanwhile, a study by Nasrin12 found that the most dominant risk factors for filariasis incidence in West Bangka were occupation, income, the presence of swamps and respondents' level of knowledge. The main focus of filariasis control in Bintan District should start with efforts to improve knowledge through health promotion activities, supported by education and information dissemination using pictorial banners, as well as socialization on the importance of mosquito-net use as a preventive measure against mosquito bites and distribution of nets, especially to people living in case areas. Preventive action against filariasis can be carried out by cleaning mosquito breeding places, burying used items that can collect water, draining water containers, mass insecticide spraying, wearing protective clothing such as long-sleeved garments when working on plantations, applying mosquito repellent to the skin, using a mosquito net when sleeping, not going out of the house at night, and covering ventilation with wire netting.29,30 These actions should be carried out in an integrated way through coordination among the community, the private sector and the government (across programs and sectors). Community empowerment is also needed to raise clean and healthy living behaviors.31,32 Together, these activities can contribute to the success of the filariasis elimination program declared by the local government of Bintan District.
Conclusion
No relation was found between sex, age, occupation, education, attitude, ceiling, ditch/sewerage, salinity, water pH, bush, seashore, income, mosquito repellent, going out at night, clothes hanging or reservoir animals and the incidence of filariasis in Bintan District. In contrast, knowledge, wire-net use, stockyard, swamp, plantation/forest and mosquito-net use are related to the incidence of the disease. The factors most strongly related to filariasis incidence in Bintan District are knowledge, mosquito-net use and swamp.
Recommendation
The community health centers should keep strengthening the surveillance system, especially through the subsidiary health centers spread across the remote areas of Bintan District. People should use a mosquito net or repellent when sleeping or when going out at night. Vector and environmental control (swamp and plantation/forest) should be implemented in an integrated manner by strengthening cross-sectoral coordination, including with the mining and plantation companies around the filariasis-endemic areas.
The Environmental Performance of a Remote-Region Health Clinic Building, Australia, Based on Instrumental Monitoring Both environmental (e.g., energy use) and human sustainability (occupant wellbeing/productivity) need to be considered in building design and operation. The challenging climatic and socioeconomic conditions in remote regions of Australia mean that achieving sustainability is difficult and costly. Currently, the energy use patterns, thermal performance, and indoor atmospheric quality (IAQ) of remote health clinic buildings are unknown, meaning that there is an information gap in the design and operation of such buildings. This paper reports the results of an investigation into the environmental performance of a clinic in the remote clinic of Numbulwar. Climate variables, energy consumption, and IAQ variables were instrumentally monitored at the clinic from April 2017 to March 2018 at 10-minute intervals, with data uploaded to a cloud database now holding 3 million values. Analyzed temporal variations in the measured variables for the clinic and the relationships between them reveal the performance of the building. The results obtained provide a basis for the formulation of strategic interventions, design guidance, and further investigation, including: (i) the range of indoor atmospheric conditions needs to be narrowed to provide more consistent occupant comfort; (ii) an occupancy profile needs to be developed to determine user behaviours with respect to energy use; (iii) the heat-exhaust/aircon systems need to be reviewed for more efficient use; (iv) the cycling of air, heat, moisture, and pollutants through the building needs to be further investigated; and (v) BIM should be undertaken using the data as input to test future design solutions. Introduction and background Sustainable building practices have made significant advances in the last two decades [1]. However, a truly sustainable building addresses not only environmental impacts but also human sustainability and well-being. Environmental sustainability is the ability to maintain the qualities of the natural environment, including the processes involved in producing energy, as well as the impact of buildings and human activity on the environment. Human sustainability involves specific strategies and methods for enhancing the well-being and productivity of building occupants/users [2]. Remote parts of Australia are defined based on the physical road distance to the nearest town or service centre. Remote Australia is home to around 500,000 people or 3% of the Australian population. The remote regions of Australia have common systemic problems including persistent social and economic disadvantage and weak infrastructure. These remote-region characteristics when combined with changing climate and different energy futures result in particular challenges faced by remoteregion buildings and their occupants with respect to environmental and human sustainability compared with urban areas, including generally harsher climates with greater temperature extremes and much higher electricity costs (up to $5/kWh versus $0.25/kWh) [3]. Climatic conditions in the remote regions of Australia are acknowledged to be harsh, characterized by extreme temperatures, wide temperature variation, and high seasonal variation in rainfall. Such conditions pose difficulties for achieving environmentally efficient buildings [4] and for establishing high levels of IAQ for sustaining human comfort and well-being. 
These challenges are likely to become greater, given projections of climate change (higher temperatures and general declines in precipitation) and energy requirements (increasing demand and higher prices). Remote Australia is still highly dependent on fossil fuels for transport and for household and public service energy needs. Energy consumption is rising, and the use of air-conditioning will increase with increasing temperatures in inland Australia, which will lead to even higher demand for energy to maintain current lifestyles and to address the changing requirements of an ageing population. Energy prices are likely to continue to rise as the cost of production of fossil fuel increases, and electricity production plants will be obliged to purchase emissions permits under legislation. Renewable energy sources have low uptake in Remote Australia, with barriers including the costs of development and maintenance in remote locations, perceptions, relatively immature technology, and the distance between energy sources and markets [1][5] [6]. Remote Australia is served by a network of health clinics and associated staff. Hospitals and clinics need to generate conditions that are conducive to medical treatment and recovery by mitigating the spread of disease and improving occupant comfort, as well as allowing staff to work productively. Indoor atmospheric quality (IAQ) provides a critical foundation for meeting this target. However, strategies to enhance human health and well-being have unfortunately played a minimal role in the evolution of building standards and practices in remote communities. Further, there is a continuing drive to reduce the environmental impact of buildings, particularly with respect to energy use, as well as to reduce operational costs. Therefore, it is imperative that human health and productivity, as well as environmental quality, take centre stage in building design, function, and operation so that buildings can perform better for both people and the environment. This can be achieved through a dedicated focus on evidence-based research and the measurable performance of sustainability metrics. Research purpose and objectives Climatic conditions impact significantly on a building's ability to provide the necessary conditions for occupant well-being as well as on its environmental efficiency, particularly its energy consumption. Climatic conditions have changed significantly over the last century, and predictions indicate that changes will continue to occur in the near and mid-term futures. Therefore, it is important that climatic and building performance data be available not only to assist in measuring and managing energy use and IAQ in existing health clinics, but also provide a base upon which future clinics can be planned, designed, constructed, and operated. Currently, the Northern Territory Government Department of Health (NTGDoH) has no data regarding environmental sustainability (as measured through energy use) and human sustainability (as measured through IAQ) for its remote health clinics. Such data should be a valuable tool for developing strategic interventions in existing buildings and for indicating sustainable design solutions in new buildings. This study examines the energy use, thermal performance, and IAQ of the Numbulwar health clinic building based on monitored data. The term "environmental performance" here refers to three aspects: 1. 
The energy use patterns of the building with particular attention to diurnal-nocturnal and seasonal patterns as well as relationships with other variables. 2. The thermal performance of the building. This covers, in particular, the ability of the building to minimise energy consumption by reducing the need for cooling (and potentially heating, if needed) in the interior of the building. This is a function of the design of the building and the construction materials used, and the extent to which these moderate the external ambient conditions. 3. The IAQ of the building, referring to the properties of the indoor air and the associated comfort and well-being of the building's occupants with respect to these properties. In this study, this covers the temperature and humidity of the air as well as chemical and particulate pollutants. The objectives of the study are: 1. To establish and analyze a database of variables (indicators) measuring aspects of environmental and human sustainability by instrumental monitoring at the Numbulwar clinic; Based on the monitoring and measurement of the chosen indicators, to inform decision-making and strategic interventions in the operation of the Numbulwar health clinic; and 3. To evaluate the Numbulwar clinic data with respect to understanding the operation of other existing health clinics as well as the design and construction of future health clinics. Method and data The health clinic building is located in Numbulwar in the Northern Territory ( fig. 1) and was built in early 2017. The clinic ( fig. 2) has 22 rooms in total and has a floor area of around 300 m 2 , serving around 2000 people in Numbulwar. Numbulwar has a long-term average maximum temperature of 28.8°C, a dry winter season (May-September) with humidity of 20%-50% and temperature of 15-33°C, and a wet summer season (October-April) with humidity of 30%-95% and temperature of 22-35°C. This study involved a quantitative empirical approach, with monitoring instrumentation being installed both within and outside the clinic building to continuously measure energy use, IAQ data, and external climatic data for a 12-month period. Analysis of these quantitative data provides an account of the temporal variation in the selected environmental and human sustainability indicators as well as of the relationships between the indicators. The clinic building had been constructed by 1 April 2017 but was unoccupied and non-operational prior to mid-September 2017. In mid-September, the clinic started its operational phase with staff and equipment being used for the usual duties and functions performed by a health clinic. As the measurements made cover both the pre-occupancy and post-occupancy phases of the Numbulwar clinic building, the data from each phase contribute different insights into the environmental and human sustainability of the building, as well as indicating comparative differences between the two phases. A suite of 24-hour electronic monitoring equipment was set up to record climatic, energy consumption, and IAQ data for the Numbulwar clinic for a period of 12 months. The monitoring equipment was installed in late March 2017. The monitoring devices were installed in various rooms/corridors inside the building as well as inside the ceiling/roof-space cavity. Devices were also installed on the eastern and northern external walls of the building. An energy consumption meter was were installed on the switchboard. 
Once installed, data were uploaded continuously at 10-minute intervals via the Telstra network to a cloud-based platform, from which data could be accessed via a secure portal either live or as downloads. The following data were collected from April 2017 to 31 March 2018: Climate (external) variables -temperature and humidity; total building energy consumption; and IAQ variables -temperature, humidity, and airborne chemicals and particulates. The 12-month database contains around 2,880,000 values. The database was first inspected thoroughly for data consistency and quality. A number of periods were discovered during which data were not recorded because of transmittance or other disruption (accounting for <0.1% of the total data). Selected results and findings Selected graphs (with data frequency of 10 mins) are presented that summarize some of the key variables and data trends for the clinic building, with accompanying results, interpretations, and findings. Local climate External temperature and humidity values reflect the tropical location of the Numbulwar clinic. Daytime temperatures generally exceeded 35 °C and sometimes 40 °C ( fig. 3). Maximum temperature variations were observed in April-May and December-March. Minimum (night-time) temperatures were most variable between April and September (17.5-28 °C) and more stable (~25 °C) between October and March. Monthly mean temperatures were lower from June to September compared with the other months, with the range being 26.7 °C in June to 31.0 °C in December. A typical diurnal cycle during December sees temperature starting to increase from its night-time level at about 06:00, reaching its maximum typically between 11:30 and 13:00, from which time it slowly decreases during the later afternoon, evening, and night-time to reach a short stasis period at ~05:00. Humidity values varied from 25.9% to 97.5% during the 12-month study period ( fig. 4). Maximum humidity levels are closely controlled by temperature, with maximum humidy values of 45% at 40 °C and 95% at 27 °C. On a diurnal scale, humidity was highest during the night-time. In December, for example, humidity peaked at ~04:30-06:00 (e.g., 90%) and then reduced to reach a minimum at ~11:30 to 13:00 (e.g., 35%), following which it rose through the afternoon, evening, and night to reach its next peak at the same time the next morning. Energy consumption of the building As a result of the external temperature variation, energy consumptions associated with air-conditioning are higher during the months of October to May and lower from June to September (fig. 5). Day-time temperatures have a major influence on the energy consumption of HVAC units, with higher external temperatures causing higher energy consumptions because of the greater differential between indoor and outdoor temperature. This is shown, for example, by the higher energy consumptions recorded for higher outside temperatures in maintaining a constant indoor temperature ( fig. 6). Despite the broad relationships observed between energy consumption, external temperature, and internal temperature, there is a large amount of scatter in the data ( fig. 6). 
The very wide range of external temperatures associated with the corresponding internal temperature range for the same energy consumption level, and the very wide range of energy consumptions associated with maintaining the internal temperature at 21-25.5 °C, suggest that there is potential for efficiencies to be made in the operation of the HVAC and heat exhaust systems of the clinic building and for an investigation to be made into human activity and behaviour in the clinic regarding the generation of heat and moisture. From mid-September, the clinic building became occupied and operational, with total daily energy consumption increasing from that point. September was a transitional month for the clinic, marking the start of occupancy/operation and the associated adjustments in building mechanical and electrical systems, including HVAC. Therefore, the period October-March best represents normal occupancy and building use. For this period, for energy consumptions of <8 kWh per hour (at which level it is assumed that HVAC is not running), there is a horizontal band of data for which external temperatures up to ~40 °C are associated with internal temperatures of 22.5-27 °C ( fig. 6). The horizontal band represents times before ~09:30 and after 15:00 during some weekends when a reasonable internal temperature range was maintained in the building without the use of HVAC. On these weekends, the clinic was open for shorter times and HVAC was being run only between ~09:30 and 15:00. The October-March data show a positively sloped band of external temperature data, extending from energy consumptions of ~18 kWh per hour and temperatures of ~25 °C to consumptions of ~28 kWh per hour and temperatures of ~40 °C ( fig. 6). This band covers a range of internal temperatures from ~21 to ~27 °C with most between 21 and 25.5 °C. Assuming that the variation in energy consumption above 8 kWh is due mainly to HVAC operation, then for every 3 °C rise in the external temperature, an additional 2 kWh per hour is needed to maintain the observed range in internal temperature. Data scatter means that there is a very broad range of external temperatures associated with the corresponding internal temperature range for the same energy consumption level. For example, high energy consumptions of ~25 kWh per hour occur for external temperatures of between 25 and 40 °C. In addition, energy consumptions of 10-55 kWh per hour are associated with maintaining the internal temperature at 21-25.5 °C, although most of the data lie within the 15-35 kWh range. The scatter may be due to inefficiencies in the HVAC system or variations in occupant behaviour and equipment use. Thermal performance of the building Inferences about the thermal performance of the building and the effectiveness of the building skin can be made by investigating temperature characteristics when the building is unoccupied, the HVAC system is not running, and the weather is hot and sunny. The energy use pattern during the operational period from mid-September onwards indicates that the HVAC system was running every day during the day-time. However, the period prior to mid-September, while the building was unoccupied and nonoperational, contained several weekends where HVAC was clearly not running. For one such weekend in June ( fig. 
7), the 24-hour variation in external temperature of ~10 °C is reflected by a variation in ceiling/roof-space temperature of ~1 °C (with the peak temperature lagged by ~3.5 hours) and by a variation in indoor temperature of ~2 °C (peak temperature lagged by ~4 hours). For a similar period in early September, the variation in external temperature is ~16 °C, in ceiling/roof-space temperature ~3.5 °C, and in internal room temperature ~2.5 °C. The peak temperature lag time is the same as for the June data, ~3.5-4 hours for both ceiling/roof-space temperature and internal room temperature. On this basis, the building skin and thermal envelope of the Numbulwar clinic seem effective at dampening solar gain and conduction of heat into the building from the outside. The ceiling/roof-space performs particularly well, given that it is the part of the building most exposed to direct sunlight. The daily increase in external temperature from its night-time value is typically ~15 °C for most of the year, while the typical day-time rise in building indoor temperature (unoccupied, no HVAC) is determined to be ~2 °C and the rise in ceiling/roof-space temperature ~2.5-3.5 °C.
Figure 7. Variation in external and internal temperature (northern part of building) and in ceiling/roof-space temperature for the clinic for 9-13 June, during which the building was unoccupied and no air-conditioning or other mechanical/electrical systems were operating (apart from fridges/freezers and night lighting).
Internal temperature ( fig. 3). During April and May, the internal temperature was quite variable, ranging between 26 and 32 °C with a mean of 28.0 °C. This is interpreted as representing testing of the HVAC system or minor works occurring in the building. At the beginning of June, the internal temperature dropped, probably as a result of a manual adjustment to the air-conditioning thermostat. From 3 October, the internal temperature dropped again and became more variable, ranging between 19 and 28 °C from then until the end of March, although more generally between 20.5 and 26.5 °C, with a mean of 24.2 °C. This drop is presumed to reflect an adjustment to the thermostat control of the air-conditioning system at the beginning of the occupied/operational period of the clinic. Consistent temperature differences exist between indoor locations through the operational months of October-March. In March, for example, the day-time temperatures in the waiting room were ~2 °C cooler than those in the northern consulting room, which in turn were ~2.5 °C cooler than the corridor area in the central-west part of the building. This may indicate variable airflows, variable distances from HVAC vents, and/or zoned control of air-conditioning throughout the building. The indoor air temperature range that meets the ASHRAE 55:2013 "Thermal Environmental Conditions for Human Occupancy" standard is 21.0 to 24.9 °C (the range in which 90% of occupants feel fairly comfortable). For day-time (06:00 to 18:00) hours, the clinic's internal temperature for October-March varied between 20.1 and 28.7 °C but more usually between 21.0 and 26.5 °C, with a mean of 23.4 °C. The minimum and mean temperatures appear to be acceptable, but ~10% of the data exceed 24.9 °C.
Internal humidity. The internal humidity in the northern part of the building varied from 39% to 83% between April 2017 and March 2018 ( fig. 4).
The humidity behaviour from late September to the end of March (building occupied) is characterized by high shorter-term variability but fairly stable moving-mean trends. Day-time data for October-March ( fig. 8) show that internal temperatures of ~21 °C are associated with humidity values of ~65% to 70%. The range in humidity for a particular temperature increases with increasing temperature until ~23 °C is reached, with the lower bound for humidity decreasing from ~65% to ~40% as temperature increases from ~21 °C to ~25 °C. Temperatures of 22-26 °C are associated with a wide humidity range, from 40% to 75%. Day-time humidity for October-March ranged mainly from 45% to 75% (mean 59.9%). Temperature and humidity are important IAQ metrics of a building as they largely determine the physical comfort of an occupant. The optimum humidity range for human comfort is generally regarded as 35%-65%, which, if applied to the Numbulwar clinic, would mean that around 15% of the day-time humidity values are too high. ASHRAE thermal environmental conditions for human occupancy indicate that if the humidity is 60%, a suitable temperature range is 23-25.5 °C, and if it is 30%, a suitable temperature range is 24.5-28 °C. The clinic's temperature-humidity distribution includes the former range ( fig. 8) but has a considerable amount of data outside it (a minimal sketch of how such compliance figures can be screened from the 10-minute data is given after the conclusion below).
Conclusion
This study monitored external climate variables, energy consumption, and IAQ variables for a health clinic building in the remote community of Numbulwar, producing ~2.9 million data values over a 12-month period. It is the first case in which a suite of sustainability variables has been measured over such a time-frame for a healthcare facility in Australia. An analysis of temporal variations in the monitored data, including at the diurnal and seasonal scales, as well as of differences between the non-operational and operational phases of the clinic, has allowed inferences to be made regarding building energy use patterns, thermal performance, and IAQ. Relationships between climatic variables, energy consumption, and IAQ metrics highlight where interventions might be made to optimise the building systems as well as showing where further investigation would be most beneficial. Strategic interventions and further investigation include: (i) the range of indoor atmospheric conditions needs to be narrowed to provide more consistent occupant comfort; (ii) an occupancy profile needs to be developed to determine user behaviours with respect to energy use and other aspects (e.g., [7]); (iii) the heat exhaust and air conditioning systems need to be reviewed for more efficient use; (iv) the cycling of air, heat, moisture, and possible pollutants through the building needs to be further investigated; and (v) BIM should be undertaken using the data as input to test design solutions and discover which design features of the current clinic are contributing most to building performance (e.g., [8]). As part of a wider ongoing investigation, these results will help to develop key clinic building performance indicators, inform improvements in the energy-use efficiency, thermal performance, and IAQ of remote clinics, and optimize future building design solutions via BIM simulations with respect to climate, environmental performance, and occupant well-being.
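To illustrate how such compliance and energy figures could be screened from the monitored records, the following minimal sketch (Python/pandas, not part of the study) computes the share of day-time readings outside the comfort bands discussed above, together with a simple linear estimate of the energy-versus-external-temperature relationship. The file name, column names, and the 8 kWh/h HVAC-on threshold are illustrative assumptions based on the text; the study itself does not specify its data format or analysis code.

```python
import pandas as pd
import numpy as np

# Illustrative column names and file path; the actual cloud-database export format is not specified.
df = pd.read_csv("numbulwar_10min.csv", parse_dates=["timestamp"], index_col="timestamp")
# Expected columns: indoor_temp_C, indoor_rh_pct, external_temp_C, energy_kwh_per_hour

# Restrict to the occupied period and to day-time hours (06:00-18:00), as in the paper's analysis.
occupied = df.loc["2017-10-01":"2018-03-31"]
daytime = occupied.between_time("06:00", "18:00")

# Share of day-time readings outside the comfort bands discussed in the text
# (ASHRAE 55 temperature band 21.0-24.9 degC; humidity comfort band 35-65 %RH).
too_warm = (daytime["indoor_temp_C"] > 24.9).mean() * 100
too_humid = (daytime["indoor_rh_pct"] > 65).mean() * 100
print(f"Day-time readings above 24.9 degC: {too_warm:.1f}%")
print(f"Day-time readings above 65 %RH:   {too_humid:.1f}%")

# Simple screening of the energy-vs-external-temperature relationship while HVAC is assumed
# to be running (consumption above the ~8 kWh/h baseline noted in the text).
hvac_on = daytime[daytime["energy_kwh_per_hour"] > 8]
slope, intercept = np.polyfit(hvac_on["external_temp_C"], hvac_on["energy_kwh_per_hour"], 1)
print(f"~{slope:.2f} additional kWh per hour for each 1 degC rise in external temperature")
```

For comparison, the text above estimates roughly 2 kWh per hour of additional consumption for every 3 °C rise in external temperature during occupied day-time hours; a screening fit of this kind would also expose the large scatter attributed to HVAC inefficiencies and occupant behaviour.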
Recent Advances in Catalytic Hydrosilylations: Developments beyond Traditional Platinum Catalysts
Abstract
Hydrosilylation reactions, which allow the addition of Si−H to C=C/C≡C bonds, are typically catalyzed by homogeneous noble metal catalysts (Pt, Rh, Ir, and Ru). Although excellent activity and selectivity can be obtained, the price, purification, and metal residues of these precious catalysts are problems in the silicone industry. Thus, a strong interest in more sustainable catalysts and for more economic processes exists. In this respect, recently disclosed hydrosilylations using catalysts based on earth-abundant transition metals, for example, Fe, Co, Ni, and Mn, and heterogeneous catalysts (supported nanoparticles and single-atom sites) are noteworthy. This minireview describes the recent advances in this field.
Introduction
Considering silicon as the second most abundant element on earth and the considerable number of organosilicon compounds used in our daily life, organosilicon chemistry is of significant importance for the development of more sustainable and greener chemistry. In general, organosilicon compounds are characterized by their stable and inert carbon-silicon bonds. [1][2][3] In particular, organosilanes, organosilyl halides, and the corresponding ethers are readily available and offer straightforward possibilities for versatile functionalizations. Compared to their ordinary pure carbon analogues, organosilicons have complementary physical properties, which make them attractive for a variety of industrial applications. Hence, organosilicon compounds are widely found in adhesives and coatings and used as oils, rubbers, and resins. [4] Among the different methods available for the creation of C−Si bonds, catalytic hydrosilylations allow the straightforward addition of silanes (Si−H) to multiple bonds, for example, olefins and alkynes. [5][6][7][8][9][10][11][12][13][14][15] These reactions are not only used on the laboratory scale, but have also been implemented in the chemical industry for the production of functional organosilicon compounds. In fact, they have proven to be one of the most efficient reactions in the silicone industry. Theoretically, hydrosilylations are 100% atom-economic, generating no other products or wastes. Performing hydrosilylations with functionalized silanes (Figure 1a) offers direct prospects to modify the properties of polymers or inorganic materials. For example, poly(dimethylsiloxane), an important kind of silicone rubber, can be functionalized by crosslinking with another vinyl silicon reagent (Figure 1b). [16] Moreover, organosilicon compounds offer versatile properties as bonding or bridging agents in the preparation of composites from organic polymers and inorganic materials, for example, glass, minerals, and metal oxides. [17] Thus, apart from their importance for applications in the silicone industry, hydrosilylations are increasingly attractive for basic material sciences. [9,18-25] Since the first hydrosilylation reaction appeared in the academic literature in 1947, platinum-based catalysts have dominated this area. [26] Originally, the introduction of Speier's catalyst (H2PtCl6) was a major breakthrough. Later on, Karstedt made an important contribution to this area by developing a platinum(0) complex containing vinyl-siloxane ligands.
[27] Today,t his lipophilic complex represents an efficient benchmark catalyst in industrial hydrosilylation processes.D espite the efficiency of this system, obviously there are certain disadvantages of homogeneous Pt-based catalysts in some applications.F or example,p latinum complexes can be easily trapped in the product and it is difficult to recover them due to the viscous properties of the resulting Hydrosilylation reactions,whichallowthe addition of Si À HtoC = C/ C Cbonds,a re typically catalyzed by homogeneous noble metal catalysts (Pt, Rh, Ir,and Ru). Although excellent activity and selectivity can be obtained, the price,p urification, and metal residues of these precious catalysts are problems in the silicone industry.T hus, astrong interest in more sustainable catalysts and for more economic processes exists.Int his respect, recently disclosed hydrosilylations using catalysts based on earth-abundant transition metals,for example, Fe,C o, Ni, and Mn, and heterogeneous catalysts (supported nanoparticles and single-atom sites) are noteworthy.T his minireview describes the recent advances in this field. products. [28][29][30][31][32] Notably,i twas estimated that consumption of platinum accounts for up to 30 %o ft he cost of silicones. [33] Hence,f rom an industrial point of view,t he high price of platinum strongly motivates researchers to develop recyclable and less expensive catalysts to reduce the precious metal consumption. Thed evelopment and importance of hydrosilylation reactions has been discussed intensively in the scientific literature,a nd an umber of comprehensive reviews,a rticles, and books were published mainly before 2015. [3,4,34] Since then, av ariety of robust homogeneous catalysts with welldefined ligands have been developed. [35] More recently,a lso heterogeneous catalysts evolved for this process. [36] Interestingly,heterogeneous single-atom catalysts (SACs), which are considered to combine the advantages of molecular-defined and heterogeneous catalysis,w ere disclosed and displayed comparable activities and selectivities to their homogeneous counterparts. [37] This minireview covers the most important catalyst developments from the past five years ( Figure 2). To make it easy for the reader,i ti so rganized according to catalyst developments into two sections:a )the use homogeneous non-noble metal complexes specifically Co,F e, and Ni derivatives;b )the usage of recyclable heterogeneous catalysts focusing on supported noble/non-noble nanoparticles (NP) and single-atom catalysts.F inally,w ew ill give indications for future progress in this field. The Development of Homogeneous Non-noble Metal Catalysts Ther eplacement of traditional platinum-based catalysts in alkene hydrosilylations by more sustainable and economic metals is al ong-standing goal of the silicone industry. [31,[38][39][40][41] Following this goal, several molecularly defined catalysts based on earth-abundant metals (Fe, Co,Ni, et. al.) have been developed in the past decade. [5-7, 9, 13, 42-46] Moreover,t he regioselectivity can be controlled by modification of ligands leading to anti-Markovnikov or Markovnikov selective products.T he resulting silane products are widely used as silicon fluids and silicon curing agents.F urthermore,t he anti-Markovnikov silanes are also important moieties for life science applications. [47][48][49] Catalysts with Monodentate Ligands In 2017, Deng et al. 
reported cobalt N-heterocyclic carbene (NHC)-catalyzed anti-Markovnikov hydrosilylations of aliphatic alkenes with tertiary silanes (Scheme 1). [50] Specifically,they developed acobalt(II) amide/NHC catalyst system, which facilitated the selective hydrosilylation of monosubstituted aliphatic alkenes with HSi(OEt) 3 to linear products in moderate to very high yields (42-98 %). However, when highly reactive Ph 3 SiH was used with 1-octene,alower yield of only 28 %w as observed. According to mechanistic studies Co-silyl intermediates are involved in the catalytic cycle with cobalt(I) species proposed as the active intermediates. Here,m echanistic studies suggested that cobalt(I) silyl species are the key active species for the hydrosilylation process.Onthe basis of their results,apreliminary mechanism of this transformation was proposed (Scheme 3): Initial reaction of cobalt(I)-NHC chloride with Ph 2 SiH 2 gives acobalt(I) hydride intermediate,which interacts with Ph 2 SiH 2 to form the corresponding cobalt(I) silyl intermediate and H 2 . Subsequent reaction of the cobalt(I) silyl intermediate with an alkene via migratory insertion forms ac obalt alkyl complex that further reacts with Ph 2 SiH 2 to give the desired hydrosilylation products and regenerates the active cobalt(I) silyl species for the next catalytic cycle. In 2016, Petit and colleagues described the HCo(PMe 3 ) 4catalyzed highly regio-and stereoselective hydrosilylation of internal alkynes (Scheme 4). [52] Ther eaction was applied to av ariety of hydrosilanes and symmetrical as well as unsymmetrical alkynes,giving in many cases asingle hydrosilylation isomer in varying yields (19-96 %). Thea uthors suggested that the regio-and stereocontrol of the reaction is predominantly governed by steric features of the substrates. According to mechanistic studies,dihydridocobalt species are most likely involved in this catalytic process (Scheme 5). Oxidative addition of the silane to aCo I hydride center forms Scheme 1. Hydrosilylation of aliphatic alkenes with tertiary silanes using acobalt-NHC catalyst. [50] Scheme 2. Catalytic behavioro fcobalt(I)-NHCcomplexesinthe hydrosilylation of alkenes with diphenylsilane. [51] Scheme 3. Proposed catalytic cycle for the Co-NHC-catalyzed hydrosilylation. [51] the dihydridocobalt(III) intermediate,w hich undergoes alkyne insertion into one of the CoÀHbonds after coordination of the alkyne.D irect reductive elimination releases the vinylsilane as the major product with the observed regioselectivity and regenerates the catalytically active hydridocobalt(I) species. As shown in Scheme 7, Fe-1 and Fe-2 showed very high Markovnikov selectivity (! 98 %) in the hydrosilylation of terminal styrenes,1 -substituted, and 1,1-disubstituted buta-1,3-dienes and led to the corresponding products in high yields (88-95 %). However,i nt he presence of Fe-3,a nti-Markovnikov products were obtained in the hydrosilylation of 1-alkyl ethylene derivatives in 72-98 %y ields.K inetic isotope effect experiments and density functional theory (DFT) calculations suggest direct Si migration as the ratedetermining step.U nfortunately,t hese iron catalyst systems seem to be limited to hydrosilylations with PhSiH 3 and required Grignard reagents (EtMgBr) for catalyst activation. Scheme 5. Proposed mechanism for HCo(PMe 3 ) 4 -catalyzed highly regio-and stereoselective hydrosilylation of internal alkynes. [52] Scheme 6. Ligand-and silane-dependent cobalt-catalyzed regiodivergent hydrosilylation of vinylarenes and aliphatic alkenes. 
[53] acac = acetylacetonate. [54,55] Scheme 7. Iron-catalyzed hydrosilylation of alkenes with phenylsilane. [56] Angewandte Chemie Minireviews 554 www.angewandte.org Compared to the hydrosilylations of simple alkenes with primary or secondary silanes (vide supra), hydrosilylation of alkenes or vinylsilanes with tertiary alkoxy-or siloxyhydrosilanes for silicone synthesis are considered more challenging from an industrial perspective.T he resulting silicones represent industrially relevant fluids and curing materials.H owever,c ompetitive hydrogenation of alkenes and alkynes can easily occur during the hydrosilylation process when the ligands are changed slightly.Inthis respect the work of Chirik and co-workers is notable.Here anickel(II) bis(carboxylate) catalyst was used, which displayed high activity in the hydrosilylation of alkenes with av ariety of industrially relevant tertiary alkoxy-and siloxy-substituted silanes.Under optimal conditions,s elective anti-Markovnikov hydrosilylation of aliphatic alkenes with commercially relevant silanes and siloxanes was achieved in ap ractical manner (Scheme 8). [57] In 2017, Lee and co-workers synthesized more than 25 cobalt-(aminomethyl)pyridinec omplexes and explored their catalytic performance for anti-Markovnikov hydrosilylations (Scheme 9). [58] With Co-3 as the optimal catalyst system, various alkoxy(vinyl)silanes including mono-, di-, and triethoxy(vinyl)silanes and their corresponding methoxy derivatives reacted with alkoxy-or siloxyhydrosilanes to afford the desired anti-Markovnikov products in 70-99 %yield with > 98 %anti-Markovnikov selectivity. Complementary to that work, Jin and co-workers developed an efficient cobalt-catalyzed Markovnikov-selective hydrosilylation of alkynes using bidentate CImPy ligands such as L1 in 2019 (Scheme 13). [63] Theh ydrosilylation of aromatic and aliphatic alkynes with primary and secondary silanes proceeded well to form the corresponding products in 36-98 %y ield with moderate to high regioselectivities (a/b 63:37 to 99:1). Thecomparably high catalytic activity enabled by the CImPy ligand is ascribed to the high stereoelectronic tunability and rigid environment, which suppresses the deactivation of the catalyst. Recently,Z hu and co-workers described also an Fecatalyzed dihydrosilylation of aliphatic terminal alkynes and primary silanes for the synthesis of geminal bis(silanes) (Scheme 14). [64] Reactions of PhSiH 3 and n-C 12 H 25 SiH 3 with ar ange of aliphatic terminal alkynes generated the corre-sponding geminal bis(silanes) as the sole hydrosilylation products in 85-95 %yield. This method allowed the efficient synthesis of previously unreported geminal bis(silanes) with secondary silyl groups.Mechanistic studies demonstrated that the reaction proceeds via two iron-catalyzed hydrosilylation reactions,t he first generating b-(E)-vinylsilanes and the second producing geminal bis(silanes). Based on their work on alkyne hydrosilylation, Ge and coworkers also disclosed ah ighly regio-and stereoselective hydrosilylation of allenes by employing ab ench-stable catalyst system consisting of Co(acac) 2 and binap or xantphos ligands (Scheme 15). 
[65] Allyl-or vinylsilanes can be easily prepared by selective hydrosilylation of allenes with hydrosilanes in the presence of transition metal catalysts.The major difficulty in these reactions is to control the regio-and stereoselectivity preventing the formation of different vinyland allylsilane products.H ere,avariety of mono-and disubstituted terminal allenes reacted with primary and secondary hydrosilanes to produce the desired disubstituted (Z)-allylsilanes in high yields (57-95 %) with excellent stereoselectivity (Z:E = 99:1). Unfortunately,hydrosilylation of allenes did not occur when tertiary hydrosilanes were used. Ther egio-and stereocontrol of the reaction is explained by the steric repulsion between the substituent on the allyl group and the ligand of the cobalt catalyst. Catalysts with Pincer Ligands In the past two decades,c atalysts with so-called pincer ligands have been extensively exploited for all kind of catalytic reactions,i ncluding hydrosilylations.I ng eneral, the corresponding complexes contain tridentate ligands,w hich bind to the metal center with three adjacent coplanar sites in am eridional configuration. Often pincer ligands have ac entral, s-donating moiety that contains two side donor groups, typically either amino-or phosphinomethyl groups in orthoposition. Advantages of pincer complexes are their high thermal stability and well-defined reactivity with remaining coordination sites.I nt his section, recent studies on hydrosilylation using pincer catalysts will be discussed. As an early example of alkene hydrosilylations,C hirik and co-workers used an ickel complex bearing ab is-(amino)amide NNN-pincer ligand. Their work was inspired by arelated iron pincer catalyst, which, however, tolerated no carbonyl groups in the reaction. [67] Hydrosilylation of 1octene with Ph 2 SiH 2 using 0.025 mol %o fNi-1 achieved a9 8% conversion in only 3m inutes,t hus resulting in an impressive TOFof8 3000 h À1 . Ni-1 was capable of efficiently converting differently substituted alkenes and cyclic alkenes with up to 94 %yield. Interestingly,selective hydrosilylations of alkenes containing ketones and aldehydes groups were successfully conducted with yields from 71-74 % (Scheme 17). In these reactions Ni hydride species are assumed to be potential intermediates. [6] In general, low-valent Fe pincer complexes are known to be more unstable and difficult than iron(II/III) complexes. With this in mind, Thomas and co-workers prepared as eries of Fe II NNN-pincer complexes bearing Cl, Br, and OTf as counterions.A ctivation of such higher valent complexes was possible using tertiary amines at room temperature.T hus,i n the presence of (iPr) 2 NEt the Fe-pincer catalyst bearing OTf exhibited the best activity for the hydrosilylation of 1-octene with PhSiH 3 ,a chieving 95 %y ield for the anti-Markovnikov product. This behavior explained by the fact that the ion/ counterion bond strength of Fe-halides is stronger than that of Fe-OTf.W ith exception of nitro and nitrile groups,ab road scope of substituents was tolerated in the presence of this catalyst system, including carbonyl groups. [69] An interesting one-pot cascade-like approach for alkane dehydrogenation/isomerization/hydrosilylation yielding terminal silanes was developed using ad ual iridium/iron pincer catalytic system. Here,initially 1-octane is dehydrogenated in the presence of 1mol %i ridium catalyst and 1.2 mol %o f NaO t Bu at 200 8 8C. 
Fort he isomerization/hydrosilylation reactions,a nti-Markovnikov products were obtained (67 % yield) in the presence of 10 mol %ofF e-NNN pincer catalyst and 20 mol %ofNaHBEt 3 (Scheme 18). Control experiments revealed that the iridium catalyst played no role in the tandem isomerization/hydrosilylation reaction. Notably,t he ironcatalyzed alkene hydrosilylation occurred with ah igh reaction rate,t hus inhibiting reversible isomerization of the alkene. [68] Theapplication of pincer ligands with less hindered imine groups favored hydrosilylation instead of dehydrogenative hydrosilylation of olefins. [7] In this context, Chirik and coworkers developed aC o-NNN pincer catalyst derived from Co-5 a for hydrosilylations.T he catalyst bearing aC H 2 SiMe 3 group showed high activity for 1-octene hydrosilylation with HSi(OEt) 3 at room temperature.U nfortunately,t his catalyst is not bench-stable.Hence,more stable complexes Co-5 b and Co-6 b were prepared as catalysts by ligand modification. These stable catalysts exhibited excellent activity for hydrosilylations with alkoxysilanes.A sa ne xample,t he hydrosilylation of sensitive allyl glycidyl ether was successfully conducted in the presence of Co-6 b on a10gscale with 98 % yield for the trialkoxysilane product, which finds widespread applications in industry (Scheme 19). [70] Apart from olefins and alkynes,the hydrosilylation of 1,3and 1,4-dienes was studied in the presence of Co-5.T hus,a n anti-Markovnikov hydrosilylation of (E)-1,3-dodecadiene with phenylsilane occurred selectively.T he following activity trend was observed for substituents on the pincer framework: 2,4,6-tri-Me < 2,6-di-Et < 2,6-di-iPr. With this catalytic system, primary and secondary silanes (PhSiH 3 ,PhMeSiH 2 ,and Ph 2 SiH 2 )w ere successfully converted into the desired anti-Markovnikov products in up to 92 %y ield, while tertiary silanes showed no activity for hydrosilylation, but instead led to hydrogenation of the corresponding dienes. [72] Recently,m anganese complexes including pincer ligands have become relatively popular in homogeneous catalysis. [73] Nevertheless,t he use of such complexes for hydrosilylations of alkenes is still quite rare.Asanexception several Mn-NNN pincer complexes (Mn-2 to Mn-4)w ere used for hydrosilylation of terminal alkenes.T hese catalysts displayed excellent regioselectivity for ab road scope of alkenes and silanes,y ielding the corresponding products up to 95 %w ith > 99 %r egioselectivity in the presence of NaO t Bu. Apparently,the increased steric bulk of the ligand is favorable for this transformation. Hence, Mn-4 led to higher hydrosilylation yields,which was also proven for the hydrosilylation of 1octene with HSi(OEt) 3 in ag ram-scale reaction (Scheme 20). [71] In 2019, as eries of quinoline-derived Fe-PNN pincer complexes (Fe-5 to Fe-7)were prepared, aiming for abenchstable,a ctivator-a nd solvent-free non-noble metal catalyst system (Scheme 21). In the presence of stoichiometric amounts of LiCH 2 SiMe 3 and KOPiv, Fe-5 can be transformed to Fe-6 and Fe-7, which are active catalysts for the hydrosilylation under base-free conditions.U sing 1-octene and PhSiH 3 as starting materials, Fe-6 and Fe-7 displayed excellent catalytic performance affording TONs of 480 000 and 100 000, respectively.I nterestingly, Fe-7 proved to be stable under air exposure,w hile Fe-6 immediately decomposed upon air exposition. 
It was confirmed that Fe-H species, which are believed to be the active intermediates in the catalytic cycle,w ere produced in the presence of Fe-6 and PhSiH 3 . [74] While pyridyl-derived NNN and quinoline-derived PNN pincer complexes favored the formation of the hydrosilylation products from aliphatic olefins (vide supra), [75,76] Lu and coworkers demonstrated aM arkovnikov (enantio)selective hydrosilylation of terminal alkenes using different Fe-PNN pincer catalysts (Fe-8). In their elegant studies,t hey investigated as eries of pyridine-based ligands with different oxazoline and imino substituents.A ni ncreased steric bulk of the imino group not only increased the selectivity of the hydrosilylation, but also improved the regio-and enantioselectivity.C omparing diverse oxazoline moieties,t he iPrsubstituted one exhibited better activity than that with Me and tBu groups.U nder optimal conditions,a lkenes bearing bioactive molecules,such as naproxen, ibuprofen, and desloratadine,gave the desired products in ahighly enantioselective manner in 91 %, 97 %and 88 %yield, respectively.Moreover, an experiment with 1,5-hexadiene with 2.4 equiv of PhSiH 3 provided the disilylated product in 90 %y ield with 96/4 branched/linear ratio and 97 % ee (Scheme 22). With respect to asymmetric catalysis this development is noteworthy because it allows highly enantioselective functionalization of plain aliphatic olefins without any additional coordination sites. [77] Compared to most known pincer ligands,p hosphiniteiminopyridine ligands can be easily decomposed due to P À O bond cleavage. [9] Obviously,related phosphino-iminopyridine (PNN) ligands are more stable and have been explored with respect to their steric hindrance (Scheme 23). More specifically, Fe-9 and Co-7 pincer catalysts were prepared for regioselective Markovnikov alkene hydrosilylations.Interestingly,i nt he hydrosilylation of 1-octene with PhSiH 3 ,t he Markovnikov product was mainly obtained using Fe-9,while the anti-Markovnikov was the major one in the presence of Scheme 20. Manganese-catalyzed alkene hydrosilylation. [71] Scheme 21. Iron pincer-catalyzed 1-octene hydrosilylation. [74] Scheme 22. Iron-catalyzed enantioselective Markovnikovh ydrosilylation of alkenes. [77] Scheme 23. Cobalt and iron PNN pincer catalysts used in alkene hydrosilylations. [78] Angewandte Chemie Minireviews 558 www.angewandte.org Co-7. Thed ifferent product formation with the use of Co-7 was ascribed to adifferent silyl migration mechanism involving aC o I -silyl intermediate.L igand screening experiments revealed increased activity with the bulkier ligand for Fe-9, whereas the less bulky ligand was more active for Co-7. [78] Thep incer catalysts also exhibited high activities in the selective hydrosilylation of alkynes. [79,80] In 2016, Lu and coworkers described an improved sequential hydrosilylation/ asymmetric hydrogenation of terminal phenylalkynes to obtain chiral silanes using chiral Co-NNN pincer catalysts. After detailed investigations of the steric bulk of substitutents on the pincer ligand, Co-8 was identified as the optimal complex, which exhibited excellent catalytic performance for hydrosilylations of phenylacetylenes with Ph 2 SiH 2 with up to 91 %y ield and > 99 % ee. Mechanistic studies revealed that cobalt-hydride species are intermediates in this hydrosilylation reaction. [81] Notably,t he hydrosilylations of phenylacetylenes proceeded extremely fast. 
In fact, a reaction with Ph2SiH2 using 1 mol% of Co-8 was completed in 5 seconds, which corresponds formally to a TOF of 65 000 h⁻¹ (Scheme 24a). When the hydrosilylation of 4-vinylphenylacetylene was performed, a chemoselective reaction at the alkyne moiety was observed, although in lower yield (Scheme 24b). [81] Later on, sequential double hydrosilylation of aliphatic alkynes to yield highly enantioenriched gem-bis(silyl)alkanes was achieved by the same group. Both experimental results and DFT calculations were analyzed to understand the reaction mechanism. It was shown that the highly enantioselective synthesis of gem-bis(silyl)alkanes is a result of the sequential asymmetric double 1,1-hydrosilylation of aliphatic alkynes in the presence of CoBr2·Xantphos and CoBr2·OIP (OIP = oxazoline-iminopyridine). [82] Moreover, asymmetric hydrosilylation of unsymmetric alkynes with dihydrosilanes producing silicon-stereogenic vinylhydrosilanes was realized with high regio- and enantioselectivity by Co-NNN catalysts. More specifically, Huang and co-workers reported the Markovnikov hydrosilylation of terminal alkynes with diphenylsilane in the presence of pyridine-bis(oxazoline) Co catalysts. Later on, the same group described alkyne hydrosilylations with prochiral dihydrosilanes in the presence of the same kind of cobalt/pyridine-bis(oxazoline) complex, producing Markovnikov silanes with up to 99% selectivity. [83,84] In 2017, a series of cobalt pyridine-2,6-diimine complexes were prepared and tested for the Z-selective hydrosilylation of terminal alkynes with PhSiH3. In this work the less sterically hindered ligands favored the formation of α-vinylsilanes, while (Z)-β-vinylsilanes were preferred using MesPDI and iPrPDI. This behavior was attributed to the accessibility of the respective cobalt center. A broad range of alkynes including phenylacetylene and aliphatic alkynes were successfully converted to the desired products with Z/E ratios higher than 91:9 (Scheme 25). However, highly reactive nitro and aldehyde substituents were not tolerated. Moreover, aliphatic alcohol and carboxylic acid moieties inhibited the reaction. [85]

Recyclable Heterogeneous Catalysts

In classical liquid-phase reactions, heterogeneous catalysts are more easily recycled than homogeneous ones. Although this advantage also holds for many catalytic hydrosilylations, [31,86] in a number of (industrial) cases the corresponding products are either highly viscous or even polymers. In such cases, the practical benefit of a heterogeneous material is less obvious. Here, instead, the avoidance of (costly) ligands might be the main driver in the search for heterogeneous catalysts. For this reason, an increasing number of heterogeneous catalysts have been explored in recent years. In this section, heterogeneous catalysts based on supported nanoparticles, including so-called supported single-atom catalysts, fabricated by different processes, and their applications in hydrosilylation will be emphasized.

Pt Nanoparticle Catalysts

For a long time, the molecularly defined Karstedt complex has been regarded as the state-of-the-art catalyst for hydrosilylation reactions. Thus, several attempts were also made to immobilize this complex or its derivatives on different supports. For example, a silica-supported Karstedt complex, which initially displayed high activity, was first prepared in 2003.
However,f or this material poor catalytic performance was observed after several reuses due to its low stability.S ince then, other heterogeneous Pt catalysts have been broadly investigated. [87] In 2017, am esostructured silica framework was used as as upport where Pt nanoparticles (NPs) were embedded into walls of the silica matrix. Ther esulting material exhibited excellent catalyst turnover numbers of TON = 10 5 for the hydrosilylation of 1-octene with polymethylhydrosiloxane (PMHS). Due to the physical trapping of the Pt NPs in the silica framework, no Pt leaching was observed after recycling. [88] Pt 0 NPs were also embedded in am odified silica xerogel (SiliaCatPt(0)), which is obtained by sol-gel polycondensation of organosilanes.T he resulting catalyst has au niform spherical morphology and proved to be highly active and selective for ab road scope of olefins.H owever,t he activity deceased sharply to 65 %a fter the fourth cycle,p robably caused by the slight size increase of Pt NPs. [89] To obtain effective catalyst separation from liquid silicone products, magnetic silica particles (Fe 3 O 4 @SiO 2 )were used as support. After the material was modified by addition of ethylenediaminetetraacetic acid (EDTA) or diethylenetriaminepentaacetic acid (DTPA) and immobilization of Pt, the resulting catalysts displayed good activity even for the isomerizationhydrosilylation of internal alkenes. [90] Recently,agraphene-supported platinum catalyst was synthesized using electrostatic adsorption techniques with solventless microwave irradiation. Using this special method, additional defects or holes in graphene are formed and simultaneously small Pt nanoparticles are stabilized with an average diameter of 6.8 nm. Ther esulting catalyst material displayed as uperior efficiencyi nt he hydrosilylation of 1,1,1,3,5,5,5-heptamethyltrisiloxane (MDM) and 1-octene, leading to aT ON of 9.4 10 6 which was tenfold higher than that obtained by the parent Karstedt catalyst (0.9 10 6 ). [36] Pt Single-Atom Catalysts (Pt SACs) Although several supported Pt nanoparticles were developed for catalytic hydrosilylations (vide supra), in some cases poor recyclability caused by leaching was observed. This was attributed to the weak binding of Pt particles to the support. Complementary to materials based on supported nanoparticles,single-atom catalysis provides anew concept to prepare materials with isolated metal centers,which are stabilized by neighboring sites of the support. In 2017, aP ts ingle-atom catalyst was synthesized by impregnation of platinum salts on aluminum oxide nanorods.T he resulting Pt-SACwas applied for the selective hydrosilylation of all kinds of terminal olefins and exhibited excellent activity comparable to the original Karstedt system (Scheme 26). Interestingly,t his Pt-SAC displayed also high stability,which is explained by the strong binding of the individual Pt atoms to their neighboring oxygen atoms. [37] Later on, superparamagnetic Fe 3 O 4 -SiO 2 core-shell nanoparticles (NPs) were used as the support in Pt singleatom catalysts.T he material was easily separated from highviscosity products by applying am agnetic field (Scheme 27). Notably,the Pt loading decreased from 1.5 %to1.26 %after four cycles. 
[91] Furthermore,apartially charged Pt single-atom catalyst was fabricated on the surface of anatase TiO 2 (Pt 1 d+ / TiO 2 )b ya ne lectrostatic-induction ion exchange.D FT calculations explained the excellent catalytic performance of this material by the intrinsic nature of partially charged Pt(d +)a toms on TiO 2 .T he authors also concluded that the lower oxidation state of Pt is favorable for the desired transformation compared to platinum in higher oxidation states (II or IV). [92] In order to prevent metal agglomeration in SACs,t he metal loading is often (very) low.R ecently,a ni nteresting material was disclosed which contains isolated Pt single atoms in adense distribution. It was successfully synthesized by the NaCO 3 -assisted one-pot pyrolysis of an EDTA-Pt complex on N-doped graphene (Pt-ISA/NG in Figure 4). Here,t he Pt centers are coordinated to Nspecies instead of O. ThePt-ISA/ NG exhibited microstructure and morphology features typical for atomically thin 2D graphene-like analogues,w ith aspecific surface area of 1892 m 2 g À1 and 5.3 wt %Ptloading. Pt-ISA/NG displayed high selectivity,a ctivity,a nd stability for anti-Markovnikov hydrosilylation of different terminal alkenes with silanes under mild conditions. [93] In the case of Pt NPs and SACs,the leaching of the metal is an important aspect, which has to be avoided. Although the average loss of Pt was calculated to be several ppm in each round of reaction, the involvement of the leached Pt could not be excluded because the hydrosilylation easily initiated even Scheme 26. Pt SACs used for the selective hydrosilylation of various alkenes. [37] Scheme 27. Pt-SAC for hydrosilylation. [91] Angewandte Chemie Minireviews 560 www.angewandte.org by ap pm amounts of Pt. Moreover,t he leaching problem leads to difficulties in mechanistic studies and contamination in the final silicon products. Other Precious Metal Catalysts Apart from platinum, other precious metal catalysts based on Au,R h, and Ru have been reported for hydrosilylation reactions of olefins and related substrates.F or example, highly regioselective alkene hydrosilylation is possible in the presence of gold nanoparticles on TiO 2 or Al 2 O 3 .Inthis case, ionic gold(I) species at the interface between the nanoparticles and the support were suggested as the active sites. [86] Recently,R h I complexes were co-immobilized with tertiary amines on the surface of SiO 2 (referred to as SiO 2 /Rh-NEt 2 ). SiO 2 /Rh-NEt 2 exhibited excellent turnover numbers for the hydrosilylation of olefins with aw ide range of substrates.T he good catalytic performance was ascribed to the electron donation from amine groups to the Rh complex, which promotes both the oxidative addition and insertion steps during the hydrosilylation cycle. [94] Despite significant progress with heterogeneous catalysts using supported noble metals,t he hydrosilylation of internal alkenes is still challenging.I nt his context, Prieto and coworkers demonstrated an interesting isomerization-hydrosilylation of internal alkenes.V ery recently,they successfully developed aprocess involving tandem catalysis by Rh and Ru single-atom catalysts on CeO 2 .W hent hese two SACs were combined in as ingle reaction, as ynergetic effect was observed and high selectivity was obtained for the hydrosilylation of different internal alkenes with Et 3 SiH. 
DFT calculations ascribed the observed selectivity to differences in the binding strength of the alkene substrate where the single Ru atoms bind more strongly than the Rh counterparts. [95] 3.2. Non-Precious Metal Catalysts Nickel-Based Heterogeneous Catalysts In 2016, Hu and co-workers developed the first nickel nanoparticle catalyst, which is able to catalyze alkene hydrosilylation with tertiary silanes.I nt his case,n ickel nanoparticles with an average size of 3.5 nm were formed in situ using ap articular nickel alkoxide potassium salt (Ni-(OtBu) 2 ·x KCl). Interestingly,t he resulting Ni nanoparticles displayed high activity not only in the hydrosilylation of terminal alkenes with high anti-Markovnikov selectivity,b ut also in the tandem isomerization-hydrosilylation of cis and trans internal alkenes in high yields (Scheme 28). Consequently,this non-precious metal catalyst is able to synthesize as ingle terminal alkyl silane from am ixture of different internal and terminal olefin isomers. [96] Later on, other isolated Ni nanoparticles stabilized by n-octylsilane were prepared by decomposition of Ni(COD) 2 (COD = cycloocta-1,5-diene) in toluene under 4bar of H 2 .T he resulting Ni colloid (Ni 3 Si 2 )consists of very small nanoparticles 1.2 nm in diameter. However,w ith this system only moderate conversion (70 %) and low selectivity (30 %) were achieved for the model hydrosilylation of triethoxyvinylsilane with triethoxysilane. [97] In 2019, an ovel supported nickel catalyst was prepared whereby isolated nickel centers were anchored on am etalorganic framework (MOF) carrier.T he single metal sites are considered to be stabilized by the hydroxyl groups from the MOF forming the active centers.U sing this catalyst, the hydrosilylation of n-octene and diphenylsilane was carried out on a1 50 mmol scale under mild conditions with high conversion and selectivity. [98] Furthermore,isolated Ni centers stabilized by heterogeneous ligands have been applied for hydrosilylation reactions very recently.I nt his latter case, ap orous organic polymer containing Xantphos (POP-Xantphos) moieties was used to stabilize the isolated nickel centers.T he prepared Ni-POP-Xantphos catalyst demonstrated high regio-and stereoselectivity in the hydrosilylation of alkynes.T he obtained selectivity was considered to be controlled by the microporous structure of POP-Xantphos. [99] Figure 4. Characterization of the Pt-ISA/NG catalyst:a)T ypical transmission electron microscopy( TEM) image. b) High-resolution TEM (HRTEM) image. c) Energy-dispersive X-ray (EDX) mapping.d ,e) Aberration-corrected high-angle annular dark-field scanning transmission electron microscopy( AC-HAADFSTEM) image and corresponding enlarged view.R eproducedwith permission. [93] Copyright2 018, American Chemical Society. Cobalt-Based Heterogeneous Catalysts Comparable to the Ni-POP-Xantphos catalyst (vide supra), isolated Co sites were coordinated with aP OP-PPh 3 ligand (Co-POP-PPh 3 ). Theresultant Co-POP-PPh 3 material catalyzed the hydrosilylation of alkynes with PhSiH 3 with high regio-and stereoselectivity.Moreover,the reusability of Co-POP-PPh 3 was tested in ac ontinuous-flow system. Even after several rounds of recycling only little loss of activity and selectivity was observed. 
[100] Recently,a lso Co/TiO 2 was synthesized where the cobalt ions were doped onto the TiO 2 surface.A fter hydrogen treatment, CoTiO 3 species were formed, which exhibited excellent catalytic performance for the hydrosilylation of various alkenes under neat conditions. Importantly,t he CoTiO 3 species were not leached after recycling due to the strong interaction between Co and TiO 2 , leading to high stability and reusability. [101] Iron-Based Heterogeneous Catalysts In 2016, Lin and co-workers reported an iron-based catalyst for several hydrosilylation reactions of terminal alkenes using at wo-dimensional (2D) metal-organic layer (MOL) carrier.T he synthesized MOL was composed of [Hf 6 O 4 (OH) 4 (HCO 2 ) 6 ]a ss econdary building units (SBUs) and benzene-1,3,5-tribenzoate (BTB) as bridging ligands. After connecting the MOL with 4'-(4-benzoate)-(2,2',2''terpyridine)-5,5''-dicarboxylate (TPY), the resulting MOL-TPY was used to immobilize iron centers,affording single-site solid catalysts (Fe-MOL-TPY, Figure 5). Avariety of terminal alkenes were converted to the corresponding alkyl silanes with PhSiH 3 in high yields.F e-MOL-TPY is free from diffusional constraints,l eading to high activity and reusability. [102] In addition to Fe-SACs,i ron oxide nanoparticles (Fe 2 O 3 ) were synthesized in ap ractical way using iron(III) acetylacetonate as the precursor and N,N-dimethylformamide (DMF) as both the reducing and protecting agent. The DMF-stabilized Fe 2 O 3 nanoparticles were monodispersed in the solvent and showed high catalytic activity for the hydrosilylation of alkenes in the absence of any additives.F urthermore,t he colloidal catalyst can be recycled by simple extraction with ahexane/DMF system for fifth run. [103] Bimetallic Heterogeneous Catalysts Bimetallic nanoparticles can exhibit exclusive catalytic performances,d istinct from those of monometallic NPS,d ue to their unique electronic states and structures.A sa n example,L ia nd co-workers developed ab imetallic catalyst by immobilizing Pt 1 Ni 1 nanoparticles on the surface of nitrogen-doped carbon (NC). TheP t 1 Ni 1 /NC-1000 catalyst, which was obtained by pyrolysis of metal-organic frameworks at 1000 8 8C, displayed the highest catalytic performance for the hydrosilylation benchmark reaction of 1-octene with HSi-(OEt) 3 .C haracterizations revealed ah igh graphitization degree,which should favor astronger charge transfer between the NC support and Pt 1 Ni 1 ,f orming more positively charged Pt centers.T hese charged Pt species possibly lead to the higher catalytic activity. [105] In 2017, Cai and co-workers developed an efficient and recyclable Pd 1 Cu 2 bimetallic catalyst. TheP d 1 Cu 2 NPs were supported on SiO 2 and exhibited superior activity and selectivity toward the hydrosilylation of internal and terminal alkynes.T he catalytic performance is improved by the ultrasmall size (2.8 nm) and the high dispersion of the Pd-Cu nanoparticles as well as the enrichment of Pd on the catalyst surface ( Figure 6). [104] Finally,heterogeneous catalysts using bimetallic materials have been used for hydrosilylations of alkynes.Here,Shishido and co-workers reported the use of Pd-Aub imetallic NPs at ambient temperature.After careful screening of supports and Reproducedw ith permission. [102] Copyright2 016, Wiley-VCH. Figure 6. Characterization of the Pd 1 Cu 2 /SiO 2 catalyst:a ,b) TEM images. c) High-resolution TEM (HRTEM) image of the Pd 1 Cu 2 /SiO 2 catalyst. d) Size distribution of Pd 1 Cu 2 bimetallic nanoparticles. 
e) High-angle annular dark field (HAADF) STEM image of Pd1Cu2/SiO2. f,g) EDS elemental maps for Pd (f) and Cu (g). Reproduced with permission. [104] Copyright 2017, The Royal Society of Chemistry.

metal ratios, the Pd1Au5/Nb2O5 catalyst was identified as the most active catalyst for the hydrosilylation of alkynes with completely trans-configured products. High activity was observed for the catalyst with a relatively low Pd/Au ratio of 1:5, which is in accord with the isolated single Pd atoms characterized in Pd1Au5/Nb2O5. [106]

Challenges and Perspectives

The catalytic addition of silanes to olefins and alkynes (the hydrosilylation reaction) continues to attract significant interest among academic and industrial chemists. In fact, this methodology remains a key technology for innovations in the silicone industry. For the advancement of this field, the development of new and improved catalysts is a prerequisite. In this respect, the present report briefly discusses current developments of nonclassical homogeneous and heterogeneous catalysts for hydrosilylation reactions. In addition to molecularly defined non-noble metal catalysts, recently discovered heterogeneous catalysts such as supported noble/non-noble NPs and single-atom catalysts are also discussed.

Considering all these achievements, what are the major challenges in research on catalytic hydrosilylation reactions in the coming decade? Apart from a few examples, most of the academic developments, which are by all means scientifically very interesting, are far from being relevant for industry and real-world applications. This is because of the model reactions used, the reaction conditions applied, and the substrates investigated. We believe there is an enormous innovation potential if the gap between academic and industrial research can be bridged more efficiently.

In the development of new powerful homogeneous catalysts, the complexity of the ligand part is often underestimated, sometimes fully neglected. Obviously, an ideal catalytic system constitutes not only an available, less toxic metal center, for example, iron or manganese, but also inexpensive, stable, and modular ligands. Looking at recently disclosed ligand scaffolds, a number of them can be synthesized only on a small scale under special conditions, for example, in a glovebox. Especially for non-noble metal catalysts it is still a challenge to develop practical ligands.

The use of heterogeneous catalysts for hydrosilylation reactions is a fascinating, growing scientific area. Here, a detailed understanding of the structural features that control the regio- and stereoselectivity of a given catalyst is still missing. Clearly, such knowledge is the basis for any rational catalyst development. Here, the use of heterogeneous SACs with a limited and defined number of atomic species, where the metal centers are spatially isolated from each other, might give new impetus. Furthermore, the combination of supported (multi)metallic NPs in the presence or absence of ligands will offer alternative solutions for directing the regio- and stereoselectivities of hydrosilylations.
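As a brief aside on the TON and TOF figures quoted throughout this overview (for example the TON of 480 000 reported for Fe-6 and the formal TOF of 65 000 h⁻¹ for Co-8), the following minimal back-of-the-envelope sketch shows how such numbers are obtained; the 91% yield used in the numerical check is the value reported above for Co-8, and the calculation itself is only an illustration, not taken from the original reports.

% Definitions of turnover number (TON) and turnover frequency (TOF)
\[
  \mathrm{TON} = \frac{n_{\text{product}}}{n_{\text{catalyst}}},
  \qquad
  \mathrm{TOF} = \frac{\mathrm{TON}}{t}.
\]
% Numerical check for Co-8: 1 mol% loading and ~91% yield give
% TON ~ 0.91 / 0.01 = 91; completion within t = 5 s = 5/3600 h then gives
% TOF ~ 91 / (5/3600 h) ~ 6.5 x 10^4 h^-1, consistent with the quoted 65 000 h^-1.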
2020-07-16T09:06:52.895Z
2020-07-15T00:00:00.000
{ "year": 2020, "sha1": "60ea7f0630a341ebc00b34f226877a9c0c70867b", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/anie.202008729", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "34a29fd41bf8dbbffe134c36fdb76ac56c9eabc2", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
186396242
pes2o/s2orc
v3-fos-license
Possible explanation of results of CR investigations in the energy interval 10^15 - 10^17 eV: Nuclear-physical approach
Among the three possible approaches to explaining the behaviour of the CR energy spectrum and mass composition in the energy interval 10^15 - 10^17 eV (methodical, cosmophysical and nuclear-physical), the last of them is considered. A possibility that a new massive state of matter is generated in interactions of CR with such energies is discussed. As a possible version, the production of blobs of quark-gluon plasma with a large orbital momentum is considered.

Introduction
The energy region ~10^15 - 10^17 eV is very interesting and important. Firstly, in this region direct measurements on satellites are impossible and only EAS investigations can be performed. Secondly, it is namely in this region that the basic characteristics of cosmic rays (CR) change: the energy spectrum (the knee, the 2nd knee) and the mass composition (which becomes heavier). But the second statement is not strictly correct, since there are no direct experiments in which the energy spectrum and mass composition above 10^15 eV are measured; in reality we can measure EAS parameters only. The scheme of cosmic ray investigations by means of EAS observables is given in figure 1.

Figure 1. Existing approach to EAS data analysis [1].

What is the situation now? Today it is believed that the model of hadronic interactions does not change significantly with increasing energy, at least up to 10^17 eV. The reason is simple: no serious deviations from the Standard Model (SM) in pp interactions up to an energy of 13 TeV (~10^17 eV in the laboratory system) were observed in LHC experiments. But cosmic rays consist mainly of nuclei (table 1), which interact with atomic nuclei of nitrogen or oxygen. The total fraction of nuclei is about 60% for the normal composition (i.e. without assuming any heavier composition).

Why is a new approach required? At present all changes in measured EAS parameters are explained by changes of the energy spectrum and mass composition, and at first glance there is no reason to change the interaction model. But no exhaustive and consistent description of the knee has been proposed in almost 60 years. At the same time, many unusual results were obtained in cosmic ray experiments at energies above the knee. In recent years the so-called "muon puzzle" appeared in CR investigations (an excess of VHE muons and muon bundles compared with calculations). In nucleus-nucleus interactions, serious deviations from existing models are observed even in LHC experiments (see figure 2).

New model of nucleus-nucleus interaction
What do we need to explain all these data? A model of hadronic interactions which gives:
- threshold behaviour (all changes in CR appear at several PeV only);
- a large cross section (to change the EAS spectrum slope);
- an increasing yield of secondary particles (excess of pions, muons and muon bundles).
Production of blobs of quark-gluon plasma (or better, quark-gluon matter, QGM) is very suitable for that. Production of QGM provides the two main conditions:
- threshold behaviour, since a high temperature (energy) is required for it;
- a large cross section, since a transition occurs from quark-quark interactions to some collective interaction of many quarks and gluons, with σ ≈ πR², where R is the size of the quark-gluon blob.
But for the explanation of the other observed phenomena a large value of the orbital angular momentum is required. The question of the orbital momentum which must appear in nucleus-nucleus interactions is usually not considered.
A possibility of its appearance in non-central ion-ion collisions was considered in paper [3]; the corresponding scheme is presented in figure 3. Further investigations showed [4] that the value of the orbital momentum can reach L ~ 10^4.

Figure 3. Production of orbital angular momentum in non-central ion-ion collisions [3].

A blob of globally polarized QGM with a large orbital angular momentum can be considered as a usual resonance with a large centrifugal barrier. The centrifugal barrier V_L = ħ²L(L+1)/(2mR²) will be large for light quarks but much smaller for t-quarks or other heavy particles. Though t-quarks are absent in the interacting nuclei, the suppression of decays into light quarks gives time for the appearance of heavy quarks.

How is the interaction changed in the frame of the new model? Simultaneous interactions of many quarks change the energy in the center of mass system drastically: √s ≈ √(2E·m_c), where m_c ≈ n·m_N. At the threshold energy, n ≈ 4 (α-particle). Produced t-quarks take away an energy ε_t > 2m_t ≈ 350 GeV, and taking into account the fly-out energy, ε_t > 4m_t ≈ 700 GeV in the center of mass system. Top quarks decay very quickly (t → W + b); W-bosons decay into leptons (~30%) and hadrons (~70%); b-quarks produce jets which generate pions decaying into muons and neutrinos.

CR energy spectrum and mass composition in the new approach
How is the energy spectrum changed? One part of the t-quark energy gives the missing energy (e, μ, νe, νμ), and another part changes the EAS development due to the increasing multiplicity of secondary particles. As a result, the measured EAS energy E2 will not be equal to the primary particle energy E1, and the measured spectrum will differ from the primary spectrum (figure 4). The transition of particles from energy E1 to energy E2 gives a bump in the energy spectrum near the threshold.

How is the measured composition changed in the frame of the new approach? Since QGM production requires not only a high temperature (energy) but also a high density, the threshold energy for the production of the new state of matter will be lower for heavy nuclei than for light nuclei and protons. Therefore the spectrum of heavy nuclei (e.g., iron) is changed earlier than the spectra of light nuclei and protons, and the measured spectra for different nuclei will not correspond to the primary composition. If one assumes that the CR composition does not change with increasing energy, the composition recalculated from the results of EAS measurements begins to change due to the increasing probability of heavy-nuclei interactions.

Conclusion
To explain the behavior of the CR energy spectrum and mass composition at energies 10^15 - 10^17 eV in the frame of the nuclear-physical approach, the production of quark-gluon blobs with a large orbital momentum is required. This approach can be checked both in cosmic ray and in LHC experiments. In cosmic rays some excess of very high energy muons and neutrinos (> 100 TeV) must appear. In LHC experiments some excess of W-bosons and/or t-quarks and missing energy must be detected.
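The following order-of-magnitude evaluation is added only as an illustration of the collective-interaction kinematics discussed above; it is not part of the original argument and simply inserts the assumed values n = 4 and E ≈ 4 PeV into the relation √s ≈ √(2E·m_c), taking m_N ≈ 0.94 GeV in natural units.

% Centre-of-mass energy for a collective target mass m_c = n m_N
\begin{align*}
  \sqrt{s} &\approx \sqrt{2\,E\,n\,m_N}
            = \sqrt{2\times(4\times10^{6}\,\mathrm{GeV})\times 4\times 0.94\,\mathrm{GeV}}
            \approx 5.5\ \mathrm{TeV},\\
  \sqrt{s}\big|_{n=1} &\approx \sqrt{2\times(4\times10^{6}\,\mathrm{GeV})\times 0.94\,\mathrm{GeV}}
            \approx 2.7\ \mathrm{TeV}.
\end{align*}
% The collective interaction thus raises sqrt(s) by a factor sqrt(n) = 2,
% comfortably above the ~0.7 TeV quoted above for a t-quark pair with fly-out energy.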
2019-06-13T13:22:47.630Z
2019-02-01T00:00:00.000
{ "year": 2019, "sha1": "f04f930483399ff6195387fdf8333901e3b0ea51", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1181/1/012022", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "c69b73fbbc49ded7c6dcd0cd4c084cf4a3d0af14", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
57794703
pes2o/s2orc
v3-fos-license
EXPERIENCE OF BRACHYTHERAPY IN CARCINOMA OF UTERINE CERVIX AT B.P. KOIRALA MEMORIAL CANCER HOSPITAL, BHARATPUR, CHITWAN, NEPAL
Prasiko G*, Jha A K*, Dong J B*, Srivastava R P*

Brachytherapy is primarily a treatment of malignant disease with the use of a radioactive isotope placed near or inside the target tissue. This technique plays an important role in the treatment of cancer of the uterine cervix. It is a very new technique in Nepal, and the department of radiation oncology of BPKMCH is the only place practicing the brachytherapy procedure. The aim of the study was to analyze the cases that underwent brachytherapy in the first year of initiation of this service. A total of 232 patients were treated, out of whom two hundred patients completed treatment, and a total of 657 applications were held during this period. The peak age group was in the fifth decade, followed by the sixth decade. A good number of procedures was performed compared to other institutes abroad.

INTRODUCTION
Brachytherapy is the treatment of a malignant lesion using radioactive material at a short distance.1 It has the advantage of delivering a high dose in the vicinity of the source without delivering too much radiation to surrounding tissue. The history of brachytherapy began in Paris in the year 1897, when Henri Becquerel discovered natural radioactivity. The use of radium for brachytherapy was developed through clinical experience; it was used for the first time in 1901 to treat a skin lesion, and since then the technology has gradually progressed. The after-loading methods were developed during 1959 and 1960, which offered protection from the radiation hazard to physicians and other members performing brachytherapy, and there was greater flexibility of source geometry as well as improved reproducibility of treatment and comparatively shorter treatment time.2 There are three types of brachytherapy in terms of dose rate: high dose rate (HDR), more than 12 Gray/h; medium dose rate (MDR), 2 to 12 Gray/h; low dose rate (LDR), 0.4 to 2 Gray/h.3 137Cs (cesium) is a commonly used radioisotope in LDR, whereas 192Ir (iridium) and 60Co (cobalt) are used in HDR. Brachytherapy is a useful tool in the treatment of carcinoma of the uterine cervix, prostate, head and neck cancers, esophagus, bronchus and skin cancer. The clinical implementation of HDR depends strongly on a proper and careful understanding of dose optimization. The use of HDR brachytherapy for cervical carcinoma is the result of technologic development in the manufacture of high-intensity radioactive sources, sophisticated computerized remote afterloading devices and treatment planning software.4 There are various types of optimization used for dose distribution at desired points, such as dose point, dwell time, geometry-based dose, volume time and equal time.5 In the treatment of carcinoma of the cervix, external radiotherapy is usually followed by intracavitary brachytherapy; in a very early stage of the disease, brachytherapy treatment alone may be sufficient. The external beam is used to treat the whole pelvis and the parametrium including lymph nodes, whereas the central disease (cervix, vagina, and medial parametrium) is treated mainly with the intracavitary source. The aim of the study was to analyze retrospectively the records of the patients with cervical cancer who underwent brachytherapy procedures from September 17, 2002 to September 16, 2003 in the department of Radiation Oncology, BPKMCH, according to age group, stage, and completion of treatment as advised.
MATERIALS AND METHODS On September16, 2002, for the first time in Nepal, after testing and commissioning of the machine in the department of Radiation Oncology, B. P. Koirala Memorial Cancer Hospital, Bharatpur, brachytherapy service was started for the treatment of carcinoma of uterine cervix.This machine is Varisource; manufactured by Varian Company, U.S.A.It is a remote after-loading, HDR brachytherapy unit with a single radioactive isotope 192Ir as the source of radiation.In cases of carcinoma of cervix, we have delivered 46 Gray to 50 Gray (2 Gray per fraction, one fraction a day, 5 days a week) of external radiotherapy followed by 2-3 applications of intracavitary radiotherapy treatment (ICR), one week interval between the ICRs.Three to five patients were treated daily.The Fletcher-Suit-Delclos (FSD) type of applicator was used for the treatment of uterine cervix with intact uterus and vaginal cylinder applicator for postoperative cases. Female patients who were diagnosed to have FIGO (International Federation of Gynecology Obstetrics) stage IA, IB, IIA, IIB, IIIA, IIIB were candidates for intracavitary radiation, excluding the patients with gross residual disease following external radiotherapy making insertion of uterine tandem impossible.Brachytherapy was not used in stage IVA and IVB.Patients were investigated for hemoglobin, total leucocytes count, differential count, blood sugar and creatinine.Patients were prepared by shaving the area and administration of soap water enema in the morning.Sedative (e.g.lorazepam 1 mg.orally) was given the previous night at bedtime.Patients' counseling was done regarding the procedure and expected side effects.The patient was transferred to application room, IV line opened and Pethidine 25 mg.and Phenargan 25 mg injected intravenously.We did not use general anesthesia or regional anesthesia, which is a common practice in developed nations.Patient was placed on the gynae-brachy table in the lithotomy position.Vulva, perineum, and upper thigh were cleaned with savlon and povidone iodine.Foley's catheter was inserted aseptically and 7 ml of radio opaque dye (urograffin) pushed into the balloon that helped us to locate the bladder reference point.Local examination was done and cervical Os was located.The uterine sound was inserted in the uterine cavity to assess the length and position of uterine cavity.Cervical canal was dilated as required.Uterine tandem was placed according to the length of uterine cavity and flange was fixed to remain at the external Os.The Cusco speculum was removed slowly and two Sim's vaginal speculums were inserted to retract the anterior vaginal wall and posterior vaginal wall as much as possible.Largest possible ovoids were placed in lateral fornices and fixed with the uterine tandem. The vaginal packing was done adequately with barium soaked ribbon gauge.Barium was used to locate rectal reference point in orthogonal film.After insertion of the applicator, patient was transferred to simulator room and orthogonal films (anteroposterior and lateral) were obtained.The planning computer system (Brachyvison software) imports images from hard films through Vidar scanner.Several reference points such as point A (2 cm superior to external cervical Os or cervical end of the uterine tandem and 2 cm lateral to cervical canal, 6 point B (3cm lateral to point A), bladder and rectal reference points were considered as per International Commission on Radiological Unit and Measurement (ICRU)-38 reports. 
7During planning, the spacing of radioactive source and indwelling time both could be manipulated to get best dosimetry (pear shape).The maximum dose to bladder and rectum should be, as far as possible, less than the dose to point A (e.g.80% or less of the dose point A). 8 Once satisfied with dosimetry, plan was then transferred to treatment computer that makes the isotope source move through the catheters inserted within the applicators according to the plan verification.In the planning, the target volume is designed with sufficient safety margin.Applicators were removed once the treatment was over and patients were advised to have oral analgesics. RESULTS The total number of 232 patients received brachytherapy treatment for cervical cancer during this period.Altogether 657 insertions were carried out.Two hundred patients had completed three sittings with one-week interval between the applications.Ten patients had received two applications as advised.Fifteen patients had received 2 cycles out of three cycles planned.Seven patients had received only one application.Among 232 patients, 35 patients (17 from Bir Hospital, Kathmandu, 16 from Bhaktapur cancer care center, Bhaktapur and 2 patients from Manipal Hospital, Pokhara) Stage distribution FIGO (1994) staging was used for staging of the disease.Perspeculum, per-vagina, per-rectum, bimanual examination and lymph node examination was done to evaluate the patients.Chest x-ray and ultrasound of abdomen and pelvis were done routinely.Stage was not mentioned in 8 post operative cases. DISCUSSION The procedure was well tolerated with intravenous sedation and analgesics as no case was postponed due to difficulty in insertion of applicators and vaginal packing was satisfactory in relation to the anatomy.Eighty-eight percent of cases received all three sittings.In our series, the peak incidence was in fifth decade of age followed by sixth decade.Stage IIB was the commonest stage (43%) followed by IIIB (37.5%).Only 4.74% of the patients were found to be in stage IIIA. The number of cases treated with brachytherapy during this period of one year was comparable to other well-recognized cancer center.In Tata Memorial Hospital, Mumbai, total number of 574 sittings (HDR + LDR) 9 and in Cancer center welfare home and research institute, Kolkota, total number of 424 sitting were performed in one year time. 10To achieve a balance between Intracavitary radiation (ICR) and external beam radiation in terms of dose rate per fraction, number of fraction, dose per fraction, and overall duration, which gives maximum tumor clearance and minimum early and late morbidity is important.Absolute equivalence between HDR and LDR treatment with respect to all biological effects seems to be difficult on the basis of radiobiological theory.Linear quadratic (LQ) model gives the biological equivalent.Various clinical reviews have confirmed the equivalence of HDR and LDR in terms of tumour control, survival, and effects on normal tissue. 11,12,13
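For readers unfamiliar with the linear-quadratic comparison mentioned above, a minimal sketch of the standard biologically effective dose (BED) expression for fractionated (acute) delivery is given below; the alpha/beta values and the worked number are generic textbook assumptions rather than data from this study, and continuous LDR delivery would additionally require an incomplete-repair correction.

% Biologically effective dose for n fractions of dose d (acute delivery)
\[
  \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
\]
% Example: an external-beam phase of 25 x 2 Gy with alpha/beta = 10 Gy (tumour)
% gives BED = 25 x 2 x (1 + 2/10) = 60 Gy_10; HDR intracavitary fractions and
% late-reacting normal tissue (alpha/beta ~ 3 Gy) are compared on the same scale.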
2018-12-17T23:12:38.637Z
2004-01-01T00:00:00.000
{ "year": 2004, "sha1": "c5236a3db0cdfaf50a52bb0ff58cd6b18fbf84f0", "oa_license": "CCBY", "oa_url": "https://www.jnma.com.np/jnma/index.php/jnma/article/download/605/1332", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c5236a3db0cdfaf50a52bb0ff58cd6b18fbf84f0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
73113877
pes2o/s2orc
v3-fos-license
Archibald V . Hill ’ s contribution to science and society A brief account of A.V. Hill’s contribution to our understanding of muscle contraction is given. This includes an overview of discoveries that led to solving the problem how chemical events provide the energy for mechanical work. Hill helped to train and educate a generation of scientists to use concise mathematical treatment of biological phenomena. He also taught his students moral values important when pursuing research. Finally Hill’s deep belief in the international nature of scientific work and his human qualities led him to join Lord Rutherford to become founder member of the Academic Assistance Council, an organization that rescued around 1500 academics from Nazi occupied Europe during the Second World War. With all these activities taken together Hill can be considered a person who made an exceptionally important contribution to the cultural and scientific life of the 20th and 21st century. On a Tuesday morning in March 2010 I was walking along the corridor to the Medical Sciences building at University College London (UCL), and a young girl, probably a student asked me to point out the way to the A.V. Hill Lecture Theatre.I stopped and before giving her the instructions how to find it, I asked her: 'Do you know who A.V. Hill was?' As expected her answer was: 'I have no idea, but I am in a hurry, we have a lecture on muscle physiology there, and I am late.'I felt sad and disappointed by her lack of interest.However her ignorance and probably that of other students who attend lectures in the A.V. Hill lecture theatre prompted me to write a brief account of the significance of A.V. Hill for science and society.In 1923 A.V. Hill succeeded Ernest Starling and was appointed to the post of Jodrell Professor of Physiology at UCL, a Department that already had an outstanding reputation and tradition.At the time of his appointment at UCL, Hill was a well-established scientists who had won the Nobel Prize in physiology in 1923 (as had Otto Meyerhoff).He played a pivotal role in helping to bring mathematical rigor into physiology, for example by introducing the 'Hill equation' to explain the effects of the aggregation of hemoglobin molecules on its dissociation curves [5].This equation is simple and widely deployed in bioscience and later proved useful in Hill's work on muscle. Hill's arrival heralded new interesting developments at UCL.The first was the emphasis on progress made in the understanding of muscular contraction, and the second was Hill's dedication to establishing national and international cooperation between scientists.This included a commitment of actively helping scientists in need. In 1933 Hill was a founder member and vice chairman of the Academic Assistance Council (AAC).This organization was dedicated to assist academics who for reasons including persecution and conflict were unable to continue their work in their country of origin.In the 30s it included Jewish and other academics forced to flee the emerging Nazi regimes in Europe.By actively participating in the work of this organization Hill continued the liberal tradition of UCL set out in1826 when the College was founded becoming the first secular institution of higher education in the UK.UCL then as now pledged that it would accept students of any race or religious or political belief.In this brief article three main features of A.V. Hill's contribution to science and society will be discussed: 1. His scientific work.2. 
His influence and inspiration to scientific colleagues internationally.3. His contribution to assisting refugee academics. A.V. Hill Scientific Work When A.V. Hill arrived to UCL in 1923 the question as to how muscle uses energy from chemical reactions in order to do mechanical work was still obscure.At the beginning of the 20th century it was universally accepted that the primary biochemical reaction associated with providing energy for muscle contraction was lactic acid, released from a hypothetical large molecule called 'lactacidogen'.The hydrogen ions liberated were supposed to neutralize negative charges on the contractile protein filaments allowing them to fold and shorten.However there were several findings that were inconsistent with this idea, including measurements of heat production during muscle contraction made by Hill.It was therefore not surprising that this hypothesis was overthrown in what Hill called 'a revolution in muscle research', and can be considered the first of several such revolutions [7].Hill dates the outbreak of this revolution to the end of 1926 when Philip and Grace Eggleton, research fellows in the Physiology and Biochemistry Department at UCL submitted a paper to the Biochemical Journal reporting that the amount of an unidentified organic phosphate compound present in muscle decreased during contraction with a corresponding increase in inorganic phosphate [3].Although this weakened 'the lactic acid hypothesis', supporters of the lactic acid theory did not give up immediately.It was not until 1930 when Lundsgaard observed that a muscle that is poisoned with iodoacetatic acid and therefore cannot use oxygen to break down glycogen to form lactic acid is nevertheless able to perform several contractions [9].Thus the energy required for this mechanical work has to be provided by reactions that did not require the production of lactic acid and the breakdown of organic phosphate molecules might be the source of this energy.The position of organic phosphate in providing energy for mechanical work started the second revolution in muscle research.This was supported by evidence that suggested that the only pathway for the utilization of the phosphate from phosphorylcreatine was to rephosphorylate adenosine-diphosphate (ADP) to adenosine triphosphate (ATP), and it was soon accepted that the primary reaction providing energy for muscle contraction was the conversion of ATP to ADP plus inorganic phosphate.Conclusive proof for this idea was difficult to obtain because the rephosphorylation of ATP from ADP was extremely rapid and therefore no decrease in ATP could be detected during muscle contraction.However, in 1962 Cain, Infante and Davies demonstrated that a muscle poisoned with fluorodinitrobenzene could perform several contractions with the utilization of ATP but not of phosphorylcreatine [2]. 
An important discovery based on measurements of heat production during contraction by Hill and colleagues showed an increase in heat production and therefore of the amount of chemical change when the muscle was allowed to shorten [7].This finding highlighted the importance of the relationship between the metabolic and mechanical conditions during contraction.Measurement of heat production during muscle contraction, the method favored in Hills department also revealed some other fundamental processes of muscle physiology.Hill's mathematical treatment of muscle dynamics based on a biophysical model of muscle mechanics was the starting point of AF Huxley's model of muscle contraction [8], which remains the valid paradigm of the behavior of all motor protein and cell mobility systems.The observation that oxygen is not needed during contraction but is necessary during recovery inspired the term 'oxygen debt' a useful concept in exercise physiology even today [7].These were original and revolutionary discoveries that brought our understanding as to how chemical energy is converted to mechanical work, to a new level.Hill considered measurement of physiological events using physical and electrical methods to be an important counterpart to other methods used in biology, partly because of the speed at which they can be recorded.His appreciation of the use of these techniques to examine biological processes inspired him to help to establish the first Department of Biophysics at UCL.After being awarded by the Royal Society the Foulerton Professorship in 1926 A.V. Hill became the first head of the Department of Biophysics and had been succeeded in 1952 by his pupil Bernard Katz, one of the scientists that were helped to leave Nazi Germany in 1935. A.V. Hill influence and inspiration to scientific colleagues internationally A.V. Hill had several PhD students.He is remembered by them with great affection, not only because of his scientific influence but also because of the way he treated them as human beings and colleagues.This is perhaps best described by his Chinese student TP Feng, who came to UCL in 1930 and stayed for 3 years.After returning to China Feng became the director of the Shanghai Institute of Physiology.In his memoirs TP Feng writes: 'The way Hill dealt with the first paper I wrote, entitled "The Heat-Tension Ratio in Prolonged Tetanic Contractions "is worth mentioning.The problem had been suggested by him and was carried out under his direction with much assistance from Mr. Parkinson, A.V. Hills' much appreciated and gifted technician.I naturally put Hill's name as a coauthor.He promptly took his name off the paper saying: "If this is the only paper you write while you are here it will not make much difference whether my name is on it or not, and it will not mean much to you". Another remark in a similar vein at the end of my stay in London should also be retold."You have done good work here and you have done most of the work independently.But people will still think that you are under my direction.You must go back and continue to do good work all by yourself, then you will be recognized as a fully independent worker."I don't know whether A.V. Hill talked to other students like this, but his words left a deep impression on me' [4].Another of his students who wrote about him with much affection was his successor as head of the Department of Biophysics, Bernard Katz.After Katz fled Germany in 1935 he was accepted as a PhD student by A.V. 
Hill at UCL London, where he stayed until 1939.He referred to A.V. Hill as his greatest scientific influence and described this period as 'the most inspiring period of my life'.Hill's influence on his pupils and colleagues was probably inspired by his strong sense of responsibility that scientists have in society and particularly the international nature of science.In an article in Science written during the Second World War he writes: 'It is nevertheless a fact that the nature of our occupation makes scientific men particular international in their outlook.In its judgment on facts science claims to be independent of political opinion, of nationality, of material profit.It believes that nature will give a single answer to any questions properly framed and that only one picture can ultimately be put together from the very complex jigsaw puzzle which the world presents.Individual and national bias, fashion, material advantage, a temporary emergency, may determine which part of the puzzle at any moment is subject to the greatest activity.For its final judgment however, for its estimates of scientific validity, there is a single court of appeal in nature itself, and nobody disputes its jurisdiction.Those who talk, for example of Aryan and non-Aryan physics or of proletarian and capitalist genetics, as though they were different simply make themselves ridiculous.For such reasons the community of scientific people throughout the world is convinced of international collaboration.'And later: 'In no other form of human activity, therefore, has so complete an internationalism spread throughout the national structure of society: in no other profession or craft is there so general an understanding or appreciation of fellow workers in other parts of the world.This implies no special merit or broadmindedness on the part of scientific men; it is their very good fortune, a good fortune which involves obligations as well as privileges.For example when the Nazis in 1933 began their persecution of Jews and liberals in Germany it was the scientific community in many other countries which came most quickly to the rescue of their colleagues; not out of any special generosity but because firstly they had personal knowledge of those who were being persecuted, and secondly they realized that such persecution struck at the basis of the position of science and scientific workers in society' and later : 'It may be then that through this by-product of international cooperation science may do as great a service to society (just as learning did in the Middle ages) as by any direct results in improving knowledge and controlling natural forces: not-as I would emphasize again-from any special virtue which we scientists have, but because in science world society can see a model of international cooperation carried on not merely for idealistic reasons but because it is the obvious and necessary basis of any system that is to work' [6].These views and ideas motivated Hill to become a founder member of the Academic Assistance Council (AAC) an organization that offered help to Jewish and other liberal scientists persecuted in Nazi Germany and other fascist countries A.V. 
Hill and the Council for Assisting Refugee Academics In 1933 whilst studying in Vienna, William Beveridge the director of the London School of Economics learned that academics deemed 'undesirable' by the Nazi government either because they were Jews or of a different political opinion than the Nazis were dismissed from their position and unable to work.Dismayed by this, Beveridge returned to England keen to help these scholars [1]. Conclusions Although A.V. Hill's contribution to the understanding of the problem of how chemical processes provide the energy to perform mechanical work in skeletal muscle was outstanding his equally important achievement was the realization that collaboration between scientists at national and international level can advance our understanding of the world around us more than the work of any single individual.Hill viewed the scientific community as a 'family' who's members strive to enhance our understanding of the world.By providing active help to individuals within this family Hill and others who assisted his efforts contributed more to advancing science then any single individual. [10] by helping their colleagues, scientists more than any other intellectual group helped to defy the program to exterminate European Jews.This is in marked contrast to other professional organizations such as the medical or legal who for fear of competition did not offer help to their colleagues.Above all, as emphasized by Canadian historian David Zimmerman the role of the S.P.S.L. in the history of academic freedom has not been adequately recognized, and Zimmerman seeks to redress this.He writes that the S.P.S.L. became a quasi-government agent and helped to rescue a generation of European scholars[13].Historian Gary Werskey emphasized the speed and efficiency with which the A.A.C. was established.He wrote, 'within a matter of weeks after the first expulsions of Jewish scholars from Germany, researchers like Hill and Lord Rutherford were able to set up an Academic Assistance Council'.The exceptional number of scientists rescued by this group who achieved ground breaking discoveries during their career is remarkable[12].Scientific achievements are difficult to measure, but the number of Nobel prizes gives some indication.Before 1933 German scientists had won 33 prizes in science since 1900, the highest number of any nation, Britain won 18 and the USA 6.After Hitler's rise to power 7 Nobel Prize winners left Germany and 20 of the refugees subsequently obtained the Nobel Prize[10].It is likely that by rescuing a generation of scholars from Nazi ruled Europe A.V. Hill and other members of the S.P.S.L. contributed more to scientific development in the West then any single individual could achieve.Therefore AV.Hills views that international cooperation of scientists plays an important role for scientific achievement have been confirmed.His contribution to bring about this cooperation while at UCL gives the college a special place in helping to initiate outstanding scientific developments in science and is consistent with UCL's liberal and secular values.
2015-03-07T18:39:34.000Z
2013-07-25T00:00:00.000
{ "year": 2013, "sha1": "aa2f6578c8e7a18baa24b706b045ca1906d97897", "oa_license": "CCBYNC", "oa_url": "https://www.pagepressjournals.org/index.php/bam/article/download/bam.2013.3.73/1271", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "aa2f6578c8e7a18baa24b706b045ca1906d97897", "s2fieldsofstudy": [ "Biology", "Philosophy" ], "extfieldsofstudy": [ "Medicine" ] }
227719074
pes2o/s2orc
v3-fos-license
Some results on hybrid relatives of the Sheffer polynomials via operational rules The intended objective of this paper is to introduce a new class of polynomials, namely the extended Laguerre-Gould-Hopper-Sheffer polynomials. The generating function and operational rule are derived by making use of integral transform. Their quasi-monomial properties and determinant forms are also established. Examples of certain members belonging to the extended Laguerre-Gould-Hopper-Sheffer polynomials are constructed and their corresponding results are established. 2010 Mathematics Subject Classification: 26A33; 33B10; 33C45 INTRODUCTION AND PRELIMINARIES Theory of special functions performs an essential role in the formalism of mathematical physics. They provide indeed a unique tool for developing simplified yet realistic models of physical problems, thus allowing for analytic solutions and hence a deeper insight into the problem under study. The specific physical problem indeed can suggest investigating new aspects of the well-established theory of special functions as well as introducing new family of special polynomials, which usually exhibit deeper features and thereby appear many times in new roles in numerous branches of mathematics and physics. Consequently, reformulating a physical problem in terms of special functions allows a more elegant mathematical model and then for an easier reading and numerical handling of the relevant equations as well as for the discovery of unsuspected connections with other fields of Physics. The Sheffer polynomials are one of the most important class of polynomial sequences and have been extensively studied not only due to the fact that they arise in numerous branches of mathematics but also because of their importance in applied sciences, such as physics and engineering. In view of the result [6, p.17], the Sheffer polynomials can be defined as: g.t / f .t / k j s n .x/˛D nŠ ı n;k ; 8 n; k 0; (1.2) where ı n;k is the Kronecker delta. Operational methods can be exploited to simplify the derivation of properties associated with ordinary and generalized special functions and to define new families of hybrid special polynomials. The combined use of integral transforms and operational methods provides a powerful tool to deal with fractional derivatives. Using the Euler's integral [7, p. Most of the properties of hybrid special polynomials recognized as quasi-monomial, can be deduced by using operational rules associated with the relevant multiplicative and derivative operators. For the multi-variable hybrid special polynomials, the use of operational techniques combined with the monomiality principle provides new means of analysis for the solution of a wide class of partial differential equations often encountered in physical problems. According to monomiality principle [2,8], a given polynomial set r n .x/ .n 2 N; x 2 C/ can be considered as quasi-monomial, if two operators O M and O P , called "multiplicative" and "derivative" operators respectively, can be defined in such a way that O M fr n .x/g D r nC1 .x/; (1.9) O P fr n .x/g D nr n 1 .x/; (1.10) for all n 2 N. The operators O M and O P also satisfy the commutation relation and thus display the Weyl group structure. If the considered polynomial set fr n .x/g n2N is quasi-monomial, its properties can easily be derived from those of the O M and O P operators. If O M and O P have a differential realization, then can be interpreted as the differential equation satisfied by r n .x/. 
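As a concrete illustration of the monomiality principle summarized above (this example is not from the paper itself; it uses the classical two-variable Gould-Hopper/Hermite polynomials H_n(x, y), generated by exp(xt + yt^2), which also underlie the Laguerre-Gould-Hopper family considered below), the multiplicative and derivative operators have the differential realization:

% Quasi-monomiality of H_n(x,y) with generating function exp(x t + y t^2)
\[
  \hat{M} = x + 2y\,\partial_x, \qquad \hat{P} = \partial_x,
\]
% so that M{H_n} = H_{n+1}, P{H_n} = n H_{n-1}, and [P, M] = 1 (Weyl structure).
% Combining them gives the differential equation satisfied by H_n(x, y):
\[
  \hat{M}\hat{P}\,H_n(x,y) = \bigl(x\,\partial_x + 2y\,\partial_x^{2}\bigr)H_n(x,y) = n\,H_n(x,y).
\]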
The theory of hybrid special polynomials has been one of the most rapidly growing research topics in mathematical analysis. In 2016, N. Raza et al. [5] introduced the Laguerre-Gould-Hopper-Sheffer polynomials (LGHSP) ${}_LH^{(m,r)}s_n(x,y,z)$, which are defined by the following generating function: The following operational representations for the LGHSP ${}_LH^{(m,r)}s_n(x,y,z)$ hold. For suitable choices of the variables and indices the LGHSP ${}_LH^{(m,r)}s_n(x,y,z)$ reduce to certain hybrid special polynomials. These polynomials, along with their notation and names, are listed in Table 1. This article is an attempt to further stress the importance of operational methods, the Sheffer polynomials and the Laguerre-Gould-Hopper polynomials. In Section 2, the extended Laguerre-Gould-Hopper-Sheffer polynomials (ELGHSP) are introduced by an operational rule and a generating function. An operational rule providing the connection between the extended Laguerre-Gould-Hopper-Sheffer polynomials and the Sheffer polynomials is established. These special polynomials are framed within the context of the monomiality principle formalism and their determinant form is also obtained. In the last section, the corresponding results for the extended Laguerre-Gould-Hopper-Gegenbauer polynomials and the extended Laguerre-Gould-Hopper-Jacobi polynomials are derived. Next, the generating function of the ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ is obtained by proving the following result: Theorem 2. The following generating function for the ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ holds true: Differentiating generating function (2.5) with respect to $\alpha$, a recurrence relation is obtained that links $\frac{\partial}{\partial\alpha}\,{}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ to ${}_LH^{(m,r)}s_{n,\nu+1}(x,y,z;\alpha)$. In order to derive the quasi-monomial properties and the operational representation of the ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$, the following operation will be used: replacement of $z$ by $zt$, multiplication by $\frac{1}{\Gamma(\nu)}\,e^{-\alpha t}\,t^{\nu-1}$, and then integration with respect to $t$ from $t = 0$ to $t = \infty$. To frame the ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ within the context of the monomiality principle, we prove the following result: Theorem 3. The ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ are quasi-monomial with respect to suitable multiplicative and derivative operators, the derivative operator being $\hat{P}_{LHs} = f(\partial_y)$, (2.10) where $\partial_y := \frac{\partial}{\partial y}$. The operational representation connecting the ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ and the Sheffer polynomials $s_n(x)$ is obtained in the form of the following result: (2.17), which, in view of relation (1.6), yields assertion (2.14). The determinant definition of the Sheffer sequences proposed by W. Wang [9] in 2014 provides motivation to establish the determinant forms of the new hybrid special polynomials. The determinant approach is equivalent to the corresponding approach based on operational methods. This approach is useful in finding the solution of general linear interpolation problems and is also suitable for computation. Inspired by this work on determinant approaches, the determinant definition of the ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ is established by proving the following result, which expresses ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ as a determinant whose entries below the first row are the coefficients $a_{0,0}, a_{1,0}, \ldots, a_{n,n-1}$ arranged in upper-triangular form (each successive row beginning with one more zero), where $a_{n,k}$ is the $(n,k)$ entry of the Riordan array $g$.
$\ldots(x,y,z;\alpha) = \sum_{k=0}^{n} a_{n,k}\;{}_LH^{(m,r)}s_{k,\nu}(x,y,z;\alpha). \qquad (2.20)$ The above equality leads to an infinite system of equations in the unknowns ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$, $n = 0, 1, \ldots$, whose $n$-th equation reads $a_{n,0}\,{}_LH^{(m,r)}s_{0,\nu}(x,y,z;\alpha) + a_{n,1}\,{}_LH^{(m,r)}s_{1,\nu}(x,y,z;\alpha) + \cdots + a_{n,n}\,{}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha) = \ldots$; the first $n+1$ equations involve only the first $n+1$ unknowns, and solving them for ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ yields a determinant (2.22) whose last column contains the polynomials ${}_LH^{(m,r)}\cdot_{\,0,\nu}(x,y,z;\alpha), \ldots, {}_LH^{(m,r)}\cdot_{\,n,\nu}(x,y,z;\alpha)$ and whose remaining entries are the coefficients $a_{i,j}$. Now, bringing the $(n+1)$-th column to the first place by $n$ transpositions of adjacent columns and noting that the determinant of a square matrix is the same as that of its transpose, assertion (2.19) follows. The derivative operator is $\hat{P}_{LHC} = \partial_y$. The ELGHGnP ${}_LH^{(m,r)}C^{(\lambda)}_{n,\nu}(x,y,z;\alpha)$ satisfy the corresponding differential equation, where $c_n$ is the normalizing constant specified there and $a_{n,n} = 1$. The following generating function for the extended Laguerre-Gould-Hopper-Jacobi polynomials (ELGHJP) ${}_LH^{(m,r)}J_{n,\nu}(x,y,z;\alpha)$ holds true: $\ldots = \sum_{n=0}^{\infty}\,{}_LH^{(m,r)}J_{n,\nu}(x,y,z;\alpha)\,\frac{u^{n}}{n!}. \qquad (3.9)$ The ELGHJP ${}_LH^{(m,r)}J_{n,\nu}(x,y,z;\alpha)$ are quasi-monomial with respect to the corresponding multiplicative and derivative operators. The ELGHJP ${}_LH^{(m,r)}J_{n,\nu}(x,y,z;\alpha)$ satisfy a differential equation built from the operators $y\,\partial_y$, $m\,D_x^{-1}\partial_y^{m}$, $r\,z\,\partial_y^{r}\partial_\alpha$ and $(1+\alpha+\beta)\,\partial_y$. The following relation between the Jacobi polynomials $J_n(x)$ and the ELGHJP ${}_LH^{(m,r)}J_{n,\nu}(x,y,z;\alpha)$ holds true: an operational factor involving $D_x^{-1}\,\frac{\partial^{m}}{\partial y^{m}}$ acting on $J_n(y)$ yields ${}_LH^{(m,r)}J_{n,\nu}(x,y,z;\alpha)$. Taking $a_{n,k} = (-1)^{\,n-k}\,2^{\,n}\,c_n\,c_k$, where $c_n = 4^{\,n}\,(\alpha+n)_n\,n!\,(\alpha+\beta+2n)_{2n}$ and $a_{n,n} = 1$, (3.16) the above examples show that the operational rules provide a mechanism to obtain the results for the members belonging to the ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ and prove the usefulness of the method adopted in this paper. The operational techniques can be used for a more general insight into the theory of hybrid special polynomials and for their extensions. The appropriate combination of methods relevant to generalized operational calculus and to special functions can be a very useful tool to treat a large body of problems both in physics and mathematics. Thus, we conclude that the method based on the operational rules may provide powerful tools to deal with the possibilities offered by extended forms of the hybrid special polynomials. APPENDIX We have mentioned several special cases of the LGHSP ${}_LH^{(m,r)}s_n(x,y,z)$ in Table 1. Now, for the same choice of the variables and indices, the ELGHSP ${}_LH^{(m,r)}s_{n,\nu}(x,y,z;\alpha)$ reduce to the corresponding special case. These new hybrid special polynomials related to the Sheffer polynomials are given in Table 2.
2020-07-02T10:37:25.975Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "78b5418435341cde6f8fdf0959bb5a9530fa7f04", "oa_license": null, "oa_url": "http://mat76.mat.uni-miskolc.hu/mnotes/download_article/2958.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0810ca35e720012ae1fb2dd0c14c65ddc965b914", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
893424
pes2o/s2orc
v3-fos-license
Localization of StarD5 cholesterol binding protein. Human StarD5 belongs to the StarD4 subfamily of START (for steroidogenic acute regulatory lipid transfer) domain proteins. We previously reported that StarD5 is located in the cytosolic fraction of human liver and binds cholesterol and 25-hydroxycholesterol. After overexpression of the gene encoding StarD5 in primary rat hepatocytes, free cholesterol accumulated in intracellular membranes. These findings suggested StarD5 to be a directional cytosolic sterol transporter. The objective of this study was to determine the localization of StarD5 in human liver. Western blot analysis confirmed StarD5's presence in the liver but not in human hepatocytes. Immunohistochemistry studies showed StarD5 localized within sinusoidal lining cells in the human liver and colocalized with CD68, a marker for Kupffer cells. Western blot analyses identified the presence of StarD5 in monocytes and macrophages as well as mast cells, basophils, and promyelocytic cells, but not in human hepatocytes, endothelial cells, fibroblasts, osteocytes, astrocytes, or brain tissue. Cell fractionation and immunocytochemistry studies on THP-1 macrophages localized StarD5 to the cytosol and supported an association with the Golgi. The presence of this cholesterol/25-hydroxycholesterol-binding protein in cells related to inflammatory processes provides new clues to the role of this protein in free sterol transport in the cells and in lipid-mediated atherogenesis. StarD5 belongs to the StarD4 subfamily, a subfamily of the steroidogenic acute regulatory lipid transfer (START) domain superfamily of proteins, which are involved in several pathways of intracellular trafficking and metabolism of cholesterol (1)(2)(3). The z210 amino acid START domain contains a binding pocket, which determines the ligand binding specificity and function of each START domain protein (4). Although the functions of some START domain proteins, such as StarD1, PCTP/StarD2, and MLN64/StarD3, have been studied extensively (5)(6)(7)(8)(9)(10)(11), the roles and characteristics of the proteins of the StarD4 subfamily have remained uncertain. The StarD4 subfamily includes proteins StarD4, StarD5, and StarD6 (12). StarD4 and StarD5 are widely expressed, with the greatest levels of mRNA expression in the liver, whereas StarD6 appears limited to the testis (12). In contrast to other START domain proteins, such as StarD1 and MLN64, no members of the StarD4 subfamily have known N-terminal targeting sequences to direct these proteins to specific subcellular compartments; therefore, they are predicted to be cytoplasmic proteins (12,13). The localization of StarD6 to the testis was confirmed recently. This protein appears to play a role during germ cell maturation in adult testis (14). Recent studies by Soccio et al. (12) have confirmed previous observations on the sterol-mediated regulation of StarD4 expression. These new studies are consistent with the regulation of StarD4 by the sterol-regulatory element binding protein 2 and with the ability of StarD4 to transfer cholesterol in steroidogenesis assays after its transfection in COS-1 cells (15). The cellular localization of StarD4 remains uncertain, although recent studies have localized a green fluorescent protein-StarD4 fusion protein to the cytoplasm and nucleus of HeLa cells (16). 
StarD5 mRNA expression has been reported to be induced in response to endoplasmic reticulum (ER) stress in free cholesterol-loaded mouse macrophages or in NIH-3T3 cells treated with tunicamycin, thapsigargin, or brefeldin A (15). This difference in the regulation of StarD4/ StarD5 indicates putative different roles in cholesterol metabolism for each protein. StarD5 has been shown to be able to transfer cholesterol in steroidogenesis assays after its transfection in COS-1 cells (15). Previously, we had shown that StarD5 is able to selectively bind cholesterol and the potent regulatory oxysterol, 25-hydroxycholesterol (17). Furthermore, we confirmed the presence of StarD5 protein in human liver and found it to be localized to the cytosolic fraction (17). Expression of the gene encoding StarD5 in primary rat hepatocytes led to a marked increase in microsomal free cholesterol (17). These data suggested that StarD5 may be a cytosolic sterol carrier protein. The objective of this study was to determine the liver cells that express StarD5 and its subcellular localization and to gain insight into its function. Whereas our previous studies showed the presence of StarD5 protein in human liver (17), no StarD5 protein was detected in isolated hepatocytes. This study shows that StarD5 is localized within the Kupffer cells in the liver. Western blot analysis also demonstrated the presence of StarD5 in macrophages, monocytes, promyelocytic cells, mast cells, and basophils. Furthermore, immunocytochemistry on macrophages localized StarD5 in the cytosol and supported an association with the Golgi. These findings, coupled with the observation of increased microsomal cholesterol accumulation in hepatocytes with increasing StarD5 expression (17), support the idea that StarD5 is a cytosolic sterol carrier to the Golgi. Materials Rabbit polyclonal antibody against human StarD5 was obtained as described previously (17), monoclonal mouse anti-human macrophage CD68 was purchased from Serotec (Oxford, UK), and monoclonal mouse anti-rat GM130 was purchased from BD Biosciences (San Jose, CA). Secondary antibodies Alexa Fluor 568 goat anti-rabbit IgG, Alexa Fluor 488 goat anti-mouse IgG, Alexa Fluor 488 transferrin from human serum, and 49,6-diamidino-2phenylindole (DAPI) were purchased from Molecular Probes (Eugene, OR). Filipin III for staining of free cholesterol and Histopaque 1077 for isolation of mononuclear cells from blood were purchased from Sigma (St. Louis, MO). EDTA solution, pH 8.0, for heat-induced epitope retrieval was purchased from Zymed (San Francisco, CA). Human liver sections (formalin-fixed and embedded in paraffin), human liver tissue, primary human hepatocytes, freshly isolated human hepatocyte suspensions, and nonparenchymal cells from human liver were provided by the Liver Tissue Procurement and Distribution System (N01-DK-9-2310). Normal goat serum was purchased from Jackson Immunoresearch (West Grove, PA). Protein determinations were carried out with the Bio-Rad Miniprotein Assay from Bio-Rad (Hercules, CA). Cell cultures Human THP-1 monocytes, HepG2 hepatocellular carcinoma cells, HL-60 promyeloblasts, U2-OS osteosarcoma, and HT-29 colon adenocarcinoma cell lines were purchased from the American Type Culture Collection and maintained according to the supplier's protocols. Human KU 812-F myelogenous leukemia lymphoblast cells were supplied by the European Collection of Animal Cell Cultures and maintained in RPMI-1640 medium supplemented with 10% fetal calf serum and 2 mM glutamine. 
Human HMC-1 mast cells were kindly supplied by Dr. J. H. Butterfield (Mayo Clinic, Rochester, MN) and maintained in Iscove's medium supplemented with 10% fetal calf serum and 2 mM L-glutamine. Human umbilical vein endothelial (HUVE) cells were isolated from human umbilical cords by collagenase digestion (18) and were maintained in Medium 199 containing HEPES (10 mM), L-glutamine (2 mM), and heparin (10 mg/ml) supplemented with 3 mg/l endothelial cell growth supplement (Sigma) and 20% FBS. Primary human fibroblasts were kindly supplied by Dr. Dorne Yager (Virginia Commonwealth University) and were maintained in DMEM supplemented with 10% FBS. Samples of human astrocytes and brain tissue were supplied by Dr. H. Fillmore (Virginia Commonwealth University). Primary human monocytes were isolated from sodium EDTA-treated blood obtained from a healthy volunteer. The blood was layered on Histopaque 1077 (1:1) and centrifuged at 400 g for 30 min at room temperature. The resulting mononuclear-enriched layer was collected and resuspended in 2 volumes of sterile PBS, pH 7.4. The cells were pelleted by centrifugation at 250 g for 10 min at room temperature and then plated in RPMI-1640 medium minus 2-mercaptoethanol supplemented with 10% autologous serum at 37°C. After 3 h, nonadherent cells were removed by washing with sterile PBS, pH 7.4, and the remaining adherent monocyte-enriched cells were grown for 2 days at 37°C before subsequent analysis. Western blot analyses for StarD5 Immunoblottings with polyclonal StarD5 antibody were performed as described previously (17). Immunofluorescence microscopic detection of StarD5 and CD68 in liver sections Human liver sections were deparaffinized in o-xylene and then rehydrated by passage through a graded series of ethanol and distilled water. CD68 antigen was retrieved by heating the slides in EDTA solution, pH 8.0, for 20 min. Blocking was accomplished by incubation with 5% normal goat serum in PBS containing 0.05% Tween-20 for 16 h at 4°C. For interaction with primary antibodies, sections were incubated with 2.5% normal goat serum in PBS/0.05% Tween-20 containing StarD5 antibody (dilution, 1:400) or CD68 antibody (dilution, 1:100) for 45 min at 37°C in an incubator. After the sections were washed in PBS/0.05% Tween-20 (3 × 20 min), the bound primary antibodies were visualized with Alexa Fluor 568 goat anti-rabbit IgG (for StarD5) or Alexa Fluor 488 goat anti-mouse IgG (for CD68). Sections were then washed in PBS/0.05% Tween-20 (3 × 20 min). DNA was stained with DAPI for 5 min at room temperature, and after washing in PBS, the slides were mounted with a coverslip and viewed with a Zeiss LSM 510 Meta confocal microscope. Control experiments were performed in the absence of the primary antibodies or with the StarD5 preimmune serum. Immunofluorescence microscopic detection of StarD5 in THP-1 macrophages THP-1 monocytes were differentiated to macrophages by adding 100 nM phorbol 12-myristate 13-acetate. Macrophages on coverslips were washed with PBS and fixed with PBS/3.7% formaldehyde for 30 min at 4°C and then rinsed three times with PBS alone at room temperature. They were then permeabilized in PBS containing 0.2% Triton X-100 and washed with PBS before blocking by incubation with 5% normal goat serum in PBS containing 0.05% Tween-20 for 16 h at 4°C. For interaction with primary antibodies, cells were incubated with 2.5% normal goat serum in PBS/0.05% Tween-20 containing StarD5 antibody (dilution, 1:500) for 45 min at 37°C in an incubator.
For Golgi staining, GM130 antibody was used at a dilution of 1:100 together with StarD5 antibody. Macrophages were washed in PBS/0.05% Tween-20 (3 × 20 min), and the bound primary antibodies were visualized with Alexa Fluor 568 goat anti-rabbit IgG (for StarD5) or Alexa Fluor 488 goat anti-mouse IgG (for GM130). DNA was stained as described above for liver sections. After washing, coverslips containing macrophages were mounted on slides and viewed as described above for liver sections. Control experiments were performed in the absence of the primary antibodies or with the StarD5 preimmune serum. Dispersal of Golgi apparatus in nocodazole-treated THP-1 macrophages THP-1 monocytes were first differentiated to macrophages on coverslips as described above. Nocodazole was dissolved in DMSO and added to the culture medium to a final concentration of 20 mM for 2 h at 37°C. Immunofluorescence detection of StarD5 protein and GM130 for Golgi staining was performed as described above. Fluorescence labeling of THP-1 macrophages with Alexa Fluor 488 transferrin THP-1 monocytes were first differentiated to macrophages as described above. Macrophages on coverslips were labeled with 30 mg/ml Alexa Fluor 488 transferrin for 1 h at 37°C, washed with PBS and fixed with PBS/3.7% formaldehyde for 10 min at 4°C, and then washed with PBS before blocking by incubation with 5% normal goat serum in PBS containing 0.05% Tween-20 for 1 h at room temperature in the dark. Interaction with primary antibodies (against StarD5 and GM130) was performed as described for THP-1 macrophages, and the bound primary antibodies were visualized with Alexa Fluor 568 goat anti-rabbit IgG (for StarD5) or Alexa Fluor 568 goat anti-mouse IgG (for GM130). After washing, coverslips containing macrophages were mounted on slides and viewed as described above for liver sections. Filipin staining of cholesterol in THP-1 macrophages After immunofluorescence staining for StarD5 and Golgi localization (as described above), THP-1 macrophages were stained with 5 mg/ml filipin in PBS plus 0.5% BSA for 30 min at 37°C and washed three times, 5 min each, with PBS while rocking gently at room temperature in the dark. The coverslips containing the cells were taken from the wells and mounted onto glass slides. The cells were allowed to dry for at least 45 min before being placed on a confocal microscope at excitation filter 360/40 nm, emission filter 460/50 nm, and beam splitter 400 nm. Preparation of membrane fractions from THP-1 macrophages THP-1 macrophage fractions were obtained as described previously by Li et al. (19). Briefly, cells were detached by incubating with a trypsin-EDTA solution for 1 min at 37°C. The cells were pelleted by centrifugation at 500 g for 5 min, washed with PBS, recentrifuged, resuspended into 2 ml of a low ionic strength buffer (10 mM Tris-HCl, pH 7.5, 0.5 mM MgCl2, and 1 mM phenylmethylsulfonyl fluoride), and incubated on ice for 15 min. The cells were then homogenized in a Dounce homogenizer. Four hundred microliters of the low ionic strength buffer containing 1.46 M sucrose was added to the homogenates and centrifuged at 10,000 g for 15 min at 4°C. The pellet was stored, and the supernatant (2.4 ml) was loaded onto a sucrose density gradient tube (3.0 ml of 1.1 M sucrose, 2.6 ml of 0.88 M sucrose, and 2.6 ml of 0.58 M sucrose) and centrifuged at 100,000 g for 2 h at 4°C.
This final procedure resulted in visible bands at each of the four interfaces plus a pellet, in which fraction 1 corresponds to the cytosolic fraction and fraction 2 corresponds to the Golgi. Overexpression and immunofluorescence detection of StarD5 in primary human hepatocytes Human hepatocytes were plated at 15-20% of normal density in Williams E medium containing dexamethasone (0.1 mM) on six-well culture plates containing coverslips and incubated at 37°C and 5% CO2. Twenty-four hours after plating, the cells were infected with recombinant adenovirus encoding StarD5. Two hours after infection, medium was removed and replaced with fresh medium. After 24 h, cells were immunostained with StarD5 polyclonal antibody according to the protocol described above for THP-1 macrophages. Control experiments were performed in the absence of the primary antibodies or with the StarD5 preimmune serum. StarD5 is localized within Kupffer cells in the liver Western blot analysis of human liver fractionation has previously shown StarD5 to be cytosolic, as predicted based on its lack of any targeting sequence. Surprisingly, Western blot analysis in primary human hepatocytes and endothelial cell lines (HUVE) failed to detect StarD5 protein (Fig. 1A). Additional Western blot analysis with up to 100 mg of protein from primary human hepatocyte cultures was performed without detection of StarD5 protein (data not shown). Addressing the possibility of hepatocyte dedifferentiation in culture and their loss of StarD5 expression, cells from freshly isolated human hepatocyte suspensions were also analyzed (Fig. 1B). Western blot analysis failed to detect StarD5 protein in these purified hepatocyte cell suspensions (hepatocyte purity of >98%). On the other hand, the same analysis showed the presence of StarD5 protein in nonparenchymal cells (Fig. 1B). To further localize this protein to specific cells within the liver, immunolocalization studies on human liver sections were carried out. Figure 2 shows the presence of StarD5 in cells located next to sinusoids (Fig. 2A). This observation, coupled with the fact that two other major cell types in the liver (hepatocytes and endothelial cells; Fig. 1) do not appear to express StarD5, led us to hypothesize that it might be expressed in Kupffer cells. To confirm this hypothesis, double immunofluorescence confocal microscopy studies were performed on normal human liver sections with polyclonal StarD5 antibody and with the murine antibody to CD68, a marker for Kupffer cells. Immunofluorescence analysis of the human liver sections confirmed the colocalization of StarD5 and CD68 in cells lining the hepatic sinusoids, corresponding to Kupffer cells (Fig. 2B). Immunoreactivity was absent in liver sections without the primary antibody or with the StarD5 preimmune serum (Fig. 2A). StarD5 is expressed in immune/inflammatory-related cells The presence of StarD5 in Kupffer cells, the liver macrophage equivalents, led us to look for StarD5 in monocytes and macrophages. Western blot analysis performed on THP-1 monocytes and THP-1 differentiated macrophages with the StarD5 polyclonal antibody showed the expression of StarD5 protein in both cell types (Fig. 3A). To establish the relevance of THP-1 monocytes and macrophages to the human, primary human monocytes were isolated from a human blood sample according to the protocol described in Materials and Methods. StarD5 protein was detected in primary human monocytes after Western blot analysis using the StarD5 polyclonal antibody.
The fact that StarD5 was detected only in immune-related cells (Kupffer cells, monocytes, and macrophages) led us to analyze other available immunerelated cells. Western blot analysis with the StarD5 polyclonal antibody performed with three different cell lines, promyeloblasts (HL-60), mast cells (HMC-1), and lymphoblasts (KU 812-F), showed the presence of the protein in these three cell types (Fig. 3B). Further Western blot analysis on readily available tissues and cell lines (brain tissue, fibroblasts, osteosarcoma, and astrocytes) did not detect the presence StarD5 (data not shown). StarD5 is localized in cytosol and the Golgi We have shown previously (17) that StarD5 is localized in the cytosol by subcellular fractionation of liver tissue, but the high levels of StarD5 protein observed in THP-1 monocytes/macrophages (Fig. 3A) made these cells more suitable for immunolocalization analysis. Immunofluorescence studies with the StarD5 antibody showed the protein widely distributed in the cytosol of THP-1 macrophages, but with a focal intensity in an area localized next to the cell nucleus (Fig. 4A). Based on the high concentration of the protein in a perinuclear area and the association of increased StarD5 expression with increased cholesterol within the microsomal fraction (17), we hypothesized that StarD5 could localize to the Golgi. To confirm this, we performed double immunofluorescence studies using a mouse IgG against GM130 (a Golgi marker) together with the StarD5 antibody. Figure 4C shows immunofluorescence staining of StarD5 in THP-1 macrophages, which corroborates the presence of the protein not only in the cytosol (Fig. 4A) but in close association with a perinuclear organelle, the Golgi (Fig. 4C). Immunoreactivity was absent in THP-1 macrophages when primary antibody was not added or with the StarD5 preimmune serum (Fig. 4B). THP-1 macrophage cell fractions were then examined by Western blot analysis to determine StarD5 localization. StarD5 protein was detected in the cytosolic fraction ( Fig. 5A) but not in the nuclear, Golgi, plasma membrane, or ER fraction (Fig. 5A, lanes 1, 3, 4, and 6). Microsomal preparation showed StarD5 to be localized to the cytosolic fraction (Fig. 5B). To determine whether StarD5 is associated with the Golgi and not with the perinuclear endocytic recycling compartment (ERC), the Golgi apparatus was dispersed in nocodazole-treated THP-1 macrophages. The addition of nocodazole to the culture medium led to the fragmentation and dispersion of Golgi membranes. StarD5 also underwent dispersion in a pattern consistent with that observed for the Golgi (Fig. 4D). Also, using an ERC marker (Alexa Fluor 488 transferrin), we were able to identify the ERC in THP-1 macrophages as having a perinuclear localization similar to that described for StarD5 and the Golgi. However, the ERC stained more diffusely over a greater area than was observed for StarD5 staining, which exhibited a focal intensity similar to Golgi morphology. It was also observed that the Golgi and StarD5 were at the same focal plane, whereas the ERC was in a different focal plane (data not shown). Furthermore, the cytosolic localization (staining pattern) of StarD5 differed from that of endocytic Alexa Fluor 488 transferrin. Recent findings have suggested the presence of StarD5 in the nucleus, observations that we have been unable to confirm in monocyte/macrophage studies. 
To further explore the possible presence of StarD5 in the nucleus, primary human hepatocytes were infected with a recombinant StarD5 adenovirus. After overexpression of StarD5, anti-StarD5 immunofluorescence showed StarD5 in the expected diffuse localization throughout the cytoplasm of the cells. No nuclear staining of StarD5 was found (data not shown). We reported previously that StarD5 overexpression in primary rat hepatocytes led to an increase of free cholesterol in the microsomes (17). Based upon this result and the immunocytochemistry studies shown above, we hypothesized that increasing amounts of free cholesterol in THP-1 macrophages are localized in association with the Golgi. To confirm this hypothesis, cellular free cholesterol was stained with filipin in THP-1 macrophages previously immunostained for StarD5 and Golgi localization. As shown above, StarD5 was located again in the cytosol, with a concomitant increase of red staining with the Golgi (green) (Fig. 6A, anti-GM130 and anti-StarD5 panels). Filipin staining was diffuse in the cytosol and membranes, but with an area of highly increased fluorescence, indicating the presence of high levels of free cholesterol (Fig. 6A, filipin panel). Figure 6B shows extensive colocalization of StarD5 and Golgi (left panel), free cholesterol and Golgi (center panel), and free cholesterol and StarD5 (right panel). These results support the hypothesis that THP-1 macrophages accumulate free cholesterol in/ next to the Golgi and that StarD5 might play a major role in the distribution of free cholesterol in the cell. DISCUSSION Recent studies on the different proteins of the StarD4 subfamily have reported the first data on localization, regulation, and binding specificities of StarD4, StarD5, and StarD6 (14,15). Although StarD5 seems to be widely expressed, with the liver showing the highest levels of StarD5 mRNA (2,12), we have previously shown that protein levels in the liver are low (17). Immunolocalization of StarD5 protein in human liver sections confirmed previous findings of detection within the liver (17), but only in hepatic sinusoidal lining cells ( Fig. 2A). A double immunofluorescence analysis showed that those sinusoidal lining cells also reacted with a monoclonal antibody to CD68 (Fig. 2B), a specific marker for monocytes/ macrophages and Kupffer cells, demonstrating that StarD5 protein is expressed within Kupffer cells in the liver. Additional analysis showed that no primary human hepatocytes, freshly isolated hepatocytes, or HepG2 cells contained detectable StarD5 protein (Fig. 1). Western blot analysis of another common cell type in the liver, the endothelial cell, also failed to detect StarD5 protein (Fig. 1). Not surprisingly, given the expression in Kupffer cells, Western blot analysis with the StarD5 polyclonal antibody showed expression of the protein in THP-1 monocytes and macrophages and primary human monocytes (Fig. 3A). Interestingly, promyelocytic cells, as well as other, more differentiated inflammatory cell lines (mast cells and basophils), also demonstrated very high StarD5 protein levels (Fig. 3B). We reasoned that StarD5's immunolocalization in macrophages might provide a clue to the role of the protein in cellular cholesterol homeostasis and the mechanism of cholesterol transport. Figure 4A confirms the localization of StarD5 in the cytosol of the cells, but, interestingly, the protein seems to be most highly concentrated in an area next to the nucleus of the cells. 
Further double immunofluorescence studies with StarD5 polyclonal antibody and GM130 antibody (a Golgi marker) confirmed that StarD5 was not only in the cytosol but appears to be intensely associated with the Golgi (Fig. 4B). Further studies to confirm the localization of StarD5 with the Golgi showed that fragmentation and dispersion of the Golgi by disruption of the microtubule structure with nocodazole (20) also causes fragmentation and dispersal of the membranes in which StarD5 is located in the perinuclear region (Fig. 4D). Our data also indicate that the disruption of the microtubule structure causes a similar pattern of dispersal of Golgi membranes and the membranes in which StarD5 is localized, which supports the localization of StarD5 with the Golgi in THP-1 macrophages. When Alexa Fluor 488 transferrin was used for the staining of the ERC, its morphology did not resemble the highly intense perinuclear staining of StarD5. Of additional note is that the endocyte-localized fluorescent transferrin did not colocalize with the cytosolic StarD5. Although these findings are supportive of a role of StarD5 in the nonvesicular movement of cholesterol, it is not yet possible to exclude a possible role for StarD5 in the vesicular movement of cholesterol. Furthermore, although unlikely, the artifactual fixing and staining of soluble StarD5 with Golgi should also be considered. In summary, the combined gradient and immunofluorescence data support a possible mechanism for cholesterol transport between cell membranes and the Golgi similar to that described for other START proteins predicted to be cytosolic (12,13,16), with a loose association of StarD5 with the Golgi membrane. The high levels of free cholesterol and extensive colocalization with the Golgi membranes and StarD5 (Fig. 6B) observed in THP-1 macrophage cells suggest that this organelle accumulates free cholesterol in association with StarD5. The labeling of THP-1 macrophages with a fluorescent human transferrin showed evidence of an ERC, a major sterol storage compartment in mammalian cells that is closely related to the trans-Golgi network (21)(22)(23). These observations led us to assign the densest localization of free cholesterol to the Golgi in THP-1 macrophages. Coupled with the cytosolic localization of StarD5 in subcellular fractions (Fig. 5) and the ability to transfer cholesterol to the microsomes (17), these findings support a mechanism for cholesterol transport to the Golgi, an ability to transport cholesterol like other START proteins predicted to be cytosolic (12,13,16). In contrast to the studies by Alpy and Tomasetto (16), immunolocalization studies of StarD5 did not show any evidence of nuclear localization in either macrophages (Fig. 4A) or primary human hepatocytes (data not shown) infected with a recombinant adenovirus encoding StarD5. The high expression of StarD5 in monocytes/macrophages, and its mRNA induction in free cholesterol-loaded macrophages (15), raises many questions about its possible role in cholesterol homeostasis. Just as interestingly, StarD5 selectively binds 25-hydroxycholesterol, a potent regulatory metabolite of cholesterol. Given the recent observations of this and earlier studies, it is plausible that StarD5 may play a protective role in ER-stressed macrophages. StarD5, which is expressed in macrophages under normal conditions, would be upregulated under ER stress conditions caused by high levels of ER cholesterol (or 25-hydroxycholesterol). 
It might then transport free cholesterol/oxysterol from the ER membranes to the Golgi and thus reduce the accumulation of free cholesterol/ oxysterol in the ER. This mechanism would have important implications for atherogenic plaques in macrophages/ foam cells, possibly aiding in the cell's survival as a response to the ER stress caused by excess ER cholesterol.
2017-04-18T22:43:29.061Z
2006-06-01T00:00:00.000
{ "year": 2006, "sha1": "678ca6587cda7cf524906dd72672415721de9aa0", "oa_license": "CCBY", "oa_url": "http://www.jlr.org/content/47/6/1168.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "678ca6587cda7cf524906dd72672415721de9aa0", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
237479688
pes2o/s2orc
v3-fos-license
Sarilumab monotherapy vs sarilumab and methotrexate combination therapy in patients with rheumatoid arthritis Abstract Objective Sarilumab, as monotherapy or in combination with conventional synthetic DMARDs, such as MTX, has demonstrated improvement in clinical outcomes in patients with RA. The primary objective of this post hoc analysis was to compare the efficacy of sarilumab (200 mg every 2 weeks) monotherapy (MONARCH study) with that of sarilumab and MTX combination therapy (MOBILITY study) at week 24. Methods The endpoints assessed were mean change from baseline in the Clinical Disease Activity Index (CDAI), 28-joint Disease Activity using CRP (DAS28-CRP), CRP, haemoglobin (Hb), pain visual analogue scale (VAS) and Functional Assessment of Chronic Illness Therapy (FACIT)–Fatigue. Least square (LS) mean change from baseline (95% CI) at week 24 for all endpoints was compared between the treatment arms for adjusted comparisons. Results This analysis included 184 patients on sarilumab monotherapy and 399 patients on sarilumab plus MTX. Differences (P < 0.05) were observed in ethnicity, region, body mass index group, rheumatoid factor, anti-cyclic citrullinated peptide antibodies, swollen joint count, CRP, CDAI and oral glucocorticoid use between these treatment groups. After adjusting for these differences in a mixed-effect model repeated measure, LS mean change from baseline for all assessments was similar between the treatment groups with overlapping CIs: CDAI, −28.79 vs −26.21; DAS28-CRP, −2.95 vs −2.81; CRP, −18.31 vs −16.46; Hb, 6.59 vs 8.09; Pain VAS, −33.62 vs −31.66; FACIT-Fatigue, 9.90 vs 10.24. Conclusion This analysis demonstrated that the efficacy of sarilumab monotherapy was similar to that of sarilumab and MTX combination therapy. Introduction Treatment guidelines recommend combining biologic and targeted synthetic DMARDs (tsDMARDs) with conventional synthetic DMARDs (csDMARDs), which primarily consists of MTX [1,2]. A recent real-world study in patients with RA reported suboptimal adherence to MTX citing adverse events as the main reason, which ultimately resulted in poor persistence of MTX [3]. Another systematic review also showed high variability in MTX adherence and persistence in patients with RA [4]. Therefore, there is a need for alternative treatment strategies in patients who are non-adherent to MTX. IL-6 plays a predominant role in the pathogenesis of RA by regulating a diverse range of activities that drive chronic inflammation associated with RA. IL-6 also mediates various activities that underlie both local and systemic clinical symptoms of RA via cell signalling modulated by membrane-bound and soluble forms of its receptor [5,6]. IL-6 receptor inhibitors, namely tocilizumab and sarilumab, have shown improvement in clinical outcomes in clinical studies and are approved for use as combination with csDMARDs or as monotherapy in patients with RA [5,7,8]. Recent EULAR guidelines recommend that IL-6 pathway inhibitors and tsDMARDs may have some advantages compared with other, biologic DMARDs (bDMARDs) in patients who cannot use csDMARDs as comedication [2]. Sarilumab, an IL-6 receptor-a (IL-6Ra) inhibitor, is a fully human monoclonal antibody which binds soluble and membrane-bound IL-6Ra to inhibit IL-6-mediated signalling [9][10][11]. 
In the MONARCH and MOBILITY trials, sarilumab as monotherapy and in combination with MTX, respectively, has demonstrated symptomatic and functional improvements in RA patients with inadequate responses/intolerance to MTX (MTX-IR/INT) [12,13]. There are no studies that have directly compared the efficacy of sarilumab monotherapy with that of its combination with MTX. In this post hoc analysis, we compared the efficacy of sarilumab monotherapy with sarilumab in combination with MTX using mixed-effect model repeated measure (MMRM) models. Patients and study design This post hoc analysis was performed using data from the MONARCH (NCT02332590 [14]) and MOBILITY (NCT01061736 [15]) phase III trials of sarilumab in patients with active RA. Details of the study design, patient population and outcomes of these trials have been published previously [12,13]. In the MONARCH trial, MTX-IR/INT patients with RA (enrolled based on the 2010 ACR/EULAR criteria) were randomized to receive subcutaneous (s.c.) sarilumab 200 mg every 2 weeks (q2w) or adalimumab 40 mg q2w in combination with placebo for 24 weeks [12]. In the MOBILITY trial, MTX-IR patients with RA (enrolled based on 1987 ACR revised classification criteria) were randomized to receive s.c. sarilumab 150 mg or 200 mg q2w or placebo in combination with weekly MTX for 52 weeks [13]. Detailed inclusion and exclusion criteria for both the trials were published previously [12,[14][15][16]. The present post hoc analysis is based on the data collected from MONARCH and MOBILITY studies. Both MONARCH and MOBILITY studies were performed in accordance with the Declaration of Helsinki and the protocols for both the studies were approved by the appropriate ethics committees/institutional review boards for the respective studies and patients gave written consent before participation [12,13,17]. Treatment arms This analysis included all patients who received sarilumab 200 mg q2w in the MONARCH and MOBILITY trials, based on treatment assigned. In the MOBILITY trial, patients received a stable dose of MTX (10-25 mg/week) for a minimum of 6 weeks prior to the screening visit, except patients within the Asia-Pacific region (Taiwan, South Korea, Malaysia, Philippines, Thailand and India) who were allowed to use a stable dose of MTX between 6 and 25 mg/week for a minimum of 6 weeks prior to the screening visit. Patients were to continue the stable dose of MTX for the duration of the study [16]. Statistical analysis For adjusted comparisons, continuous changes in endpoints from baseline were set as dependent variables and patient baseline characteristics that differed (P < 0.05) between the two trials were set as covariates in MMRM models; least squares (LS) mean change in endpoints from baseline (95% CI) at week 24 was compared between the treatment arms. Patients with nonmissing endpoint values were considered for these comparisons. For unadjusted comparisons of efficacy between monotherapy and combination therapy treatment arms, mean change in endpoints from baseline (95% CI) at week 24 was compared between the treatment arms. Responder analysis was performed using both ITT (patients with missing data imputed as non-responders) and OC (patients with missing data excluded) populations.
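To illustrate the kind of adjusted comparison described above, the following is a minimal Python sketch, not the authors' analysis code: the pooled dataset, column names and model structure are assumptions, and a random-intercept mixed model is used only as a stand-in for a full MMRM with unstructured within-patient covariance.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled patient-level dataset, one row per patient per visit.
df = pd.read_csv("pooled_monarch_mobility.csv")

# Covariates mirror the baseline characteristics reported to differ (P < 0.05)
# between the two study populations.
formula = (
    "cdai_change ~ C(arm) * C(visit) + C(ethnicity) + C(region) + C(bmi_group)"
    " + rf_positive + accp_positive + swollen_joint_count + baseline_crp"
    " + baseline_cdai + oral_glucocorticoid"
)

# Random intercept per patient to model within-patient correlation.
model = smf.mixedlm(formula, data=df, groups=df["patient_id"])
result = model.fit(reml=True)
print(result.summary())

# The adjusted week-24 between-arm difference (an LS-mean-style contrast) is then
# read off the arm term and the arm-by-visit interaction at the week-24 visit.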
Patient baseline characteristics This analysis included 184 patients in the sarilumab 200 mg q2w monotherapy arm from MONARCH and 399 patients in the sarilumab 200 mg q2w plus MTX combination therapy arm from MOBILITY. Baseline demographic and disease characteristics for patients included in both trials are shown in Table 1. Comparing the baseline characteristics of patients in these two trials, differences (P < 0.05) were observed in ethnicity, region, body mass index group, rheumatoid factor, anti-cyclic citrullinated peptide antibodies, swollen joint count, CRP, CDAI and oral glucocorticoid use between the treatment arms and were selected to be included in the MMRMs (Table 1). Details on the regional distribution of the study patients are provided in Supplementary Data S1, available at Rheumatology online. Efficacy assessments After adjusting for the selected baseline characteristics in MMRM, LS mean change from baseline at week 24 for all assessments was similar between the treatment arms with overlapping CIs (Fig. 1). Results of unadjusted comparisons were similar to adjusted comparisons (data not shown). Responder analysis At week 24, there were no discernible differences in the percentage of responders, for all outcomes between the treatment arms. In the ITT population, there were 42% responders in the monotherapy arm vs 43% responders in the combination treatment arm for CDAI LDA; 52% vs Fig. S1, available at Rheumatology online). A similar trend was observed in responder analyses based on OC ( Supplementary Fig. S1, available at Rheumatology online). Safety The safety profile of sarilumab has been previously reported, [12,13] and was not part of this analysis. Discussion After 24 weeks of treatment with sarilumab, both monotherapy and combination therapy showed greater clinical improvement in MTX-IR/INT patients with RA in the respective clinical trials. This post hoc analysis showed that for all efficacy assessments, no differences were observed between monotherapy and combination therapy treatment arms suggesting similar effectiveness of these therapies in patients with RA. Results of the current analysis are in line with the previous findings observed with another IL-6 receptor inhibitor, tocilizumab [18,19]. In a study that compared two different tocilizumab-based treatment strategies in patients with active RA (ACT-RAY), no clinically relevant superiority was demonstrated with MTX plus tocilizumab add-on strategy compared with tocilizumab monotherapy [18]. Another study that compared tocilizumab monotherapy with its combination with DMARDs in patients with RA and inadequate responses to previous treatments also showed that the monotherapy and combination therapy were similarly effective [19]. However, a recent study reported that TNF inhibitors require comedication with csDMARDs to achieve optimal clinical efficacy [20,21]. The results of this analysis suggest that sarilumab monotherapy may be a valuable treatment strategy when monotherapy with bDMARDs is recommended in certain patients with RA, specifically those who are [2]. This analysis provides preliminary evidence on similar effectiveness of sarilumab vs its combination with MTX, which might help rheumatologists in making informed treatment decisions, particularly, in MTX-IR/INT patients. One limitation of this analysis is that the data analysed were obtained from two different study populations. To overcome these differences, adjusted comparisons were made between the treatment arms. 
Difference in the eligibility criteria did not allow the analyses to be adjusted for prior medication including comparison of the background MTX treatment between monotherapy and combination therapy treatment arms. Another limitation is that radiographic data were not obtained during the MONARCH study due to which it was not possible to account for potential differences in radiographic damage at baseline in this analysis. Moreover, the data included in this analysis were from a relatively short duration (24 weeks), which may not be sufficient to derive longerterm conclusions. Conclusion This post hoc analysis in patients with RA, based on the aggregate data from two clinical studies, demonstrated similar efficacy of sarilumab when administered as either monotherapy or in combination with MTX. These data suggest that sarilumab monotherapy may be considered as a potential treatment alternative for patients in whom combination therapy with MTX is not appropriate.
2021-09-12T06:16:32.297Z
2021-09-11T00:00:00.000
{ "year": 2021, "sha1": "f791beb718dd96c1e83b9635264f3e7acb8e8d9c", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/rheumatology/advance-article-pdf/doi/10.1093/rheumatology/keab676/40434877/keab676.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c141f4d591f92b12017f86811c40026a0a497ac6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
61990135
pes2o/s2orc
v3-fos-license
Cloud computing is fundamentally changing expectations about how and when computing, storage and networking resources should be allocated, managed and consumed. End users are increasingly sensitive to the response time of the services they consume. Service developers expect service providers to be able to allocate and manage resources dynamically, in response to changing demand patterns, in real time. Ultimately, service providers are under pressure to architect their infrastructure for real-time, end-to-end visibility and dynamic resource management with fine-grained control, in order to reduce total cost of ownership and improve agility. What is needed is a rethinking of the underlying operating system and management infrastructure to accommodate the ongoing transformation of the data centre from the traditional server-centric architecture to a cloud or network-centric model. This paper proposes and describes a reference model for a network-centric data centre infrastructure management stack that leverages and validates key concepts that have enabled dynamism, scalability, reliability and security in the telecommunications industry, applied here to computing. Finally, the paper describes a proof-of-concept system that was implemented to demonstrate how dynamic resource management can be enforced to enable real-time service assurance in a network-centric data centre architecture. INTRODUCTION The unpredictable demands of the Web 2.0 era, combined with the desire to make better use of IT resources, are driving the need for a more dynamic IT infrastructure that can respond to rapidly changing requirements in real time. This need for real-time dynamism is about to fundamentally alter the data centre landscape and the IT infrastructure as we know it. In the cloud computing era, the computer can no longer be thought of in terms of the physical enclosure, i.e. the server or box that houses the processor, memory, storage and related components that constitute the computer. Instead, the "computer" in the cloud ideally comprises a pool of physical computing resources, i.e. processors, memory, network bandwidth and storage, potentially distributed physically across server and geographical boundaries, which can be organized on demand into a dynamic logical entity, i.e.
a "cloud computer", that can develop or shrink in real-time in order to promise the desired levels of potential sensitivity, performance, scalability, consistency and safety to any application that runs in it.What is really supporting this alteration today is virtualization technology, more precisely hardware assisted server virtualization.At an ultimate level, virtualization technology allows the abstraction or decoupling of the request payload from the original physical resource.What this typically means is that the physical resources can then be carved up into logical or virtual resources as needed.This is acknowledged as provisioning.By introducing a suitable management infrastructure on top of this virtualization functionality, the provisioning of these logical resources could be made dynamic i.e.the logical resource could be made bigger or smaller in accordance with demand.This is known as dynamic provisioning.To enable a true "cloud" computer, each single computing element or resource should be proficient of being enthusiastically provisioned and succeeded in real-time.Currently, there are numerous holes and areas for development in today's data centre infrastructure before we can attain the above vision of a cloud computer. A. Server useful systems and virtualization Whereas networks and storage resources appreciates to advances in network facility management and SANs have already been proficient of being virtualized for a while, only now with the broader acceptance of server virtualization, do we have the complete basic foundation for cloud computing i.e. all computing properties can now be virtualized.Subsequently, server virtualization is the catalyst that is now motivating the transformation of the IT infrastructure from the traditional server-centric computing architecture to a network centric cloud computing architecture.When server virtualization is done, we have the capability to generate whole logical (virtual) servers that are free of the fundamental physical infrastructure or their physical position.We can postulate the computing, network and storage resources for all logical server (virtual instrument) and even transfer workloads from one virtual machine to another in real time (live migration). 
All of this has contributed greatly to transforming the cost structure and efficiency of the data centre. Despite the many benefits that virtualization has enabled, we have yet to realize its full potential with respect to cloud computing. This is because: 1) Traditional server-centric operating systems were not designed to manage shared distributed resources: The cloud computing paradigm is about optimally sharing a set of distributed computing resources, whereas the server-centric computing paradigm is about dedicating resources to a specific application. The server-centric model of computing fundamentally ties the application to the server; the job of the server operating system is to dedicate, and guarantee availability of, all computing resources on that server to the application. If another application is installed on the same server, the operating system again manages the entire set of server resources so that each application behaves as if it had access to all available resources on that server. This model was not designed to allow the "dial-up" or "dial-down" of resources allocated to an application in response to changing workload demands or business priorities; this is why load balancing and clustering were introduced. 2) Current hypervisors do not provide adequate separation between application management and physical resource management: Today's hypervisors have simply interposed themselves one level below the operating system to enable multiple "virtual" servers to be hosted on one physical server. While this is great for consolidation, there is again no way for applications to manage how, what and when resources are assigned to them without having to worry about managing the physical resources. It is our observation that the current generation of hypervisors, which were also born in the era of server-centric computing, does not decouple hardware management from application management any more than the server operating systems themselves do. 3) Server virtualization does not yet enable sharing of distributed resources: Server virtualization currently allows a single physical server to be carved into multiple logical servers. However, there is no way, for example, to create a logical or virtual server from resources that may be physically located in separate servers. It is true that, by virtue of the live migration capabilities that server virtualization enables, we can move application workloads from one physical server to another, possibly even a geographically distant one. However, moving is not the same as sharing. It is our contention that, to enable a truly distributed cloud computer, we must be able to share resources efficiently, no matter where they reside, based purely on the latency constraints of the applications or services that consume them. B.
Storage networks and virtualization Even before the advent of server virtualization, storage networking and storage virtualization enabled many improvements in the data centre. The key improvement was the introduction of the Fibre Channel (FC) protocol and Fibre Channel-based Storage Area Networks (SANs), which delivered high-speed storage connectivity and dedicated storage solutions, enabling benefits such as server-less backup, point-to-point replication, HA/DR and performance optimization outside of the servers that run applications. However, these benefits have come with increased management complexity and cost. C. Network virtualization Virtual networks are now implemented inside the physical server to switch between the virtual servers, providing an alternative to the multiplexed, multi-pathed network channels by trunking them directly to WAN transport, thus simplifying the physical network infrastructure. D. Application creation and packaging The current method of using virtual machine images that contain the application, operating system and storage disk images is once again born of a server-centric computing model and does not lend itself to provisioning across shared distributed resources. In a cloud computing paradigm, applications should ideally be built as a collection of services which can be composed, decomposed and distributed on the fly. Each of the services could be considered an individual process of a larger workflow that constitutes the application. In this way, individual services can be orchestrated and provisioned to optimize the overall performance and latency requirements of the application. II. PROPOSED REFERENCE ARCHITECTURE MODEL If we distil the observations above, a couple of key themes emerge: A. The next-generation architecture for cloud computing must completely decouple physical resource management from virtual resource management; and B. It must provide the capability to mediate between applications and resources in real time. As we stressed in the earlier section, we have yet to achieve a perfect decoupling of physical resource management from virtual resource management, but the introduction and growing adoption of hardware-assisted virtualization (HAV) is an important and necessary step towards this objective. Thanks to HAV, a next-generation hypervisor will be able to manage and truly guarantee the same level of access to the underlying physical resources. Moreover, this hypervisor should be capable of managing both the resources located locally within a server and any resources in other servers that may be located elsewhere physically and connected by a network. Once the management of physical resources is decoupled from virtual resource management, the need for a mediation layer that arbitrates the allocation of resources between various applications and the shared distributed physical resources becomes obvious. III. INFRASTRUCTURE SERVICE FABRIC This layer comprises two components; together they provide a computing resource "dial-tone" that forms the basis for provisioning resources fairly to all applications in the cloud. A.
A. Distributed services mediation This is a FCAPS-based (Fault, Configuration, Accounting, Performance and Security) abstraction layer that enables autonomous self-management of every individual resource in a network of resources that may be distributed geographically. B. Virtual resource mediation layer This provides the ability to create logical virtual servers with a level of service guarantee that assures resources such as number of CPUs, memory, bandwidth, latency, IOPS (I/O operations per second), storage throughput and capacity. C. Distributed Services Assurance Platform This layer allows for the creation of FCAPS-managed virtual servers that load and host the desired choice of OS to allow the loading and execution of applications. Since the virtual servers implement FCAPS management, they can provide automated mediation services natively to guarantee fault management and reliability (HA/DR), performance optimization, accounting and security. This describes the management dial-tone in our reference architecture model. D. Distributed Services Delivery Platform This is essentially a workflow engine that executes the application which, as we described in the previous section, is ideally composed as a business workflow that orchestrates a number of distributable workflow elements. This describes the services dial-tone in our reference architecture model. E. Distributed Services Creation Platform This layer provides the tools that developers will use to create applications defined as collections of services which can be composed, decomposed and distributed on the fly to virtual servers that are automatically created and managed by the distributed services assurance platform. F. Legacy Integration Services Mediation This is a layer that provides integration and support for existing or legacy applications in our reference architecture model. IV. DEPLOYMENT OF THE REFERENCE MODEL Any generic cloud service platform must address the needs of four categories of stakeholders: 1) infrastructure providers, 2) service providers, 3) service developers, and 4) end users. Below we explain how the reference model described here will affect, benefit and be deployed by each of these stakeholders. A. Infrastructure providers These are vendors who provide the underlying computing, network and storage resources that can be carved up into logical cloud computers which will be dynamically controlled to deliver massively scalable and globally interoperable service network infrastructure. The infrastructure will be used both by service creators who develop the services and by the end users who consume these services. B. Service providers With the deployment of our new reference architecture, service providers will be able to assure both service developers and service users that resources will be available on demand. They will be able effectively to measure and meter resource utilization end-to-end so as to offer a dial-tone for computing service, while managing Service Levels to meet the availability, performance and security needs of each service. The service provider will now manage the application's connection to computing, network and storage resources with appropriate SLAs.
C. Service developers They will be able to develop cloud-based services using the management services API to configure, monitor and manage service resource allocation, availability, utilization, performance and security of their applications in real time. Service management and service delivery will now be integrated into application development, allowing application developers to specify run-time SLAs. D. End users Their demand for choice, mobility and interactivity with responsive user interfaces will continue to rise. The managed resources in our reference architecture will not only allow service developers to create and deliver services using logical servers that end users can dynamically provision in real time to respond to changing needs, but also give service providers the ability to charge the end user by metering exact resource usage for the required SLA. V. CONCLUSIONS In this paper, we have described the requirements for implementing a truly dynamic cloud computing infrastructure which consists of a pool of physical computing resources, i.e. processors, memory, network bandwidth and storage, potentially distributed physically across server and geographical boundaries, which can be organized on demand into a dynamic logical entity, i.e. a "cloud computer", that can grow or shrink in real time in order to assure the desired levels of latency, responsiveness, performance, scalability, reliability and security to any application that runs in it. We identified some key areas of deficiency in current virtualization and management technologies. In particular, we detailed the importance of separating physical resource management from virtual resource management and explained why current operating systems were not designed for, and hence are not suited to, providing this capability for the distributed shared resources typical of a cloud deployment. We also highlighted the need for FCAPS-based (Fault, Configuration, Accounting, Performance and Security) service "mediation" to provide global management functionality for all networked physical resources that comprise a cloud, irrespective of their distribution across many physical servers in different geographical locations. We then proposed a reference architecture model for a distributed cloud computing mediation (management) platform which will form the foundation for enabling next generation cloud computing infrastructure. We showed how this infrastructure will affect as well as benefit key stakeholders such as infrastructure providers, service providers, service developers and end users. The vision described in this paper is considerably different from most current cloud computing solutions, which are little more than hosted infrastructure or applications accessed over the Internet. The proposed architecture will significantly change the current landscape by enabling cloud computing service providers to offer a next generation infrastructure platform which will give service developers and end users unprecedented control and dynamism in real time to assure SLAs for service latency, availability, performance and security. Figure 2 Reference Architecture Model for Next Generation Cloud Computing Infrastructure
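The decoupling argued for above can be made concrete with a small sketch. The following Python fragment is only an illustration of the idea behind a virtual resource mediation layer: a logical server is described by an SLA (here just CPUs and memory) and is assembled from capacity that may live on several physical hosts. All class names, host names and numbers are hypothetical; no real hypervisor or FCAPS API is involved.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalHost:
    """Raw capacity available on one physical server (illustrative values)."""
    name: str
    free_cpus: int
    free_mem_gb: int

@dataclass
class LogicalServer:
    """A virtual server assembled from slices of one or more physical hosts."""
    sla: dict
    slices: dict = field(default_factory=dict)   # host name -> (cpus, mem_gb)

class ResourceMediator:
    """Toy mediation layer: decouples the SLA an application asks for
    from the physical hosts that actually supply the capacity."""
    def __init__(self, hosts):
        self.hosts = hosts

    def provision(self, cpus, mem_gb):
        server = LogicalServer(sla={"cpus": cpus, "mem_gb": mem_gb})
        need_cpu, need_mem = cpus, mem_gb
        for host in self.hosts:                   # greedily draw from any host with spare capacity
            if need_cpu == 0 and need_mem == 0:
                break
            take_cpu = min(host.free_cpus, need_cpu)
            take_mem = min(host.free_mem_gb, need_mem)
            if take_cpu or take_mem:
                host.free_cpus -= take_cpu
                host.free_mem_gb -= take_mem
                server.slices[host.name] = (take_cpu, take_mem)
                need_cpu -= take_cpu
                need_mem -= take_mem
        if need_cpu or need_mem:
            raise RuntimeError("SLA cannot be met with the available distributed capacity")
        return server

hosts = [PhysicalHost("rack1-a", 4, 16), PhysicalHost("rack2-b", 8, 32)]
mediator = ResourceMediator(hosts)
vm = mediator.provision(cpus=10, mem_gb=40)       # logical server spans both racks
print(vm.slices)
```

A production mediation layer of the kind proposed above would additionally track bandwidth, latency, IOPS and storage, and would support dialling an allocation up or down while the application is running.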
2014-10-01T00:00:00.000Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "f188c824af7b9ab790f7fc4dbea45ea59281d569", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume3No5/Paper_26-Growing_Cloud_Computing_Efficiency.pdf", "oa_status": "HYBRID", "pdf_src": "CiteSeerX", "pdf_hash": "98d9900afda95f0a3cd75086778e75c60708eae7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
56350557
pes2o/s2orc
v3-fos-license
A clinical study on maternal and fetal outcome in pregnancy with oligohydramnios INTRODUCTION Amniotic fluid provides a protective shield for the growing fetus, cushioning the fetus against mechanical and biological injury, supplying nutrients and facilitating growth and movement of the fetus. Early in the developmental period the fetus is enclosed by the amnion and surrounded by amniotic fluid, which is similar to extracellular fluid and is an indicator of fetal status; this has made amniotic fluid volume assessment an important part of antenatal fetal surveillance. The quantity of amniotic fluid changes with the period of gestation. Oligohydramnios refers to an amniotic fluid volume that is less than expected for gestational age. It is typically diagnosed by ultrasound examination and may be described qualitatively (e.g., normal, reduced) or quantitatively (e.g., amniotic fluid index [AFI] ≤5). 1 Oligohydramnios occurs in about 1-5% of pregnancies at term. 2 In pregnancies of more than 40 weeks of gestation, the incidence may be more than 12%, as the amniotic fluid volume declines progressively after 41 weeks of gestation. 3 Oligohydramnios results from medical or obstetrical complications related to maternal, placental or fetal causes, or is idiopathic. Both abnormal increase and decrease in amniotic fluid volume have been associated with maternal as well as fetal morbidity and mortality. With the advent of real-time USG, better identification can be done using the AFI method described by Phelan et al, where the four-quadrant technique is employed during TAS. 1 Oligohydramnios is associated with congenital fetal anomalies, uteroplacental insufficiency, premature rupture of membranes, postdatism, abruptio placentae and hypertensive disorders in pregnancy. It is found to be associated with a high incidence of maternal and fetal morbidity and mortality. During labour the predominant mechanical function of amniotic fluid is to provide an aquatic cushion for the umbilical cord.
Without this cushion, compression of the cord between the fetus and the uterine wall may occur during contractions or fetal movement; this cord compression causes fetal distress, which is associated with low APGAR scores and acidosis at birth, meconium staining, caesarean section and operative vaginal delivery. Early detection of oligohydramnios and its management may help to reduce perinatal morbidity and mortality on the one hand and decrease the rate of caesarean deliveries on the other. The objectives of the present study were to observe the effect of oligohydramnios on maternal outcome in the form of operative delivery and progress of labour, and to study the effect of oligohydramnios on fetal outcome in the form of fetal compromise, i.e. FGR, fetal distress, altered APGAR score, need for NICU admission, congenital anomaly and perinatal death. METHODS This study was conducted in the department of Obstetrics and Gynecology of Mahatma Gandhi Medical College and Hospital, Jaipur, between Nov 2017 and June 2018. 50 patients at ≥28 weeks POG with oligohydramnios, confirmed by ultrasonographic measurement of AFI using the four-quadrant technique, were selected randomly after fulfilling the inclusion and exclusion criteria. Exclusion criteria: patients with premature rupture of membranes, and multiple pregnancy. A detailed history and examination were done, and oligohydramnios was confirmed by AFI. Routine management in the form of rest, left lateral position, oral and intravenous hydration and control of the etiological factor, if present, was done. Fetal surveillance was done by USG, modified biophysical profile and Doppler. The decision to deliver by either induction or elective or emergency LSCS was made as required. Some patients were already in labour and others were allowed to go into spontaneous labour. RESULTS Out of 50 patients, 52% were in the 20-25 years age group and 36% in the 26-30 years age group, with the remaining 8% and 4% in the 30-35 years and <20 years age groups respectively. Thus, most patients were in the 20-30 years age group. The caesarean section rate was highest in the 20-25 years age group and lowest in the <20 years age group. The incidence of oligohydramnios was highest in primigravida (56%) followed by 44% in multigravida in the present study, and operative morbidity was also highest in primigravida (53.57%) followed by 36.36% in multigravida. The most common cause of oligohydramnios was idiopathic (64%); the second commonest cause was hypertensive disorder in pregnancy (20%), and operative morbidity was highest in hypertensive disorder in pregnancy (60%). Figure 1: Associated condition and maternal outcome in oligohydramnios. Out of 50 patients, 38 (76%) had a reactive non-stress test, of whom 34.21% had LSCS. Figure 2: Non-stress test in oligohydramnios. Operative intervention was significantly higher in the NST non-reactive group (24%), in which 83.3% of patients had LSCS; the incidence of normal vaginal delivery was 65.78% in NST-reactive patients. All patients underwent Doppler study; 14% of patients were found to have placental insufficiency, and among them operative intervention was significantly higher (85.71%). 86% of patients had a normal Doppler study, of whom 39.53% underwent LSCS. DISCUSSION In the present study the incidence of oligohydramnios in primigravida was 56%, which is comparable to the study done by Donald et al, in which it was 60%; it was 52% in the study conducted by Jagatia K et al. 4,5 Various studies report different rates of LSCS in pregnant women with amniotic fluid index <5 cm.
The LSCS was done in 46% of cases in the present study, which is compared with other studies as follows. A study by Casey B et al found an increased rate (32%) of caesarean section in oligohydramnios cases. 6 Golan et al found that caesarean section was performed in 35.2% of cases of oligohydramnios. 7 Bansal D et al found that 46% of cases of oligohydramnios underwent caesarean section; these studies are comparable to the present study. 8 The most common cause of oligohydramnios reported by Jagatia K et al was idiopathic, followed by hypertensive disorder in pregnancy, which is comparable to the present study. In the present study the association of hypertensive disorder was 20%, which is comparable to the study by Sriya et al, in which its incidence was 31%. 5,9 8% of cases had postdated pregnancy in the present study; Marks and Divon reported oligohydramnios in 11.5% of 551 pregnancies at 41 weeks or greater. 10 A study done by Bhat et al at Bharati Vidyapeeth Deemed University Medical College and Hospital at Sangli, including 100 patients in the third trimester of pregnancy with oligohydramnios, found that the most common cause of oligohydramnios was idiopathic followed by PIH. 11 In the study by Kumar et al, 9,12,13 in the present study it was 24%. Operative morbidity was significantly higher in patients with an altered Doppler study; in Weiss et al and Young HK et al it was 71% and 69.7% respectively, which is comparable to this study, in which it was 83.3%. 14,15 In the present study, we had intrapartum complications in the form of fetal distress (23%), which is comparable to the study done by Casey et al (32%); the most common reason to perform caesarean was fetal distress, which was due either to cord compression or to FGR. 6 The results are comparable with the Casey et al study, which showed a significantly higher proportion of LSCS due to fetal distress. There is a high rate of operative delivery (instrumental + LSCS) in all the studies, which is comparable to the present study. Thus, oligohydramnios is associated with increased operative delivery and therefore increased maternal morbidity. In Julie Johnson et al, [16] 92.6% of babies were AGA and 7% were SGA. In Brian M Casey et al [6] CONCLUSION Oligohydramnios is a frequent occurrence and demands intensive fetal surveillance and proper antepartum and intrapartum care. Oligohydramnios is a frequent finding in high-risk pregnancies such as FGR, hypertensive disorder in pregnancy, and pregnancy beyond 40 weeks of gestation. AFI is a predictor of fetal tolerance in labour, and a decrease in AFI is associated with an increased risk of abnormal fetal heart rate and meconium-stained liquor. Owing to intrapartum complications the rate of caesarean section is rising, but the decision between vaginal delivery and caesarean section should be well balanced so that unnecessary maternal morbidity is prevented and, on the other hand, timely intervention can reduce perinatal morbidity and mortality.
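The diagnostic criterion applied throughout this study (the four-quadrant AFI, with oligohydramnios defined as AFI ≤5 cm) is simple enough to express as a short sketch. The Python fragment below is a hypothetical illustration only; the pocket depths are made-up example values, not measurements from the study.

```python
def amniotic_fluid_index(pockets_cm):
    """Four-quadrant AFI: sum of the deepest vertical pocket (cm) in each uterine quadrant."""
    if len(pockets_cm) != 4:
        raise ValueError("expected one pocket depth per uterine quadrant")
    return sum(pockets_cm)

def classify_afi(afi_cm, cutoff_cm=5.0):
    """Label the AFI using the cutoff applied in this study (AFI <= 5 cm = oligohydramnios)."""
    return "oligohydramnios" if afi_cm <= cutoff_cm else "normal"

# Example: hypothetical pocket depths measured during transabdominal sonography
pockets = [1.5, 1.0, 0.8, 1.2]
afi = amniotic_fluid_index(pockets)
print(afi, classify_afi(afi))   # 4.5 oligohydramnios
```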
2019-03-18T14:04:00.579Z
2018-10-25T00:00:00.000
{ "year": 2018, "sha1": "7710cb39a703b8a4830147bd1e4bc5a3d3ef9441", "oa_license": null, "oa_url": "https://www.ijrcog.org/index.php/ijrcog/article/download/5624/4027", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ba52a9d2b8c8510f83e6af452ea9d54ade940aed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55204690
pes2o/s2orc
v3-fos-license
Assessment of culturable airborne fungi in a university campus in Hangzhou , southeast China Airborne fungi, which may adversely affect human health through allergy, infection, and toxicity, are being proposed as a cause of adverse health effects. In this study, a systematic survey on culturable airborne fungi was conducted for one year in four areas of the university campus in Hangzhou, southeast of China: living area (LA), dining area (DA), teaching area (TA), and office area (OA). Results showed that the mean concentration of culturable fungal was about 1104 CFU/m 3 . Pencillium was the most common fungal group, and contributed to more than 36% of the total fungal concentration, followed by Cladosporium, Alternaria, and Asperigillus isolates. Fungal concentration in DA was highest (1271 CFU/m 3 ), while the concentration in OA was lowest (791 CFU/m 3 ). The seasonal variation pattern of fungal concentration in a year was significant in LA, DA and TA, where the concentrations were higher in spring and summer, and lower in autumn and winter (**P<0.01). However, there were no significant differences in fungal concentration in OA. With regarding to the size distribution of airborne fungi, there were no significance among different areas and seasons, all appeared as normal logarithm distribution. Moreover, airborne fungi was mostly collected in stage 3 (3.0 to 6.0 μm), stage 4 (2.0 to 3.5 μm) and stage 5 (1.0 to 2.0 μm), and totally composing 83.55% of the population in LA, 83.96% in DA, 85.14% in TA, and 79.39% in OA. It suggested that a more efficient monitoring network and forewarning system of airborne microorganisms in the public areas should be gradually constructed in China. INTRODUCTION In 1998, the Chinese government proposed expand university enrollment of professional and specialized graduates and developed world class universities, and the enrollment in higher education increased drastically from 1.6 to 3.82 million between 1999 and 2003.China had 6.3 million students to graduate from college or university in 2010 (Liu et al., 2000).Meanwhile, the infrastructure construction at the university or college could not keep up with the expanding program very well, and the campus of many universities became relatively crowded.As a special public place, attentions should be focused on developing and applying disease control and prevention policies in these universities.However, problems of public health in universities were not concerned enough in China.Centers for Disease Control and Prevention (CDCs) have developed a systematic monitoring network of public health only in primary and middle schools.In fact, activities of disease prevention and control (infectious diseases, food-borne pathogens and other microbial infections), health promotion, environmental health should be also designed to improve the health of students in universities.Environmental problems, especially the air quality in the crowed places in universities, should be significantly concerned of, since outbreaks of many epidemic diseases are correlated with microorganisms in the air (for example, influenza A pandemic (H1N1) 2009 were frequently reported in universities in 2009). 
Fungi are ubiquitous in the atmosphere, and often constitute the main biological component of the air.They are considered to be closely related with air pollution and human health.Exposure to bio-aerosols containing airborne microorganisms and their by-products can result in respiratory disorders and other adverse health effects such as infections, hypersensitivity pneumonitis and toxic reactions (Harrison et al., 1992;Hargreaves et al., 2003;Fracchia et al., 2006).As a whole, more than 80 genera of fungi were associated with symptoms of respiratory tract allergies (Horner at al., 1995), and over 100 species of fungi were involved with serious human and animal infections, while many other species caused serious plant diseases (Cvetnic and Pepelnjak, 1997).It was reported that Cladosporium, Alternaria, Penicillium, Aspergillus in the atmosphere were the most common fungi (Shelton et al., 2002;Adhikari et al., 2004), and their concentrations differed from place to place because of local environmental variables, fungal substrates, and human activities (Shelton et al., 2002).The potential of health risk caused by exposure to airborne fungi could occur in workplaces and residential spaces at any time.The concern about adverse health effects from bioaerosol inhalation has led to consideration of permissible exposure limits for fungi.However, currently there are seldom specific standards or directives or other exposure limits for fungi. As an important indicator, airborne fungi investigation in university campus was considered to be necessary for their impacts on the human health.Many studies were carried out about the fungal community in outdoor, indoor and even underground environments (Bogomolova and Kirtsideli, 2009).However, little is known about the characteristics of airborne fungi in southeast of China, including in the campus of universities.Hangzhou is one of the most suitable to live cities in China, and generally speaking, its environment is pretty well.We chose a university in Hangzhou to survey on both concentration distribution and community of airborne fungi, to evaluate whether it is indispensable to construct monitoring network and forewarning system systematically and extensively in those areas. 
Sampling sites Hangzhou is the capital and largest city of Zhejiang Province in the southeast of China, with a registered population of 8.7 million. It has a humid subtropical climate with four distinct seasons, characterized by long, very hot, humid summers and short, chilly, cloudy and dry winters (with occasional snow). The average annual temperature is 16.5°C (61.7°F), ranging from 4.3°C (39.7°F) in January to 28.4°C (83.1°F) in July. The city receives an average annual rainfall of 1,450 mm (57.1 in) and is affected by the plum rains of the Asian monsoon in June (Hangzhou Climate and Weather; http://www.topchinatravel.com/hangzhou/hangzhouclimate-and-weather.html). In the present study, four sampling sites located on the campus of Zhejiang Gongshang University in Hangzhou city were selected for the study of culturable airborne fungi: (1) LA, a dormitory of about 20 m² on the second floor of a building in Zhejiang Gongshang University; 6 graduate students live in it, the indoor environment is not very clean, and sweeping is done about once a week. (2) DA, a canteen of about 400 m² on the first floor of Xinlanyuan Building in Zhejiang Gongshang University; it can hold more than 300 people for dinner, there are special barrels for plate and bowl leftovers near the door, the indoor environment is fairly good, and sweeping is done after each meal. (3) TA, a very large classroom that can hold more than 130 students; the indoor environment is ordinary, neither particularly clean nor dirty. (4) OA, a teachers' office of about 15 m² on the second floor of a building; 3 teachers work in it, with 3 office tables and 2 bookcases, the indoor environment is very good, and the teachers sweep the floor and open the window every day. Sampling methods A six-stage culturable FA-1 sampler (an imitation Andersen sampler), made by the Applied Technical Institute of Liaoyang, China, was used to isolate culturable fungi from the atmosphere. Each stage includes a plate with 400 holes of uniform diameter through which air is drawn at 28.3 l/min to impact on Petri dishes containing agar media. Airborne particles are separated into six fractions, and the aerodynamic cut-size diameters in the six stages are 7.0 µm (stage 1), 4.7 to 7.0 µm (stage 2), 3.3 to 4.7 µm (stage 3), 2.1 to 3.3 µm (stage 4), 1.1 to 2.1 µm (stage 5), and 0.65 to 1.1 µm (stage 6), respectively. At each sampling site, the sampler was mounted on a platform 1.5 m above ground level. Sampling was conducted seasonally, in Oct 2009 (autumn), Jan 2010 (winter), Apr 2010 (spring), and Jul 2010 (summer). All samples were collected at 2:00 pm for 3 min in triplicate, and this was continued for three consecutive days in each season. For each sampling, the FA-1 sampler was loaded with 9.0 cm Petri dishes containing Sabouraud agar with chloramphenicol added to inhibit bacterial growth. Exposed culture dishes were incubated for 72 h at 25°C. Results were then expressed as colony forming units per cubic meter of air (CFU/m³). Fungal concentration quantification Colony forming units (CFU) on each plate were counted and the concentration of samples was expressed as CFU per cubic meter of air (CFU/m³). However, since superposition is unavoidable when microbial particles impact the same spot through the same sieve pore, the colony counts were corrected using Equation 1, and CFU/m³ was calculated using Equation 2.
Pr = N × [1/N + 1/(N − 1) + … + 1/(N − r + 1)] (1)

C = (P1 + P2 + P3 + P4 + P5 + P6) × 1000 / (t × F) (2)

In the equations, Pr is the corrected colony count at each stage (r ranges from 1 to 6); N is the number of sieve pores in each stage of the sampler; r is the observed colony count; C is the airborne fungal concentration (CFU/m³); P1, P2, P3, P4, P5, and P6 are the corrected colony counts at the six stages of the sampler; t is the sampling time (min); and F is the air flow rate of the sampler during sampling (l/min) (Fang et al., 2007). Fungal particle percentage quantification at every stage in the sampler The fungal particle percentage at every stage was calculated by Equation 3:

FPr = Pr / (P1 + P2 + P3 + P4 + P5 + P6) × 100% (3)

In the equation, FPr is the fungal particle percentage (r ranges from 1 to 6); Pr is the corrected colony count at each stage (r from 1 to 6); and P1, P2, P3, P4, P5, and P6 are the corrected colony counts at the six stages of the sampler. Fungal identification After incubation, fungal colonies growing on each dish were counted and identified microscopically to their genus groups according to the morphology of the observed hyphae, conidia, and sporangia. Fungal colonies sub-cultured onto malt extract agar (MEA), or other appropriate media, that had not developed sporing structures after 14 days were described as "non-sporing isolates". Fungal isolates selected from the sampling sites were further identified using the molecular method described below. Each pure isolate was homogenized in liquid culture medium and then DNA was extracted using the CTAB method (Möller et al., 1992). The internal transcribed spacer (ITS) region of fungal rRNA genes was amplified using the following universal primer set: ITS1 (5′-TCCGTAGGTGAACCTGCGG-3′) and ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) (White et al., 1990). The reaction mixture (50 μl) consisted of 0.3 μl Taq polymerase, 2 μl dNTP, 5 μl 10×PCR buffer, 2 μl of each primer, and 1.0 μl (ca. 10 ng DNA) template. The amplification program was as follows: initial denaturation at 94°C for 5 min; 30 cycles of 94°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 30 s; and a final extension for 10 min at 72°C. The PCR products were detected by electrophoresis on a 1% agarose gel. The sequences were obtained using primer T7 by the Shanghai Majorbio Bio-technology Company, and were analyzed with the BLAST program of the National Center for Biotechnology Information (NCBI) (http://www.ncbi.nlm.nih.gov/Blast.cgi). The sequences showing the highest similarity to those of the clones were extracted from GenBank. Statistical analysis All experimental data were analyzed using SPSS Version 17.0 and Microsoft Excel 2007. The multiple comparison method of ANOVA with Duncan's test was used to assess the differences in concentrations of airborne fungi among the investigated sites. Significant differences in airborne fungal concentrations were analyzed by means of paired t-tests. Community composition of culturable airborne fungi on the university campus In total, eighteen genera of culturable airborne fungi were identified from the sampling sites on the university campus (Table 1). The frequency of the five groups Penicillium, Cladosporium, Alternaria, Aspergillus, and non-sporing isolates varied from 66.7 to 100.0% in LA, DA, TA, and OA throughout the sampling year, while the frequency of the other fungal groups varied from 0.0 to 50.0%.
Within all the identified groups of culturable airborne fungi, Pencillium had the maximum fungal concentration percentage in all the sampling sites, followed by Cladosporium, Alternaria, Asperigillus and Non-sporing isolates (Table 1).Their concentration percentages varied from 4.3 to 13.9%.Other groups accounted for no more than 10.0% of total fungi colonies in the university campus.Therefore, the dominant culturable airborne fungi identified from the university campus were Pencillium, Cladosporium, Alternaria, Asperigillus and non-sporing isolates due to their frequency and concentration percentage. Spatial variation pattern of culturable airborne fungal concentration The concentration and its range of culturable airborne fungi in four sampling sites in the university campus were showed in Table 2. Considering all sampling area, the mean and geometric mean concentration of culturable airborne fungi were 1104 CFU/m 3 , and 1028 CFU/m 3 respectively in the university campus.Significant highest fungal concentrations were found in DA, followed by LA and TA, while lowest fungal concentration was detected in OA (**P<0.01).The mean fungal concentration was about 1202 CFU/m 3 in LA, 1271 CFU/m 3 in DA, 1151 CFU/m 3 in TA, and 791 CFU/m 3 in OA.The concentrations of Penicillius and Cladosporium in LA, DA, and DA were higher than those in OA (**P<0.01),and no significant difference was found among LA, TA, and DA (P>0.05).Significant higher concentrations of Alternaria were determined in LA and DA than in TA and OA.Concerning Aspergillus group, highest concentration was observed in DA, while lowest in OA (**P<0.01). Seasonal variation pattern of culturable airborne fungal concentration Significant differences in total fungal concentrations among seasons existed in LA, TA and DA, where 1).Higher Penicillius and Cladosporium concentration was found in spring and summer, then lower in autumn and winter in LA, DA, TA and OA.As for Alternaria and Aspergillus, significant seasonal variation pattern of their concentration was observed in LA, DA and TA, higher in spring and summer, lower in autumn and winter (*P<0.05).In OA, the concentration of Alternaria and Aspergillus presented no significant difference in a year (P>0.05). Spatial variation pattern of particle size distribution Particle size distribution of culturable airborne fungi in four sampling sites in the university campus was demonstrated in Figure 2. Basically, same particle size distribution pattern was observed in LA, DA, DA and OA in the university campus, and no significant difference ofthe size distribution pattern was found among these functional areas (P>0.05).The proportion of culturable airborne fungi increased gradually from stage1 (>8.2 μm) to stage4 (2.0 to 3.5 μm), and decreased dramatically at stage5 (1.0 to 2.0 μm) and stage6 (<1.0 μm).The distributing patterns of culturable airborne fungi presented log-normal distribution.Most culturable airborne fungi were distributed at stage 3, stage4 and stage5 in the sampler, totally contributing with 83.55, 83.96, 85.14 and 79.39% in the LA, DA, TA and OA, respectively.Just a few airborne fungi were found at stage 1, stage 2 (5.0 to 10.4 μm) and stage 6 in the sampler.The highest proportions of culturable airborne fungi were detected at stage 4 and the lowest at stage 6.The proportions were 43.56% (LA), 43.51% (DA), 46.36% (TA) and 42.76% (OA) at stage 4, and 2.28% (LA), 1.03% (DA), 1.24% (TA) and 1.98% (OA) at stage 6. 
Seasonal variation pattern of particle size distribution As with the particle size distribution of culturable airborne fungi at the four sampling sites, the distribution patterns of culturable airborne fungi were log-normal (Figure 3), and no significant difference in the size distribution pattern was found among the seasons of the year (P>0.05). Most culturable airborne fungi were distributed at stage 3 (3.0 to 6.0 μm), stage 4 and stage 5 in the sampler, totally contributing 86.13, 86.57, 81.88 (DA), 42.44% (TA) and 41.24% (OA) at stage 4, and 1.02% (LA), 0.92% (DA), 1.25% (TA) and 3.34% (OA) at stage 6. DISCUSSION There is an urgent need to undertake the study of indoor airborne fungi to generate baseline data and explore their link to nosocomial infections, and to establish legally regulated standards related to airborne microorganisms. The U.S. Environmental Protection Agency (EPA) conducted the Building Assessment Survey and Evaluation (BASE) over a five-year period from 1994 to 1998 to characterize determinants of indoor air quality, including aerosol dispersal of pathogens such as airborne bacteria and fungi (Sarica et al., 2002). In 2001, the German Federal Environmental Office also initiated a research project, the UFOPLAN (Umweltforschungsplan) study, to determine the background concentrations of indoor mold in order to standardize mold measurement procedures in non-moldy homes (Baudisch et al., 2009). However, to date in China, the sanitary-epidemiological network responsible for microbiological measurement and control of indoor air is not at all sufficient. CDCs in China only monitor the total concentration of airborne bacteria and fungi in public indoor places such as stations and hotels according to the Chinese National Standard (GB/T 18204). Further studies should be carried out to obtain sufficient baseline data and to develop a more comprehensive monitoring network of airborne microorganisms in China. Airborne fungi of indoor and outdoor environments have attracted much public attention, especially in south China, since its humid climatic characteristics are favourable for mold growth and reproduction. Our results provide first-hand baseline data on airborne fungi in the southeast of China, and could help to evaluate the human health risks from exposure to these atmospheres. The community composition, concentration variation, and size distribution of airborne culturable fungi in LA, DA, TA, and OA on the university campus in southeast China were surveyed seasonally over a sampling year. In general, the fungal concentration in the indoor environment of the university campus in Hangzhou city was higher compared to studies conducted in other areas (Lee et al., 2006; Kim and Kim, 2007), which suggests that careful attention should be paid to the public health problems associated with elevated concentrations of airborne fungi, because long-term exposure to these indoor environments may lead to a high risk to human health. The main cause of the higher fungal concentration in Hangzhou might be its climatic characteristics, since it has a humid subtropical climate with four distinct seasons (very long, hot, humid summers and short, chilly, cloudy, dry winters). What's more, Hangzhou city has relatively high vegetation coverage, and the vigorous growth of plants might provide abundant fungal growth substrates in the air (Ju et al., 2003; Fang et al., 2005).
It was shown that the fungal concentration in DA was highest (1271 CFU/m³), followed by LA (1202 CFU/m³) and TA (1151 CFU/m³), while the concentration in OA was lowest (791 CFU/m³). The two sampling sites LA and OA had almost the same floor area (20 vs. 15 m²) and were both on the 2nd floor of a building; however, compared to OA, LA with six male graduate students seems more crowded, with more personal activities and worse ventilation, and the room was not so clean because of a mess of clothes, socks, and other daily supplies, suggesting that human activities, cleanliness, and ventilation were the major factors that influenced the concentration of indoor airborne fungi. The same reasons also apply to the larger spaces of DA and TA, as DA hosts many people and much leftover food near the door at mealtimes, and TA is frequently occupied by many students for their classes. Meanwhile, leftovers in the dining area, especially in the hot and wet seasons, might be favourable growth substrates for airborne fungi. Therefore, it was reasonable that DA had the highest concentration of airborne fungi. As to the seasonal variation pattern, fungal concentrations were higher in spring and summer, and lower in autumn and winter (**P<0.01) in LA, DA and TA, whereas there were no significant differences in fungal concentrations in OA, where the air-conditioner could keep the temperature relatively stable with the change of seasons, indicating that air temperature was another important factor for airborne fungi. The most common fungal groups in LA, DA, TA, and OA on the university campus were Penicillium, Cladosporium, Alternaria, and Aspergillus in order of abundance, and Penicillium contributed more than 36% of the total fungal concentration. High concentrations of Penicillium and Cladosporium in indoor environments can cause allergic diseases (Li 1997), and Cladosporium has also been associated with respiratory symptoms (Su et al., 1992). Aspergillus is among the commonest airborne fungi causing infections; it can cause a group of diseases called aspergillosis, which can occur in immune-compromised hosts or as a secondary infection. Symptoms of aspergillosis include persistent cold, watering eyes, prolonged muscle cramps and joint pain (Vonberg and Gastmeier, 2006). It has been reported that Penicillium was the most prevalent airborne fungus in indoor environments such as public buildings in Korea (Kim and Kim, 2007). However, many studies indicated that the most common fungus in indoor air was Cladosporium (Lee et al., 2006), and our former findings in outdoor environments in Beijing also identified Cladosporium (Fang et al., 2005). This might be caused by the great differences in environmental conditions (such as climate characteristics, vegetation coverage, human activities, etc.). The size distribution of fungi-associated aerosol particles was also examined in this study. Our results showed that the distributions were similar at the different sampling sites, presenting a log-normal distribution, which is in accordance with the studies of Meklin et al.
(2003) in school buildings of two construction types (wooden and concrete) and with our previous study of the outdoor environment in Beijing city (Fang et al., 2005). In our study, the highest fungal levels were located in the 2.0 to 3.5 µm size range, and the lowest at <1 µm. It has been shown that the deposition of fungal spores in the lung and their effects on human health depend not only on their composition (genera and species) and concentration, but also on their size. Larger spores (>10 µm) are deposited in the upper airway (nose, pharynx) and might result in hay fever symptoms, while smaller spore particles (diameter <10 µm, especially <5 µm) can penetrate the lower airways and might lead to other allergies or asthma (Horner et al., 1995). Attached or unattached fungal allergens that are in the ultrafine range (<0.1 µm) or of sub-micrometre size, respectively, can penetrate to the deepest parts of the respiratory tract (Horner et al., 1995). With regard to fungal identification methods, obvious disadvantages have emerged for direct fungal identification by microscopy, because it is a tedious and time-consuming process (Williams et al., 2001) that relies on morphological characters. PCR with specific primer sets can identify specific fungi efficiently (Yap et al., 2009), but fungal species in air environments are relatively abundant, making a full-scale survey difficult. What's more, conventional PCR analysis has certain limitations, particularly in its accuracy, reliability and reproducibility (Birch et al., 2001). Here, we used culture methods, followed by sequencing and BLAST analysis of the ITS gene, to determine the fungal species. This molecular method of fungal typing seems to perform well; for this reason, it can help to complete the identification work efficiently, as a high-throughput process following a standard operating procedure, with reliable results. Conclusion We found that the airborne fungal concentration was relatively high in these crowded places on the university campus in China. The findings of this paper suggest that the Chinese government and research institutes should pay more attention to biological pollution in crowded public places such as university campuses, and then gradually construct a more efficient monitoring network for airborne microorganisms in public areas in China. Figure 1. Seasonal variation pattern of airborne fungal concentration in four sampling sites at the university. Figure 2. Size distribution of airborne fungi in four sampling sites at the university. Table 1. Concentration percentage and frequency of airborne fungi in four sampling sites at the university.
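As a worked illustration of the quantification formulas used in the Methods (Equations 1-3: positive-hole correction, concentration in CFU/m³, and per-stage percentages), the short Python sketch below applies them to raw colony counts from the six sampler stages. The raw counts are made-up example values, not data from this study; only the sampling time (3 min) and flow rate (28.3 l/min) are taken from the text.

```python
def positive_hole_correction(r, n_holes=400):
    """Equation 1: corrected count Pr for r observed colonies on a stage with n_holes sieve pores."""
    return n_holes * sum(1.0 / (n_holes - i) for i in range(r))

def fungal_concentration(raw_counts, t_min=3.0, flow_l_min=28.3):
    """Equations 1-3: corrected stage counts, total concentration (CFU/m3) and per-stage percentages."""
    corrected = [positive_hole_correction(r) for r in raw_counts]
    total = sum(corrected)
    conc = total * 1000.0 / (t_min * flow_l_min)          # litres sampled -> cubic metres
    percentages = [100.0 * p / total for p in corrected]
    return conc, percentages

# Hypothetical raw colony counts for stages 1-6 of one sample
raw = [3, 8, 25, 40, 18, 2]
conc, pct = fungal_concentration(raw)
print(round(conc), [round(x, 1) for x in pct])
```

With these illustrative counts the sketch yields a concentration on the order of 1000 CFU/m³, the same order of magnitude as the campus-wide means reported above.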
2018-12-06T03:30:01.691Z
2012-02-16T00:00:00.000
{ "year": 2012, "sha1": "dea61138e3ac523461ccb14a8f1b4664ec83ff5e", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJMR/article-full-text-pdf/4CC373713532.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "dea61138e3ac523461ccb14a8f1b4664ec83ff5e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
40858343
pes2o/s2orc
v3-fos-license
A NON-MARKOVIAN QUEUEING SYSTEM WITH A VARIABLE NUMBER OF CHANNELS In this paper we study a queueing model of type GI/M/m̃a/∞ with m parallel channels, some of which may suspend their service at specified random moments of time. Whether or not this phenomenon occurs depends on the queue length. The queueing process, which we target, turns out to be semi-regenerative, and we fully explore this utilizing semi-regenerative techniques. This is contrary to the more traditional supplementary variable approach and the less popular approach of combination semi-regenerative and supplementary variable technique. We pass to the limiting distribution of the continuous time parameter process through the embedded Markov chain for which we find the invariant probability measure. All formulas are analytically tractable. Introduction This paper analyzes a multi-channel queueing system with a random number of channels, infinite capacity waiting room, general input, and exponentially distributed service times.The total number of channels does not exceed m, but at any given time not all of them are "active."The latter implies that even if a particular channel is busy servicing a customer, the service at some later time can become suspended.We assume that there is a certain sequence {T n } of stopping times relative to the queueing process at which a decision is being made for every busy channel to continue, suspend, or activate service.This policy literally makes the total quantity of servers random and it affects both the servicing and queueing processes. More formally, if Z t denotes the number of customers in the system at any given time t, the number of active channels at T n (where {T n } is the arrival process) is a binomial random variable with parameters (Z Tn , a) where 0 ≤ a ≤ 1, provided that Z t ≤ m.Unless service is interrupted, each of the customers is being processed at an exponentially distributed time and service durations on each of the active channels are independent.The input is a regular renewal process.For this system we use the symbolic notation GI/M/ ma /∞. The system, as it is, generalizes the classical GI/M/m/∞ queue by making it, in some real-world applications, more versatile.We can easily associate it with any mail order servicing system where once the order has been taken, it can be suspended at any moment of time for an unaware customer (who believes he is being processed) for various reasons, most commonly due to unavailable items.An item can also be backordered.The company is trying to shop around and find the item, and this takes time and effort.A similar situation occurs in Internet service where jobs are being routinely suspended for numerous reasons by Internet providers. In this problem setting we make the consequences of this interruption policy milder, in which suspensions take place only if the buffer (or waiting room) is empty, but all or a few channels are occupied.It makes perfect sense to reduce waiting time for many customers from the buffer. 
The queueing systems with variable number of channels have been investigated in the past literature.Saati [4] describes such one as a fully exponential system.Most commonly, the queues with unreliable servers (which can break down at any time) and priorities can also fall into this category.Our system is different firstly because it is non-Markovian and secondly because service interruptions do not take place fully arbitrarily, but with some probabilities upon certain random times.The closest problem to ours is the system in Rosson/Dshalalow [3] with no buffer. In the present paper, we focus our attention on the queueing process, which turns out to be semi-regenerative relative to the sequence {T n } of arrivals.We start with the embedded process over {T n } and turn to the analysis of the queue as a semi-regenerative process, which to the best of our knowledge has not been analyzed this way even in the case of the basic GI/M/m/∞ system, and thus this method is by itself novel for the class of multi-channel queues.We have used this approach in Rosson/Dshalalow [3] for the case of a more rudimentary GI/M/m/0 system with a random number of channels.The paper is organized as follows.Section 2 formalizes the model more rigourously.Section 3 deals with an embedded Markov chain and the invariant probability measure under a given ergodicity condition.Section 4 analyzes the continuous time parameter queueing process followed by an example presented in Section 5.All formulas are obtained in analytically tractable forms. Description of the system Let (Ω, F, (P x ) x=0,1,... , Z t ; t ≥ 0) → E = {0, 1, ...} be the stochastic process which describes the evolution of the queueing process in the GI/M/ ma /∞ system introduced in the previous section.In other words, at any time t, Z t gives the total number of units (or customers) in the system including those being in service.The servicing facility has m permanent channels, of which not all are necessarily active.The buffer (or waiting room) is of an infinite capacity, but the system needs to be "watched" for preserving the equilibrium condition.Customers arrive singly in the system in accordance with a standard renewal process {T n ; n = 0, 1, ..., T 0 = 0}.Inter-renewal times have a common PDF (probability distribution function) A(x), with finite mean ā and the LST (Laplace-Stieltjes transform) α(θ).The formation of active or inactive channels is being rendered upon {T n } as follows.If at time T n − (i.e.immediately preceding the nth arrival) the total number of customers is less than m, each of the channels, including the one that is aimed to accommodate the nth customer (just arrived), can become temporarily inactive with probability b.In such a case, the service of a customer by any such channel becomes suspended until the next stopping time T n+1 (of the process Z t ).Every busy channel is active with probability a (= 1 − b) and he is processing a customer a period of time exponentially distributed with parameter µ until T n+1 or the end of service, whichever comes first.Thereby, service times at each of the parallel channels within the random interval [T n , T n+1 ) are conditionally independent, given X n := Z Tn− and the number of active channels is governed by a binomial random variable ξ n with parameters (X n + 1, a) provided that X n < m.If the number of customers in the system upon time T n − is m or more, then all channels are almost surely active until T n+1 . 
Notation We will be using the following notation throughout the remainder of this paper: a = probability of each busy channel at T n+ to be active, n = 0, 1, ... .b = 1 − a = probability of a channel at T n+ to be inactive. Embedded Process As a precursor to the key semi-regenerative approach for treating the one-dimensional distribution of the queueing process (Ω, F, (P Transition Probabilities Then, where q j+1 s is the probability that s out of j + 1 busy channels are inactive at T n , (1 − e −µx ) j+1−k is the probability that in [0, x], j+1−s−(k−s) customers are processed, and e −µ(k−s)x is the probability that in [0, x], k − s customers are not finished while being treated.Clearly, p jk = 0 for Notice that where and that The above probabilities (3.1)-(3.4) form the upper block The lower block of L with the rows from m and all the way down are identical to that for the system GI/M/m/∞: with and The PGF (probability generation function) of the i th row of L 1 is We can easily deduce that L represents an irreducible and aperiodic MC (Markov chain). The Invariant Probability Measure According to Abolnikov and Dukhovny [1], for mµα > 1, there exists an invariant probability measure P := (P 0 , P 1 , . ..) of {X n } as a unique positive solution of the matrix equation (3.12) (3.12) also reads For k ≥ m, (3.13), in light of (3.6-3.8),leads to Let us now consider equations (3.16) for k ≥ m.We will seek the solution of (3.16) in the form where A will be evaluated from (P, 1 ) = 1.Inserting (3.17) into (3.16)gives, where δ, according to Takács [6], is a unique, real positive root of (3.19), strictly less than 1 when meeting the ergodicity condition mµα > 1.For k = m, (3.16) and (3.17) yield which, after some algebra and due to (3.19), reduces to To determine the unknowns P 0 , P 1 , . . ., P m−1 let us define the PGF The following proposition gives an equation in U (z) and P i , i = 0, . . ., m − 1, which lead to the derivation of the invariant probabilities.Proof: From (3.13) , To obtain P k 's we will use a method similar to that of Takács [6].Given a polynomial function f (z), define the sequence of linear functionals: Since U (z − 1) is identical to the Taylor series of U (z) expanded at 1, applying R r to the polynomial U (z), r = 0, 1, . . ., m − 1, and then multiplying it scalarly by (1, z − 1, . . ., (z − 1) m−1 ) we will thereby reexpand U (z) in a Taylor series at 1 and arrive at its binomial moments: On the other hand, given the binomial moments, we have The following proposition is an analog to the result known for the GI/M/m/∞ queue, except for a r in (3.36) being different. 37) Proof: Applying R r to (3.25) we have and due to we have where With the usual combinatorics, m−1 i=r−1 or in the form Simplifying, we obtain Denoting and dividing by a r both sides of (3.43), we have and thus .Therefore, In (3.49), for n = 0 and a 0 = 1 On the other hand, by (3.22) and (3.17), Hence, Futhermore, In particular, U 0 = U (1).By (3.52), therefore, Substituting equation (3.53) into (3.55)yields and after substituting equation (3.45) into (3.57)we finally have We are done with the proposition. Continuous Time Parameter Process The continuous time parameter queueing process is our main objective, which we target in the present section.The tools we are exploring are quite different from that used in our past experience (cf.Dshalalow [2]).Back then, we extended the process (Ω, F, (P x ) x=0,1,... 
, Z t ; t ≥ 0) from just being semi-regenerative to a two- (and more) variate process by using typical supplementary variable techniques, forming Kolmogorov's partial differential equations and Laplace transforms, then combining all of these with some semi-regenerative tools to yield a compact result. In this particular case, the past approach does not work, since the additional information about the arrival process still does not make the two-variate process Markov, and any further efforts in this direction are cumbersome and counterproductive. So, we instead try to use a similar idea to that utilized in our recent paper [3] on semi-regenerative analysis, applied directly to (Ω, F, (P x ) x=0,1,... , Z t ; t ≥ 0). Knowledge of the invariant probability measures of embedded processes is crucial information needed for the upcoming analysis. We present some formal concepts of semi-regenerative processes pertinent to the models studied here. According to Section 3, the process is semi-regenerative relative to the sequence {T n } of stopping times. By the key convergence theorem for semi-regenerative processes, for each initial state x = 0, 1, . . ., the limiting probability exists if mµα > 1 and is given by the expression where {K ij (t)} is the semi-regenerative kernel of the process (Z t ) (cf. [2]) whose entries are defined as Observe that K ij (t) = P i {Z t = j, T 1 > t} [1 − A(x)], and thus Notice that and Let us now consider equations (4.2) for the index values j ≥ m. The following theorem states that, except for π m , the result for π j , j ≥ m, is similar to that of [6]. Theorem 4.1: The part of the limiting distribution π m , π m+1 , . . . of the queueing process Z t exists if mµα > 1, it is independent of any initial state, and is given by Proof: Case I: Let j > m. Then from (4.2), where and Inserting (4.8) into (4.6) we get Then, mµ , we have (using integration by parts for Stieltjes integrals as in Appendix, Theorem A.1) and Therefore, and the latter simplifies to or also in the form: (since where and Now we turn to ϕ(z). Substitution of (4.5) into (4.24) and Fubini's Theorem give where .29) The latter can be simplified to where and and U r is defined as in (3.43). Proof: We use a method similar to that in Section 3. If With the usual combinatorics we arrive at m−1 i=r−1 where In the following sections we will be concerned with a special case. Special Case Consider the special case of GI/M/ ma /∞, with a = 1 and p(x) = e −µx , which corresponds to the system GI/M/m/∞. Now, we have: can hold while a 2 = ∞, and so condition (A.2) is exactly what we need, and it is weaker. However, a 2 < ∞ is more tame. In most special cases, however, (A.2) holds true (i.e. if 1 − A(x) → 0 as x → ∞, the latter should go to zero faster than 1/x does). For instance, if A(x) is k-Erlang, then the quantity obviously vanishes when x → ∞, for any k = 1, 2, ..., and so will any convex linear combination of Erlang distributions.
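The root δ that drives the geometric tail of the queue-length distribution can be found numerically. The sketch below assumes that the un-recovered equation (3.19) is the classical Takács fixed-point equation for GI/M/m-type queues, δ = α(mµ(1 − δ)), where α(·) is the LST of the interarrival distribution; this is an assumption stated here only to show how such a root is computed, and the exponential interarrival law is a stand-in example with hypothetical rates.

```python
def takacs_root(lst, m, mu, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for delta = lst(m*mu*(1 - delta)); lst(theta) is the
    Laplace-Stieltjes transform of the interarrival distribution A.
    Assumes this is the root equation referred to as (3.19) in the text."""
    delta = 0.5                        # any starting point in (0, 1)
    for _ in range(max_iter):
        new = lst(m * mu * (1.0 - delta))
        if abs(new - delta) < tol:
            return new
        delta = new
    raise RuntimeError("fixed-point iteration did not converge")

# Stand-in example: exponential interarrivals with rate lam, so alpha(theta) = lam/(lam + theta).
# Ergodicity requires the arrival rate to be below the total service capacity, lam < m*mu.
lam, m, mu = 1.5, 2, 1.0
alpha = lambda theta: lam / (lam + theta)
delta = takacs_root(alpha, m, mu)
print(round(delta, 4))                 # 0.75 for these hypothetical rates
```

Under this assumption, the probabilities beyond m decay geometrically in δ, consistent with the geometric form sought around (3.17) in the text.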
2017-07-30T02:59:07.353Z
2003-01-01T00:00:00.000
{ "year": 2003, "sha1": "c86041d74dc783f6363994391cc06e910757a54a", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/archive/2003/579034.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c86041d74dc783f6363994391cc06e910757a54a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
241220458
pes2o/s2orc
v3-fos-license
Pregnant women with gestational diabetes mellitus exhibit unique features in oral microbiome Background: Gestational diabetes mellitus (GDM) leads to a series of adverse pregnancy outcomes, seriously endangering the health of mothers and infants. The oral glucose tolerance test (OGTT) is the gold standard test for GDM diagnosis, but it greatly increases the discomfort of pregnant women and is inconvenient for follow-up and detection. Considering that oral sampling is convenient, rapid, safe and non-invasive, the feasibility of distinguishing GDM via biomarkers from the oral microbiota was evaluated in this study. Here, 16S rRNA gene sequencing was used to compare the microbiome of saliva and dental plaque samples of 111 pregnant women, to analyze the structure of the oral microbiota in patients with GDM, and to find effective biomarkers. Results: The results showed that the microbiota of both types of oral samples changed in patients with GDM, and the change was quite different from that seen in either periodontitis or dental caries. Using bacterial biomarkers from the oral microbiota, GDM classification models based on SVM and random forest algorithms were constructed. For the SVM algorithm, the AUC value of the classification model constructed from the combination of dental plaque Lautropia and Neisseria and saliva Veillonella reached 0.83. For the random forest algorithm, the maximum AUC value of the model constructed from dental plaque Streptococcus, Eikenella and Anoxybacillus and saliva Leptotrichia and Kingella was larger than 0.90. Conclusions: These findings revealed that certain bacteria from either saliva or dental plaque can accurately distinguish GDM from healthy pregnant women, which provides a potential non-invasive approach to GDM diagnosis with oral microbial markers. Background Gestational diabetes mellitus (GDM) is defined as glucose intolerance of varying degrees first identified or occurring during pregnancy [1]. It is one of the most common maternal complications of middle and late pregnancy. About 5.2% to 8.8% of pregnant women worldwide suffer from GDM each year [2]. In some countries and areas, the incidence of this disease is even more than 20% and is increasing year by year [3]. GDM increases the risk of long-term complications, including obesity, impaired glucose metabolism and cardiovascular disease, in both mother and infant [4]. The oral glucose tolerance test (OGTT) is recognized as the diagnostic gold standard for GDM [5]. The OGTT protocol is to take an oral glucose-containing liquid at 24-28 weeks of gestation, and collect fasting, 1-hour and 2-hour peripheral blood samples to test the glucose content. However, this method is invasive, requires collecting maternal blood several times, and does not lend itself to sampling and detection at any time. There is an urgent need to find a non-invasive, safe, simple and effective method to assist in detecting this disease. Oral sampling is convenient, safe and non-invasive, so it has great advantages for disease prevention, diagnosis and treatment. Compared with normal pregnant women, the microbial composition and abundance in GDM patients may be different. In the placental microbiome of GDM patients, the proportion of Proteobacteria increased, while Bacteroidetes and Firmicutes decreased [6]. In another study, there were significant differences in the intestinal microbiota between GDM patients and normal pregnant women in the third trimester of pregnancy and even eight months after delivery [4].
Akkermansia bacteria associated with decreased insulin sensitivity and Christensenella bacteria associated with fasting plasma glucose levels were observed; these changes were more similar to those of the intestinal microbiota of patients with type 2 diabetes mellitus. In recent years, using microbes as biomarkers for disease prediction has become a promising strategy [7]. Several studies have found that using bacteria for the diagnosis of diseases has great potential [8]. For example, in developing intestinal bacterial biomarkers of systemic diseases, 16 kinds of bacteria were screened from the intestinal microbiota of colorectal cancer patients, which could accurately distinguish colorectal cancer patients from normal people with an accuracy of 84% [9]. Coincidentally, the accuracy of 15 microbial-related gene markers in the diagnosis of liver cancer can reach 83.6% [10]. In the exploration of oral microbes as markers, a dental caries prediction model was established according to the dynamic changes of oral microbes during the occurrence and development of disease [11]. Another study using oral microbes to predict periodontitis found that the oral microbial prediction could distinguish 95% of healthy people from patients, showing a good ability to predict and diagnose disease [12]. In addition, some studies have analyzed the relationship between the oral microbiota and pancreatic diseases and constructed a prediction model of pancreatic cancer using specific microbial species [13]. In our previous study, comparing the microbial composition of the vagina, intestinal tract and oral saliva between normal pregnant women and GDM patients, we found that the microbiota in different body sites of GDM patients were distinct [14]. In particular, the largest change occurred in the oral cavity, which reflects the feasibility of selecting oral microbes as biomarkers for GDM detection. However, many studies have shown that there is some relationship between GDM and periodontitis [15,16]. It has been found that the incidence of GDM in patients with periodontitis increases during pregnancy [16]. In addition, both conditions share characteristics such as increased dental plaque, decreased red blood cell count and increased inflammation [17]. Periodontal infection may increase the risk of GDM by affecting endocrine metabolism and blood glucose control [18], but whether the links between these two diseases are related to microorganisms is unknown. It is also not clear whether the microbial changes caused by oral diseases will affect the accuracy of the disease categorization model. This study intends to analyze the 16S rRNA gene sequencing data of the oral saliva and dental plaque microbiome from GDM patients and normal pregnant women, and to look for possible relationships between GDM and two major oral diseases, dental caries and chronic periodontitis. On this basis, the study will identify suitable microbial markers from the oral microbiota to construct GDM classification models, in order to develop a simple and non-invasive technique for the auxiliary diagnosis and daily follow-up of GDM. Subject recruitment This study was approved by the Ethics Committee of Wenzhou People's Hospital. Pregnant women were recruited at Wenzhou People's Hospital. Informed consent was obtained from all participants. Pregnant women with GDM (GDM+) were diagnosed by specialized doctors according to the results of OGTT and were recruited as the case group.
Healthy pregnant women (GDM−) who had no history of other systemic, metabolic or oral diseases, especially periodontitis and dental caries, served as the control group. Sample collection Saliva and dental plaque samples were collected from third-trimester pregnant women, following the sampling methods described in our previous study with minor modifications [19]. Briefly, ~2 ml of saliva was collected from each pregnant woman with a sterile tube and stored at -80 °C. Dental plaque was scraped from the tooth surface, resuspended in a centrifuge tube and stored at -80 °C until total DNA extraction for later sequencing. DNA extraction In a strictly controlled, separate and sterile workplace, approximately 0. 16S rRNA sequence analysis Raw sequencing reads of the 16S rRNA gene sequences were quality filtered and analyzed using QIIME V.1.8.0.12. The operational taxonomic units (OTU) were classified taxonomically using the Greengenes 16S rRNA gene reference database. Analysis of microbial community composition The taxonomic composition of microbial communities was visualized using Calypso [21]. Community clustering was measured by unweighted UniFrac distance based on the normalized OTU table. Together with a data set of the saliva and plaque microbiota retrieved from a recent study, Bray-Curtis dissimilarity between different sample types was calculated using the R package ecodist. Differences in alpha diversity between groups were statistically analyzed by the Mann-Whitney test (P < 0.05). Biomarker screening and classification model construction Linear discriminant analysis effect size (LEfSe) and odds ratio analysis were used to identify the characteristic genera in the GDM+ and GDM− groups, and a log linear discriminant analysis (LDA) score > 3.0 or an odds ratio P < 0.05 was considered to indicate differential signatures that better discriminate between groups. After normalizing the abundance data of the characteristic bacteria in 45 paired samples, in which saliva and plaque were collected from the same person, 4/5 of the samples served as the training set and 1/5 as the test set. The support vector machine (SVM) algorithm and ROC calculation were performed with the e1071 and ROCR packages in R, respectively. The average AUC value over 100 iterations was first calculated to screen out the bacterial genera that better distinguish GDM from healthy pregnant women, and then the value over 1000 iterations was used to draw the ROC curve. With the same method, the bacterial abundance data of 105 saliva samples were used to construct another SVM model. Random forest models were trained using bacterial taxonomy profiles to predict disease status in the data set of 45 paired samples, with 4/5 of the samples as the training set and 1/5 as the test set. The recursive feature elimination method was used to sort the importance of all bacterial features and draw the ROC curve. Then, 5-fold cross validation with 1000 iterations was used to evaluate the performance of these models. Results We enrolled 111 pregnant women, including 44 pregnant women with gestational diabetes mellitus (GDM+) and 67 healthy pregnant women (GDM−). Because the volunteers were all Chinese women, there was no effect of sex or ethnicity. Changes of oral microbiota in patients with GDM To investigate whether the hyperglycaemia that develops during pregnancy is accompanied by extensive changes in the oral microbiota, we explored the microbial shift of saliva and dental plaque of pregnant women who were diagnosed with GDM.
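Before turning to the detailed results, the classification workflow described in the Methods above can be sketched as follows. The original analysis used the e1071 and ROCR packages in R; this is a minimal Python/scikit-learn sketch under the assumption that `abundance` is a samples x genera matrix of normalized relative abundances, `labels` marks GDM+ (1) vs. GDM− (0), and `genus_names` lists the columns. These names are illustrative placeholders, not objects from the study.

```python
# Minimal sketch of the biomarker screening and model-evaluation workflow
# described above (SVM screening by average AUC over repeated 4/5 train, 1/5
# test splits; random forest ranked by recursive feature elimination and
# evaluated with 5-fold cross-validated AUC).
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.metrics import roc_auc_score


def mean_svm_auc(abundance, labels, n_iter=100, seed=0):
    """Average test-set AUC of an SVM over repeated 4/5 train, 1/5 test splits."""
    rng = np.random.RandomState(seed)
    aucs = []
    for _ in range(n_iter):
        X_tr, X_te, y_tr, y_te = train_test_split(
            abundance, labels, test_size=0.2, stratify=labels,
            random_state=rng.randint(10**6))
        clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    return float(np.mean(aucs))


def rank_genera_by_rfe(abundance, labels, genus_names):
    """Rank genera with recursive feature elimination around a random forest."""
    rfe = RFE(RandomForestClassifier(n_estimators=500, random_state=0),
              n_features_to_select=1)
    rfe.fit(abundance, labels)
    return [genus_names[i] for i in np.argsort(rfe.ranking_)]  # best first


def rf_cv_auc(abundance, labels, n_splits=5):
    """5-fold cross-validated AUC for a random-forest classifier."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    return cross_val_score(clf, abundance, labels, cv=cv, scoring="roc_auc").mean()
```

Screening candidate genus combinations by their average AUC over many random splits, as in mean_svm_auc, corresponds to the 100-iteration screening step, while rf_cv_auc mirrors the 5-fold cross-validation used to evaluate the random forest models.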
We found that both saliva and dental plaque samples of GDM+ clustered separately from GDM− (Figure 1A), although there was no significant difference in α-diversity (Additional file 3A-D). Additionally, we calculated Bray-Curtis distances using normalized OTU abundance. In saliva, the Bray-Curtis (BC) distances between samples were significantly smaller within the GDM+ group than either within the GDM− group or between GDM+ and GDM− (P < 0.001, Mann-Whitney test). In dental plaque, the differences in BC distances were less pronounced than in saliva (Figure 1B). These results suggest that pregnant women with GDM have a distinct oral microbial community that differs from that of healthy women. The microbial shift of the oral cavity in GDM+ showed obvious sample-type specificity, and saliva was affected more markedly than plaque. Oral microbial variations between GDM and major oral diseases To explore the relationship between GDM and periodontitis, and to assess whether oral microbial variations in major oral diseases such as dental caries can disturb the accuracy of GDM classification based on bacterial biomarkers, we compared the oral microbial shifts between GDM, periodontitis and dental caries. No significant difference in the number of shared bacteria was observed in the oral microbiota of periodontal health (PH) and periodontitis patients (PD) when compared with that of pregnant women, regardless of whether the pregnant women had GDM or not (Figure 2A-B). Compared with PD, the Bray-Curtis distances between either GDM+ or GDM− and PH were significantly smaller (P < 0.0001, Mann-Whitney test), whether in saliva or in dental plaque (Figure 2C-D). These results indicate that the oral microbiota of pregnant women was more similar to that of periodontally healthy individuals than to that of periodontitis patients; thus, the microbial variations in the oral cavities of pregnant women with GDM are not equivalent to those of periodontitis. There was no significant difference in the number of shared bacteria in the oral cavity when pregnant women with or without GDM were compared with the caries-free (NC), mild (LC), moderate (MC) and severe (HC) caries groups, respectively (Additional file 4A-D). The saliva and dental plaque microbiota of both GDM+ and GDM− showed a larger Bray-Curtis distance to dental caries than to NC (Additional file 4E), which indicated that there should be little relationship in the oral microbial shifts between GDM and dental caries. SVM classification model of GDM To identify specific microbial biomarkers that can be used to discriminate GDM, we investigated the differential genera between pregnant women with and without GDM. Firstly, we compared the two groups by LEfSe, using the LDA score threshold of > 3.0 described in the Methods. To expand the scanning scope of potential microbial markers, odds ratio analysis was performed. Significant differences in four genera, Lautropia, Neisseria, Streptococcus and Veillonella, were found between the GDM+ and GDM− groups in both saliva and dental plaque samples (Additional file 5C-D). This suggests that using these four bacteria as microbial biomarkers to distinguish GDM+ from GDM− may be effective. Meanwhile, it is worth noting that Streptococcus and Veillonella were also depleted in patients with periodontitis (Additional file 6A), indicating that the possible relationship between GDM and periodontitis may be related to the decreased abundance of these two genera.
There was no significant variation in these four bacteria in dental caries (Additional file 6B-D), indicating that there was little relationship between GDM and dental caries in the change of the microbial community. To optimize the efficiency of identifying GDM, the common specific bacteria in both saliva and dental plaque were used to construct classification models. According to the above results, we found that Lautropia, Neisseria, Streptococcus and Veillonella were significantly different in the two sample types, so they were used to construct classification models based on the SVM algorithm. Firstly, to find the optimal combination of microbial biomarkers, we performed an orthogonal experiment using the paired saliva and dental plaque samples collected from the same person (Figure 4A). The AUC value of the optimal combination could reach 0.84 (95% CI: 0.81-0.87), using the relative abundance of Lautropia and Neisseria in dental plaque and Veillonella in saliva. Random forest classification model of GDM In addition, to give users more choices, a classifier was constructed based on the random forest algorithm to discriminate GDM, using the 45 paired dental plaque and saliva samples. The recursive feature elimination method was used to rank the importance of all the features, and the top ten features and their abundance information are shown (Figure 5A and Additional file 8). We then selected different features to calculate the AUC value of the model. When the five genera p_Streptococcus, s_Leptotrichia, p_Eikenella, s_Kingella and p_Anoxybacillus were used to build the model, the model had the best performance (Figure 5B), and the AUC value was 0.89 (95% CI: 0.81-0.97) (Figure 5C). Furthermore, using only p_Streptococcus and s_Leptotrichia to construct the model, the AUC could still reach 0.77 (95% CI: 0.67-0.87) (Figure 5D). Discussion As a complex ecosystem and important colonization site of human microbes, the oral cavity harbors a large number of microorganisms [16,22]. In recent years, people have gradually realized the importance of the oral micro-environment to health. Normally, there is a complex symbiotic mode and dynamic balance in the oral microbiota, which plays an important role in maintaining the ecological balance of oral and systemic health [23]. Imbalance of the oral microbiota can lead to several diseases such as diabetes, obesity, periodontitis and preterm birth [24][25][26]. There are significant differences in the composition of microbial communities in different parts of the oral cavity, and saliva and dental plaque are two major niches [7,27]. Saliva and dental plaque, as sampling targets, have the advantages of non-invasive and fast collection and storage and of yielding large amounts of DNA simply and safely. They are becoming powerful diagnostic tools for systemic diseases such as cancer, intestinal diseases, diabetes, neurodegenerative diseases, and muscle and joint diseases [28,29]. In this study, 16S rRNA gene high-throughput sequencing was used to analyze the microbiota of saliva and dental plaque associated with GDM, and to screen for oral microbial markers that could effectively distinguish GDM from healthy pregnant women. This is the first time that a classification model for GDM discrimination has been constructed using oral microbes as biomarkers. This is also the first attempt to reveal the relationship between GDM and major oral diseases, including periodontitis and dental caries, by comparing the microbial shifts, and to evaluate the potential impact of oral diseases on using oral microbes as diagnostic markers.
This study lays a foundation for rapid, effective and non-invasive detection of GDM in clinic. Among the oral microbiota of pregnant women with GDM in this study, the four most varied bacteria are Lautropia, Neisseria, Streptococcus and Veillonella. Among them, Streptococcus and Veillonella exist not only in the oral cavity, but also in the 15 esophagus, throat, stomach and small intestine of human body [30]. Except for few pathogenic species, they were considered to be symbiotic species that coexist peacefully with the human body [31]. Most Streptococcus species are symbiotic bacteria of oral, skin, intestinal and upper respiratory tract. Veillonella has the ability of lactic acid fermentation and is common bacteria in the intestinal and oral mucosa of mammals, which is rarely associated with disease in human beings. However, there might be some synergistic effect between these two kinds of bacteria. Streptococcus may be significantly correlated with bacterial invasion, phosphotransferase system, alanine metabolism and oxidative stress markers in epithelial cells, and was proved to involve in the fermentation process of sugar. Streptococcus sometimes is positively correlated with Actinomyces [32], while the latter participates in the Embden-Meyerhof-Parnas (EMP) pathway in which glucose is degraded to pyruvate and further degraded to lactate, formate and acetate [33]. Veillonella could use lactic acid as carbon source and energy source [34]as well as to regulate pH to promote the proliferation of Streptococcus [35,36]. In addition, the other two kinds of bacteria, Lautropia and Neisseria, may be related to synthesis of bacterial motion protein gene, linoleic acid metabolism and flavonoid [35,36]. Streptococcus, Veillonella and Neisseria in dental plaque and saliva are positively correlated with glycolysis, fructose metabolism and alanine metabolism, while are negatively correlated with arginine metabolism [37]. The findings suggest that these four kinds of bacteria have complex interactions and were closely related to glucose metabolism, so they may be used to point to the occurrence and development of GDM as a metabolic disease. In previous reports on disease prediction using oral microbes, a model for predicting periodontitis was constructed using various bacteria such as Lautropia, Streptococcus, Selenomonas, Peptostreptococcus, Oribacterium and Veillonellaceae [12]. In a predictive model of caries, Prevotella can predict the success rate of caries up to 0.74, while 20 bacteria including Streptococcus, Veillonella and Prevotella predict the success rate of caries up to 0.77 [11,38]. In the prediction of oral odor, the model was constructed and predicted by using 108 significantly different bacteria found in saliva samples, which can reach 78.9% accuracy. Among those bacteria, it was found that Bacteroides, Prevotella and Porphyromonas had the most discriminatory validity on oral odor [38]. In the prediction of oral and oropharyngeal carcinoma, the AUC values predicted by seven biomarkers such as Rothia, Haemophilus and Capnocytophaga of oral microbiota could even reach 90%-100% [39]. In the prediction of Barrett's esophagus, the accuracy of using Lautropia, Streptococcus and Enterobacteriaceae was 94% [40]. Obviously, the markers mentioned in the above diseases were not exactly the same as the microbes used in this study for GDM classification, which ensures the specificity of our model. 
However, it should be pointed out that, in view of the fact that many studies had shown a link between GDM and periodontitis [15,16], and their partial overlaps in microbial markers (viz., Streptococcus and Veillonella), we should pay special attention to the oral health status of the patients, and select four kinds of bacteria as much as possible for simultaneous detection when using this method for GDM testing in the future,. In order to provide more choices, we used both SVM and random forest algorithms to build GDM prediction models. From our own performance model, random forests performed better, but the bacteria it needed will be more complicated. Conclusions 17 The results of this study showed that the oral cavity of patients with GDM had a unique microbial composition. Both free-floating and attached oral microbiota varied with GDM. However, microbial shift of oral cavity in GDM+ showed an obvious sample type specificity, and saliva exhibited more significant changes than plaque. There are very few similarities on the shift of oral microbiota between GDM and oral diseases. The oral microbiota of GDM+ was more similar to healthy people than to periodontitis, and no obvious relationship in the oral microbial shifts was observed between GDM and dental caries. Streptococcus and Veillonella depleted in both pregnant women with GDM and patients with periodontitis, indicating that the inferred relationship between the two diseases may be due to the decreased abundance of these two genera. Using the selected bacterial genera to construct SVM or random forests classification model can accurately and specifically distinguish between GDM from healthy pregnant women. Whether using saliva and plaque paired samples, or using saliva samples simply, GDM can achieve a relatively ideal differentiation. Detection of GDM by oral microbial targeting markers may be a promising method to aid in the diagnosis of this disease. Ethics approval and consent to participate This study was approved by the Ethics Committee of Wenzhou People's Hospital. Pregnant women were recruited at Wenzhou People's Hospital. Informed consent was obtained from all participants. Consent for publication Not applicable.
Role of Unfolded Protein Response and Endoplasmic Reticulum-Associated Degradation by Repeated Exposure to Inhalation Anesthetics in Caenorhabditis elegans Background: When an imbalance occurs between the demand and capacity for protein folding, unfolded proteins accumulate in the endoplasmic reticulum (ER) lumen and activate the unfolded protein response (UPR). In addition, unfolded proteins are cleared from the ER lumen for ubiquitination and subsequent cytosolic proteasomal degradation, which is termed the ER-associated degradation (ERAD) pathway. This study focused on changes in the UPR and ERAD pathways induced by repeated inhalation anesthetic exposure in Caenorhabditis elegans. Methods: Depending on repeated isoflurane exposure, C. elegans was classified into the control or isoflurane group. To evaluate the expression of a specific gene, RNA was extracted from adult worms in each group and real-time polymerase chain reaction was performed. Ubiquitinated protein levels were measured using western blotting, and behavioral changes were evaluated by chemotaxis assay using various mutant strains. Results: Isoflurane upregulated the expression of ire-1 and pek-1, whereas the expression of atf-6 was unaffected. The expression of both sel-1 and sel-11 was decreased by isoflurane exposure, possibly indicating the inhibition of retro-translocation. The expression of cdc-48.1 and cdc-48.2 was decreased and higher ubiquitinated protein levels were observed in the isoflurane group than in the control, suggesting that deubiquitination and degradation of misfolded proteins were interrupted. The chemotaxis indices of ire-1, pek-1, sel-1, and sel-11 mutants decreased significantly compared to N2, and they were not suppressed further even after repeated isoflurane exposure. Conclusion: Repeated isoflurane exposure caused significant ER stress in C. elegans. Following the increase in UPR, the ERAD pathway was disrupted by repeated isoflurane exposure and ubiquitinated proteins subsequently accumulated. The UPR and ERAD pathways are potential modifiable neuroprotection targets against anesthesia-induced neurotoxicity. Introduction General anesthesia is an essential practice for various surgeries, and inhalation anesthetics are commonly used for general anesthesia, either alone or in combination with other drugs. Over the past decade, extensive pre-clinical research has consistently shown that anesthetic exposure during the early post-natal period can cause neurotoxicity leading to behavioral or cognitive defects [1][2][3]. However, it is still controversial whether these animal studies translate to the clinical field. Three recent studies [4][5][6] including large populations reported that developmental neurotoxicity may not occur after a single brief anesthetic exposure during early life. In 2016, the U.S. Food and Drug Administration issued a warning regarding anesthetic and sedative agents describing the potential risk of anesthesia-induced neurotoxicity (AIN) in children aged below 3 years, particularly when they are exposed to these drugs over a long term or repeatedly [7]. Diverse mechanisms related to cell death, growth factor signaling, mitochondria, N-methyl D-aspartate, γ-aminobutyric acid, or the endoplasmic reticulum (ER) have been postulated as underlying processes of AIN [8].
In addition, the mammalian target of rapamycin signaling pathway [9] and brain-derived neurotrophic factor were known to be modulated by anesthesia in developing nervous system [10]. However, the precise mechanism underlying AIN has not yet been determined, and this study focused on ER stress and the following changes in the unfolded protein response (UPR) responses and ER-associated degradation (ERAD) pathway after repeated inhalation anesthetic exposure. The ER is an intracellular organelle that facilitates the folding and maturation of protein molecules and their transport to the Golgi apparatus. The release of excessive Ca 2+ from the ER and the relationship of ryanodine receptors have been studied as a causative factor of anesthesia-induced ER stress [11][12][13]. Excessive cytosolic Ca 2+ changes the proteinfolding environment in the ER, leading to ER stress [14]. When an imbalance occurs between the demand and capacity for protein folding, unfolded or misfolded proteins accumulate in the ER lumen and activate UPR [15]. In addition, misfolded proteins are cleared from the ER lumen for ubiquitination and subsequent cytosolic proteasomal degradation, which is termed as ERAD pathway [16]. This process controls the quality and quantity of proteins. Caenorhabditis elegans contains several orthologous genes involved in the human UPR and ERAD pathways, and they were evaluated in several conditions such as aging or neurodegenerative diseases [17,18]. This study evaluated changes in the UPR and ERAD pathways induced by the repeated inhalation anesthetic exposure in Caenorhabditis elegans. Isoflurane was used for the anesthesia of C. elegans, and was administered four times during each stage from L1 to L4. The concentration of isoflurane was the 99.9% effective immobilizing dose, which was determined by pilot examination and used in our previous study [19]. Depending on isoflurane exposure, the worms were classified into the control or isoflurane group. RNA preparation and real-time polymerase chain reaction (PCR) To evaluate the expression of a specific gene, RNA was extracted from adult worms in each group. After collecting the worm pellet, it was ground to a fine frozen powder using liquid nitrogen. After adding a RLT butter, ethanol, and RW1 buffer sequentially to the frozen powdered worm, purified RNA was extracted using RNeasy mini spin column. Finally, RNase-free water was added to the extracted RNA, which was frozen at -80°C until use. Using a NanoDrop (ND-2000, Thermo Fisher Scientific, MA, USA) and an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, USA), the ratios of absorbance at 260-280 nm (OD 260/280) and at 260-230 nm (OD 260/230) were determined and the quality-controlled RNA (OD 260/280 >1.5 and OD 260/230 >1.0) was used for real-time polymerase chain reaction. After first-strand cDNA synthesis using Maxima H minus First strand cDNA Synthesis Kit (Thermo Fisher Scientific, MA, USA), real-time PCR was performed using the cDNA, each gene-specific primer (Table 1), and Power SYBT Green PCR Master mix (Applied Biosystems, MA, USA). Pan-actin was used as a reference gene and the ΔCT was calculated. Fluorescence imaging In zcls4[hsp-4::GFP]V strain, green fluorescence protein (GFP) expression was measured using a Zeiss LSM 710 confocal microscope system (Oberkochen, Germany). L4 stage worms were mounted on agar pad after immobilizing them by sodium azide. 
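As an illustration of the ΔCT calculation with pan-actin as the reference gene described above, a minimal Python sketch follows. The 2^-ΔΔCT fold-change step is the standard Livak method and is included here as an assumption, since the text states only that the ΔCT was calculated; the Ct values in the usage example are made up.

```python
# Minimal sketch of the delta-Ct calculation (pan-actin as reference gene).
# The 2^-ddCt fold-change step is an illustrative assumption, not the authors'
# stated formula.
import numpy as np

def delta_ct(ct_target, ct_reference):
    """dCt = Ct(target gene) - Ct(reference gene, pan-actin)."""
    return np.asarray(ct_target, dtype=float) - np.asarray(ct_reference, dtype=float)

def fold_change(dct_treated, dct_control):
    """Relative expression of the treated group vs. the control group (2^-ddCt)."""
    ddct = np.mean(dct_treated) - np.mean(dct_control)
    return 2.0 ** (-ddct)

# Example with hypothetical Ct values for one gene:
dct_iso = delta_ct([22.1, 22.4, 22.0], [15.0, 15.2, 14.9])
dct_ctl = delta_ct([22.9, 23.1, 22.8], [15.1, 15.0, 15.2])
print(fold_change(dct_iso, dct_ctl))   # > 1 indicates upregulation vs. control
```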
Chemotaxis assay Chemotaxis assay was performed by one experimenter blind to condition to confirm the abnormal behavioral pattern when C. elegans reached the young adult stage as described in our previous study [19]. About 50 young adult worms, which were washed by S-basal buffer, were transferred to the center of the chemotaxis plates (Fig. 1). After 1 h, the number of the worms on each side was counted and chemotaxis index was calculated using the equation: (number of A point -number of C point)/total number of worms × 100 (%). Chemotaxis assay was performed three times and all batches included 3 plates in each group. Chemotaxis assay plate. A 9-cm petri plate covered with nutrient growth medium was used for chemotaxis assay. OP50 was used for attractant, and control site was blank. Before placing worm pellet in the center, 1 μl of 1 M sodium azide was dropped in both sites to immobilize worms when they reached there. Number of worms in a circle within 1.5-cm radius at each point and in other zone was counted and used for chemotaxis index calculation. Statistics Data are presented as the mean and standard deviation. Due to the small sample size, nonparametric test, Mann-Whitney U test, was used to determine the significance of differences between the two groups. Statistical analyses were performed using SPSS (version 21.0; IBM Co., Armonk, NY, USA), and P values of < 0.05 were considered to indicate significant differences. Results HSP-4, a heat shock protein in C. elegans, is used to monitor ER stress and it is homologous to binding immunoglobulin protein (BiP) in mammals. The ER-specific heat shock protein HSP-4 reporter (Phsp-4::gfp) was upregulated in L1 larvae after isoflurane exposure, as reported previously [20]. After repeated isoflurane exposure during the developmental period, the expression of hsp-4::GFP was increased (Fig. S1). Real-time PCR was performed to validate the expression of hsp-4. In isoflurane-exposed worms, the expression of hsp-4 was induced (1.4 ± 0.1 in isoflurane group compared to the normalized control group; P <0.001). To determine the effects of isoflurane on the regulation of the UPR and ERAD pathway in AIN, the expression of related genes was evaluated by real-time PCR under the same conditions. Further, ire-1, pek-1, and atf-6 correspond to inositol-requiring enzyme 1 (IRE1), protein kinase RNA-like ER kinase (PERK), and activating transcription factor 6 (ATF6) in humans and are related to UPR in C. elegans. Isoflurane upregulated the expression of ire-1 and pek-1 (P <0.001 in both); however, the expression of atf-6 (P = 0.726) remained unaffected ( Fig. 2A). Both sel-1 and sel-11 are orthologs of human SEL-1L and HRD1, which are related to the ERAD pathway and induced by ER stress. Interestingly, the expression of both sel-1 and sel-11 was decreased by isoflurane exposure (P <0.001 in both) (Fig. 2B). Polyubiquitin chains bind to proteins destined for degradation to serve as a degradation signal. Both cdc-48.1 and cdc-48.2 are orthologs of P97, which facilitates the degradation of large amounts of misfolded proteins. The expression of cdc-48.1 and cdc-48.2 was decreased by isoflurane exposure (P <0.001 in both) (Fig. 2C). The decreased expression of sel-1, sel-11, cdc-48.1 and cdc-48.2 suggests that the ERAD pathway was inhibited. Therefore, the levels of ubiquitinated proteins were investigated by western blotting with an anti-ubiquitin antibody. 
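To make the chemotaxis index formula and the Mann-Whitney comparison from the Methods above concrete, here is a minimal sketch; the plate counts shown are hypothetical and serve only to illustrate the calculation.

```python
# Minimal sketch of the chemotaxis index formula and the Mann-Whitney U
# comparison described in the Methods. Counts are illustrative placeholders.
from scipy.stats import mannwhitneyu

def chemotaxis_index(n_attractant, n_control, n_total):
    """CI = (worms at attractant point A - worms at control point C) / total x 100 (%)."""
    return (n_attractant - n_control) / n_total * 100.0

# Example: three plates per group (hypothetical counts of A, C, total).
ci_control    = [chemotaxis_index(a, c, t) for a, c, t in [(44, 2, 50), (41, 3, 49), (45, 1, 52)]]
ci_isoflurane = [chemotaxis_index(a, c, t) for a, c, t in [(25, 4, 51), (22, 5, 48), (27, 3, 50)]]

# Nonparametric two-sided comparison, as in the Statistics section.
stat, p_value = mannwhitneyu(ci_control, ci_isoflurane, alternative="two-sided")
print(f"U = {stat}, P = {p_value:.3f}")
```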
Higher levels of ubiquitinated protein were observed in the isoflurane group than in the control group (P <0.001), suggesting that ERAD was interrupted by isoflurane (Fig. 3). The chemotaxis index of N2 was 86.4 ± 8.8% in the control group, whereas it was 45.1 ± 8.4% in the isoflurane group (P = 0.001). In several mutant strains related UPR and ERAD, the chemotaxis indices were measured (Fig. 4). In the control groups, the chemotaxis indices of ire-1, pek-1, sel-1, and sel-11(RNAi) decreased significantly compared to those of N2 (P < 0.001). The chemotaxis indices were not suppressed further even after the repeated isoflurane exposure in these four mutants. Discussion We found that repeated exposure to isoflurane induced the expression of ER chaperones and UPR in C. elegans, indicating an increase of ER stress. To the best our knowledge, it is observed for the first time that ubiquitinated proteins were accumulated considerably with a disrupted ERAD pathway in the isoflurane group than in the control group. BiP is an ER chaperone present on the ER membrane in the absence of ER stress. However, it dissociates from the ER membrane and binds to misfolded proteins during ER stress [21]. In addition, three membrane proteins, namely PERK, ATF6, and IRE1, are activated and initiate the UPR signaling pathway [22]. In C. elegans, hsp-4, pek-1, atf-6, and ire-1 are the homologues of human BiP, PERK, ATF6, and IRE1, respectively. It is known that hsp-4 is induced under ER stress in C. elegans and their UPR is very similar to that of humans [23]. Higher expression of hsp-4 after repeated exposure to isoflurane indicates that isoflurane induces ER stress in C. elegans; a similar result was observed in a previous study [20]. Interestingly, two major UPR regulators, pek-1 and ire-1, were induced in isoflurane-exposed C. elegans, whereas atf-6 was not affected. We could not determine the exact cause of the gene expression mismatch among the three UPR regulators; however, the activation processes differed among the three UPR genes. A previous study reported that IRE1 and PERK have a similar luminal domain and detect unfolded proteins through the same mechanism [24]. IRE1 and PERK exist in an inactive state by binding with BiP in the absence of ER stress. However, ER stress causes BiP to dissociate from the luminal domains of IRE1 and PERK, resulting in their oligomerization and autophosphorylation [25]. In contrast, ER stress causes the translocation of ATF6 from the ER to the Golgi, where it is cleaved to its active form [26] that becomes dominant in the absence of ER stress [27]. Thus, cleaved atf-6 might not be detected by real-time PCR, which caused the discrepancy in gene expression of the three UPR regulators in this study. Misfolded proteins are retrotranslocated from the ER lumen into the cytosol, and they are recognized and cleared by the ERAD pathway to maintain ER homeostasis [28]. HRD1 and SEL-1L, the most representative ERAD complex, correspond to sel-11 and sel-1 in C. elegans, respectively. Previously, deletion of HRD1 or SEL1L in mice was reported to cause embryonic or premature death [29][30][31]. The pathobiological role of the E3 ubiquitin ligase HRD1 and its adaptor protein SEL1L was previously evaluated, and their importance was demonstrated in neurodegenerative diseases. Both were proposed to be involved in the pathogenesis of and identified as therapeutic targets against Parkinson's disease [32,33]. 
HRD1 was also involved in the accumulation of the amyloid precursor protein and the subsequent production of amyloid β, which is linked to Alzheimer's disease [34][35][36]. Polymorphism in SEL1L may also be a susceptibility factor for Alzheimer's disease [37]. According to our results, the expression of sel-11 and sel-1 was suppressed by repeated isoflurane exposure, which might have increased the hsp-4::GFP expression sequestrated to the misfolded protein in the ER lumen. Retrotranslocated misfolded proteins are modified with ubiquitin, and p97 (also known as valosin-containing protein) guides these proteins to the proteasome for degradation [38]. P97 is a component of the ERAD pathway and its gene deletion can have fatal consequences [39]. The orthologs of p97 are cdc-48.1 and cdc-48.2 in C. elegans, and repeated isoflurane-associated decreases in these genes induced aggregation of polyubiquitin-conjugated proteins. The defective ubiquitin-proteasome system interrupts the degradation of retrotranslocated misfolded proteins via the proteasome. Accumulation and aggregation of neurotoxic proteins by dysregulation of the ubiquitin-proteasome system is known to be associated with numerous neurodegenerative diseases [40]. Particularly, polyglutamine aggregation causes several neurodegenerative diseases including Huntington's or Machado-Joseph disease, and p97 homologs have been reported to play a protective role in polyglutamine aggregates [18,41]. Although we could not identify whether the accumulated ubiquitinated proteins act as toxic aggregates, we found that repeated isoflurane exposure might interrupt the elimination of aggregate formation in C. elegans. A previous study showed that inhalation anesthetic can induce neuronal protein aggregation and mislocalization [42]. Behavioral change was evaluated by chemotaxis assay in our study. The significant decrease in chemotaxis index after repeated isoflurane exposure in wild-type N2 might be caused by increased ER stress and defective ERAD. UPR and ERAD pathway essentially play a role in the stress response, and the aforementioned results were observed in the nervous system as well as in the non-neuronal tissue throughout the entire body of C. elegans; however, they are finally involved in maintaining a variety of physiological conditions in a normal state [17,23,43]. Thus, it might be said that developmental repeated exposure of C. elegans to isoflurane worsen the chemotaxis index. Interestingly, several mutants showed different patterns in chemotaxis indices. Loss of ire-1 or pek-1 was known to affect various basal physiology, such as secretory-protein metabolism, longevity, or development [43][44][45]; therefore, the basal chemotaxis indices of ire-1 and pek-1 mutants seemed to decrease in the control group compared to that of N2. Unlike ire-1 or pek-1, the atf-6 mutants were known to have extended lifespan without sensitivity to proteotoxic stress [46,47], and the results of chemotaxis index were not different from that of N2. The other two mutants, sel-1and sel-11, also showed different chemotaxis indices from those of N2. Loss of sel-1 or sel-11 is able to induce ER stress [48] or lead to behavioral defects [49], respectively, which appeared to decrease chemotaxis index in each control group of sel-1 and sel-11 compared to the N2 control. 
Moreover, repeated isoflurane no longer suppressed the chemotaxis indices further in ire-1, pek-1, sel-1, and sel-11, which could be interpreted that those four genes might be significantly affected by repeated isoflurane exposure during developmental period in C. elegans. Both cdc-48.1 and cdc-48.2 were known to act redundantly in the elimination of misfolded protein from ER [50,51]; thus, either cdc-48.1 or cdc-48.2 single mutant did not seem to present any difference from the N2 in their chemotaxis assay. This study had several limitations. First, it is unclear if suppression of each gene may result in decreased levels of each protein. We could not investigate all protein levels involved in the UPR and ERAD pathway because of antibody unavailability for C. elegans. Furthermore, we did not investigate the phosphorylation of each UPR gene or subsequent initiation of downstream signaling events. Finally, this study was conducted in C. elegans. Although this model is valuable for studying certain signaling pathways and cellular processes, which have been well-conserved in humans, it is unclear whether our conclusion can be extrapolated to humans. Despite these limitations, our findings revealed that repeated isoflurane exposure leads to defective expression of genes associated with the UPR and ERAD pathways. Studies are needed to determine how this abnormal gene expression influences potential neurodegenerative consequences by anesthetic agents. In conclusion, we showed that repeated isoflurane exposure caused significant ER stress in C. elegans. Following the increase in UPR, the ERAD pathway was disrupted by repeated isoflurane exposure and ubiquitinated proteins was accumulated subsequently. UPR and ERAD pathways are potential vulnerable neuroprotective targets against anesthesia-induced neurotoxicity.
Translating Authentic Selves into Authentic Applications: Private College Consulting and Selective College Admissions Stratification in selective college admissions persists even as colleges’ criteria for evaluating merit have multiplied in efforts to increase socioeconomic and racial diversity. Middle-class and affluent families increasingly turn to privatized services, such as private college consulting, to navigate what they perceive to be a complicated and opaque application process. How independent educational consultants (IECs) advise students can thus serve as a lens for understanding how the rules of college admissions are interpreted and taught to students. Through 50 in-depth interviews with IECs, I find that IECs encourage students to be authentic by being true to themselves but that demonstrating authenticity requires attention to how one’s authentic self will be perceived. Translating an authentic self into an authentic application also involves class-based and racialized considerations, particularly for Asian American students who are susceptible to being stereotyped as inauthentic. These findings suggest that efforts to improve diversity must be carefully implemented, or they risk reproducing inequality. to mitigate it.Students enter the college application process with varying degrees of resources and information, and family class status plays an important role in students' abilities to understand and meet colleges' standards for qualifications and cultural fit (Lamont and Lareau 1988;Lareau and Weininger 2008;Silva, Snellman, and Frederick 2014).As colleges' criteria for evaluating merit multiply, to include assessments of students' character and authenticity, so too do families' strategies for navigating admissions.Privatized services, such as standardized test prep and tutoring, have become prominent features of the socalled ''admission industrial complex'' (Liu 2011). A fast-growing part of this sector is college consulting.These private consultants, also known as independent educational consultants (IECs), guide students and families through the application process, which now incorporates not only standardized tests but also the presentation (Beljean 2019) of an applicant's qualifications via personal statements, short essays, letters of recommendation, and interviews.Although research on IECs is relatively nascent, IECs' numbers quintupled between 2005 and 2015, to approximately 7,500 consultants in the United States (Sklarow 2016).The growing engagement of IECs reflects families' strategic responses to what they perceive as an increasingly complicated and opaque (Bastedo et al. 2018) college application process and perceived deficits in school-based counseling (McDonough 2004).As a result, the ways IECs advise students can serve as a lens for understanding how colleges' stated interest in subjective criteria are interpreted and taught to students and how class and race remain embedded in the process. 
Through 50 qualitative interviews with IECs, I find that IECs echo colleges in promoting the importance of being authentic in one's applications; they thus understand their role as helping students demonstrate authenticity.However, the persistent need to present an attractive application reveals the inherently evaluative nature of authenticity.Moreover, the demonstration of authenticity is subject to class-based and racialized considerations, showing how class remains encoded in the evaluation process and how cultural capital differs across racial groups (Cartwright 2022). Class-Based and Racialized Cultural Capital in College Admissions For decades, scholars have drawn on various definitions (Davies and Rizk 2018) of cultural capital (Bourdieu 1979) to study the role of culture in reproducing inequality via educational and other institutions (Lareau and Weininger 2008).In the case of college admissions, cultural capital's role in the reproduction of inequality can perhaps best be understood as the ''institutionalized, i.e., widely shared, high status cultural signals (attitudes, preferences, formal knowledge, behaviors, goods and credentials) used for social and cultural exclusion'' (Lamont and Lareau 1988:156).That is, colleges apply institutionalized standards, such as high school GPA, class rank, and standardized test scores, to evaluate students' qualifications.From students' perspectives, applying to college requires ''savvy,'' or ''the tools, skills, knowledge, behaviors, and resources that must be mobilized, more or less consciously, in order to apply, get in, decide where to go, choose a major,'' and so on (Silva et al. 2014:40). Class fundamentally affects access to these resources.Middle-class parents are more likely than working-class and poor parents to mobilize cultural resources, such as their own knowledge of the higher education system, to aid in their children's transition to a college-going adulthood (Lareau and Weininger 2008) and to circulate admissions savvy in their networks (Silva et al. 2014).These admissions-related behaviors are a form of habitus for privileged families (Weis 2016), enabling students to accrue the credentials and send the high-status signals that colleges seek.Class has remained relevant even as institutional definitions of merit have evolved over time.For example, standardized tests became prominent in the post-World War II era-dubbed ''the shifting meritocracy'' (Alon and Tienda 2007)-with the goal of helping schools identify ''intelligent'' students regardless of class background (Lemann 2000).However, fulfilling the adage, ''what gets measured gets managed,'' the growing use of standardized tests spurred the development of services oriented to helping students improve their scores, such as prep courses and private tutors (Alon and Tienda 2007).Many observers now note that standardized test scores reflect not aptitude but, rather, socioeconomic status and other indicators of inequality (Dixon-Roman et al. 2013). 
Literature on cultural capital is often race-blind (Richards 2020), which can lead scholars to construct it as equivalent to ''Whiteness'' (Wallace 2018).However, questions about race are fundamental to understanding the strategies used to succeed in admissions.Standardized tests have long been criticized for being racially and culturally biased against Black, Latinx, and Indigenous students (Rosales and Walker 2021).Debates about whether Asian Americans are held to higher standards on SAT scores have also persisted for decades (Takagi 1990), in light of Asian American students outperforming White students (Espenshade and Radford 2009;Hsin and Xie 2014).Although some see Asian American students' educational achievement as a sign of successful assimilation, others view this assimilation as fundamentally racialized (Lee and Kye 2016).Indeed, Asian American families' emphasis on academics has been criticized as ''weird'' (Jime ´nez 2017), damaging to non-Asian students' educational opportunities (Dhingra 2020), and obsessive (Warikoo 2022).Moreover, the ''model minority'' stereotype of Asian Americans obscures heterogeneity within the category and casts other racial minority groups as responsible for their own challenges (Ngo and Lee 2007). The Promises and Challenges of Holistic Review Partially in response to these concerns about inequality, the use of ''holistic review'' in admissions has increased, particularly at selective universities.Holistic approaches, which can include the consideration of applicants' characteristics, circumstances, and environments (Bastedo et al. 2018), have become widespread; in a 2019 survey of admissions officers, more than half responded that ''positive character attributes,'' writing samples, and recommendations were of ''considerable'' or ''moderate'' importance (National Association for College Admission Counseling [NACAC] 2020).Even the College Board, maker of the SAT, has produced a dashboard aimed at contextualizing scores within students' school and neighborhood environments.Importantly, unlike early practices in which colleges evaluated nonacademic factors in order to exclude groups that scored well on entrance exams (Karabel 2005), contemporary use of holistic review is often a strategy for expanding colleges' conceptions of ''merit'' and enrolling more diverse classes (Gebre-Medhin et al. 2022;Jaschik 2020).Moreover, cognizant that socioeconomically privileged students may be engaged in an arms race of extracurricular and volunteer activities designed to impress admissions, recent initiatives have encouraged colleges to prioritize ''authentic'' engagement (Weissbourd 2019).Indeed, ''authenticity'' has become an admissions buzzword, with colleges appearing to prefer applicants who are imperfect but genuine (Feeney 2021). Yet one recent analysis revealed that considering subjective factors (i.e., information from interviews, essays, and character evaluations) had no effect on enrollment outcomes for low-income or racially marginalized students at selective universities (Rosinger et al. 2021).Furthermore, a computational analysis of 240,000 University of California admissions essays found that essay content and style were correlated with self-reported household income, suggesting that subjective factors are not immune to socioeconomic biases (Alvero et al. 2021). 
1 More recently, Harvard has faced accusations that its ''personal rating'' metric discriminates against Asian American students, who are expected to be academically successful (Lee and Zhou 2015) but also face stereotypes about their lack of warmth and leadership capabilities (Fiske et al. 2002;Lin et al. 2005).From a theoretical perspective, even ''authenticity'' is not value, class, or race neutral; rather, it is subject to audiences' expectations and moral judgment (Grazian 2018;Peterson 2005). These prior studies serve as cautionary tales for the use of holistic review, raising the following questions: How do class and race remain encoded in the college application process, particularly for subjective components of college applications?Moreover, are there racial differences in what constitutes cultural capital in applying to college?Answering these questions can lead to a better understanding of how stratification persists in admissions, even as universities' standards of evaluation shift over time.To do so, I draw on an emerging strategy for navigating college admissions: the engagement of private college counselors. Independent Educational Consulting in the College Admissions Process Just as test prep services emerged in response to the growing use of standardized tests, additional services have sprung up to provide resources to families seeking to prepare their children for the college admissions process (Liu 2011).These evolving strategies reflect how ''the terms of interaction and competition . . .are constantly being redefined'' (McDonough, Korn, and Yamasaki 1997:301) as actors within a social field (Bourdieu 1979) gain cultural capital and subsequently alter the rules of the game. One increasingly common strategy is hiring a college consultant, also known as an independent educational consultant (IEC).Private college consulting has existed since at least the 1990s, but the profession has grown exponentially in the past two decades, now comprising more than one-third of a $1.9 billion educational consulting industry (Hiner 2020).IECs' services vary but can include assistance on all components of the application, such as essay coaching, interview preparation, and deciding which schools to apply to. Prior research has examined IECs using a field analysis approach, situating IECs among applicants, high schools, and colleges (McDonough et al. 1997).IECs often have relationships with all these actors, and many have backgrounds in education, whether as admissions officers, school-based counselors, or teachers (Sklarow 2018).Another trajectory is that of a parent, typically a middle-class mother, who works with her own children through the application process and then parlays that experience into a business while seeking formal training and credentialing.IECs may also visit campuses and attend professional development and networking events, thus engaging with admissions as well.IECs are thus uniquely situated within the admissions field, not defining the terms of competition but, rather, interpreting them. There is limited up-to-date research on who hires IECs.One exception is the nationally representative High School Longitudinal Study, which found that 12 percent of juniors surveyed in 2012 reported having consulted a hired counselor (Ho, Park, and Kao 2019).IECs are typically engaged by higher-income families (Ho et al. 
2019), with the majority describing their typical client as ''upper class/wealthy'' or ''professionals/ upper middle class'' (Independent Educational Counsultants Associations 2015); however, many IECs also engage in pro bono work. 2 Early work found that students who consulted an IEC were disproportionately more likely to be White (McDonough et al. 1997); more recent data suggest students with immigrant parents are more likely to report having consulted a hired counselor compared to students with nonimmigrant parents (Ho et al. 2019). 3 Although there is limited research on whether IECs affect admissions outcomes, some work suggests that IECs provide strategic value.For example, receiving advice from a private counselor can increase the likelihood that students will enroll via early decision, which is often considered an advantage because colleges typically accept more students in early admissions pools (Park and Eagan 2011).Some IECs are also involved in other college-going strategies, such as helping clients obtain internships and advising on course enrollment and extracurriculars (Gardner 2001;Kirp 2004).Families also seek IECs' assistance with organization and discipline, managing expectations, alleviating stress, and mediating between parents and children (McDonough 1994;McDonough et al. 1997;Smith 2014;Smith and Sun 2016;Sun and Smith 2017). The growth of the IEC profession therefore reflects families' strategic responses to what they perceive as an increasingly complicated college application process.As a result, the ways they advise students can serve as a lens for understanding (1) how colleges' stated criteria are interpreted and taught to applicants, (2) how applicants might think about sending the desired signals, and (3) the class and racial dimensions of doing so. Recruitment I draw on 50 semistructured qualitative interviews with IECs, conducted between fall of 2019 and summer of 2020.Recruitment was limited to California and New York for two main reasons: First, early work on IECs found high concentrations in these states (McDonough et al. 1997).Second, interviews were part of a larger project examining perceptions of racial identity and diversity in the context of recent controversies about affirmative action, and both states have large non-White populations. The majority of participants (n = 39) were recruited using membership lists of national professional organizations that include IECs. 4 I initially randomly sampled IECs from these organizations.As data collection proceeded, I sought geographic and racial diversity, using cities and last names 5 to refine recruitment.Thus, the final sample should not be construed as randomly selected but as maximizing variation within these states.This method tended to yield IECs who worked as solo or small-group practices.However, IECs are not required to hold professional certifications or affiliations, and some work for larger firms.I therefore identified firms cited in news articles about IECs or listed on LinkedIn, and I reached out to employees at those firms, yielding seven interviews.Finally, I recruited a small number of interviewees through personal networks and snowball sampling, yielding four interviews. 
Sample Characteristics Many respondents expressed that preserving their students' anonymity and their own reputations was critical.I therefore use pseudonyms for all respondents and include demographic descriptors only when relevant.Because demographic information could potentially be triangulated to identify individuals, I provide summary demographics in lieu of individual details (see Table 1). As mentioned previously, I sought racial diversity in my sample.Table 2 summarizes IECs' descriptions of their own and their clients' racial and ethnic identities.Among my 50 respondents, 30 self-identified as White, and 16 identified as Asian; this sample likely overrepresents non-White IECs, according to the limited available data. 6The majority of White interviewees worked primarily with White clients, and the majority of Asian interviewees worked primarily with Asian clients.Most worked with very few, if any, Black, Latinx, or Indigenous clients.This is likely due to a combination of neighborhood segregation and racial homophily in the referral networks that provide IECs with their clientele. Interviews The typical interview lasted between 60 and 90 minutes.Interviews probed IECs' processes for working with students; the advice they tended to give about elements of the process, including deciding where to apply, writing essays, and obtaining letters of recommendation; their perceptions on issues relating to diversity, stereotyping, discrimination, and affirmative action; and their overall perceptions of their work and the profession.Interviewees were not compensated. My status as a graduate student researcher at Columbia University was salient during interviews.Respondents inserted ''Columbia'' into hypothetical scenarios and asked about my experiences there.In addition, nearly all participants had earned a postbaccalaureate degree or certificate, often in college counseling; were often familiar with scholarly work on college admissions; and expressed interest in my findings.I was therefore granted access to a population that was both interested in my background and willing to participate and with whom I shared a common language. Because interview topics included the consideration of race in admissions, I was keenly aware that my presence as an Asian American woman could influence participants' responses.My analysis on this front is informed by the idea that although respondents may have been performing, that performance itself is analytically useful insofar as it reveals respondents' values and priorities (Monahan and Fisher 2010;Pugh 2013).How IECs presented themselves to me provided useful information on how IECs present themselves to clients and, in turn, on how they advise clients to present themselves to admissions officers. 
Data Analysis

With respondents' permission and Institutional Review Board approval, interviews were audiorecorded and transcribed verbatim. For respondents who declined to be recorded, I took detailed notes during the interview. I analyzed transcripts using a combination of deductive and open coding in Atlas.ti. I created attribute and index codes (Deterding and Waters 2021) based on the interview schedule, and I developed additional analytical codes as new themes emerged, regarding topics such as competition, mental health concerns, inequality, and authenticity (for examples of codes, see Table S1 in the online Supplemental Material). After coding, I used visual displays to organize data and look for patterns and relationships (Miles, Huberman, and Saldana 2014), for example, sorting responses according to respondents' client demographics. Finally, I wrote analytic memos throughout the data collection, coding, and analysis processes as a means of reflecting on and exploring preliminary themes and findings.

TRANSLATING AUTHENTIC SELVES INTO AUTHENTIC APPLICATIONS

Most consultants typically encouraged students to apply to a "balanced" list of schools, including ones with higher acceptance rates (e.g., above 50 percent), where they were likely to be admitted. Nevertheless, they characterized their clients as primarily being interested in more selective, often extremely selective, universities. These more competitive schools comprise a minority of institutions: Only 19 percent of four-year institutions accept fewer than 50 percent of applicants, and only about 7 percent of schools accept fewer than 30 percent of applicants (Desilver 2019). Yet students' interests meant IECs' time was likewise concentrated on these schools, particularly on the subjective components that are more likely to be valued by private and selective universities compared to public and less selective universities (NACAC 2020).

Often, interviewees worked most intensely with students on personal essays. Here, they emphasized that regardless of students' résumés and activities, students should seek to portray their true selves and values, reflecting colleges' interests in authenticity (Feeney 2021). Interviewees' responses thus revealed how IECs transmitted knowledge about the perceived value of authenticity when it came to subjective factors and how IECs helped students translate their experiences into applications they believed would be both authentic and attractive to admissions. As one respondent, Sam, put it:

. . . The answer is, the colleges want to see the kid really excelling in their extracurricular and that's only going to happen if the kid is really into it. So it's whatever the kid wants to do. The parents are like, that can't be the answer.

According to Sam's and similar respondents' logic, students who pursued their true interests would be happier and healthier. Their passions would be readily apparent in their applications, which would ultimately be more successful.

Being Authentic versus Demonstrating Authenticity

However, whereas being authentic was the product of students' natural interests, demonstrating authenticity was another matter, one that was imbued with contradictions. On the one hand, authenticity, by definition, cannot be faked. As Anna, a White IEC, said,

I think it's very apparent and obvious when they know it looks good so they're just writing about it, versus a kid who's actually genuinely passionate and can write about it authentically.
Other interviewees described being able to tell if an essay had been written or overly coached by an adult or if the student tried too hard to impress. As another IEC, Caitlin (White), said, "It's one of those things that's really hard to describe, but you know it when you feel it." Yet IECs also acknowledged that honing an authentic voice took work. Anna added:

And so I'll help them add, you know, I'll coach them on how to bring that voice authentically into the essay. Instead of it reading kind of like a stiff essay where they're just listing out what they did.

Anna's response suggests that students all inherently have authentic voices, but expressing that voice was not always easy. In fact, IECs interpreted their ability to help students reveal their authentic selves as a function of getting to know students over long periods of time. As Cherie, a White IEC, described,

[T]he beauty is knowing the kids for a longer period of time and spending quality time with them, talking about their preferences and knowing who they are. . . . I can hear if they've given it to a friend or something else like that in their writing, and just saying, "This doesn't sound like you."

With only a few exceptions, interviewees suggested that the earlier a student hired them, the better. If IECs could encourage students to pursue their true interests early on, those interests could eventually translate into an authentic essay. Building long-term relationships was critical to learning more about students and thus helping them demonstrate their authenticity. This example also demonstrates how claims of authenticity are subject to evaluation (Peterson 2005). That is, the work of portraying oneself authentically involves considering whether one will be perceived as authentic.

Constructing an Authentic Application

Interviewees encouraged being authentic, but applicants are ultimately presenting themselves for evaluation. IECs' advice thus included caveats about how to meet what they perceived to be universities' expectations. First, demonstrating authenticity required internal consistency. As Deborah, a White IEC, said,

The authenticity will come through. . . . It will tell that student's story, when the pieces go together, when what they're writing in their essays and their short answers are matching the classes that they're taking and the extracurriculars that they do and the community service they do. . . . It's when the pieces just don't go together. Like, this is weird. Yes you could do musical theater, but you want to go to a seven year [BS/MD] program, but you didn't become an ambulance rider.

Deborah, like others, emphasized that authenticity was first and foremost the result of passionate interest. However, she also implied that demonstrating authenticity entailed consistency, or "all the pieces go[ing] together," including extracurriculars and coursework. If not, an applicant's stated goals could be perceived as inauthentic.

Indeed, over the course of data collection, I encountered a common refrain: Selective colleges seek well-rounded classes, not well-rounded individuals. Students who failed to realize this and whose applications therefore lacked internal consistency risked being dismissed (Toor 2001). Thus, a student's interests, or at least how they presented their interests, would ideally coalesce around a theme, creating a coherent narrative for an admissions officer. IECs sought to help students identify that theme and ensure it translated onto their applications.
A second expectation was the avoidance of clichéd essay topics; per interviewees, these included sports-related injuries, divorce, or the deaths of pets or extended family members. IECs typically added that students could write about these topics as long as they approached them from a unique perspective. Myra, an Asian American IEC, told me that in her previous role as an admissions officer,

I probably read 7,000 applications, which translates to 35,000 essays. And so you do see some common themes arise . . . but I think it's not that you write about it or don't, it's that you write about it in sort of a deep and probing way . . . expressing your unique voice and your unique story.

Advice on avoiding clichés is readily available online, but the individualized advising IECs provided and their experience reading hundreds, if not thousands, of essays enabled students to understand what could constitute a unique and therefore more authentic angle on these otherwise tired topics.

Finally, IECs relayed an expectation of crafting personal growth narratives, or stories in which students demonstrated how they grew or learned as they overcame a challenge. Andrew, an Asian American IEC, cited the following example:

There was a girl [who] had a killer résumé and we said, let's not talk about that. Let's talk about who you are as a person, or maybe what kind of an obstacle or challenge you had and it's centered on, let's say, forgiving her parent for something she did. And that shows maturity, that shows the ability to grow.

Like Andrew, most interviewees broadly recommended against rehashing one's achievements in favor of stories about personal or emotional growth.

This advice was particularly important for students who wanted to write about topics that IECs were sometimes wary of, such as personal difficulties, trauma, and mental health challenges. As Heidi, a White IEC, told me,

I think there are topics that may appear sensitive, but if they're handled the right way - disappointment, a failure, something that they're not - might be ashamed of or not proud of, that somehow they turned around and turned it in, and can actually present it as a learning experience and something positive . . . those can be dealt with.

Again, IECs encouraged students to write what was true to them but advised students that framing these topics in terms of personal growth could avoid setting off red flags. Likewise, for students who wanted to write about mental health challenges - often, depression or eating disorders - IECs said they would encourage them to consider whether they had overcome (or were in the process of overcoming) it. Kristen, a White IEC, made the following suggestion:

If they just got diagnosed, maybe it's not the right topic, but if they've been treated for anxiety and depression for two years and they have a therapist they see twice a week and they're going to stay with that person and they feel like they need to talk about it, then, okay.

In other words, students who wanted to write about mental health challenges should do so in a way that would assure admissions readers that they could manage them.
In summary, interviews with IECs revealed that when they encourage applicants to be authentic, what is actually required is the demonstration of authenticity. To be clear, engaging in the work of demonstrating authenticity does not imply that students are not still being authentic; as authenticity scholars point out, the performance of authenticity can still be sincere (Grazian 2018). Nevertheless, it does involve considering how one's authentic self will be perceived.

CLASS ADVANTAGE AND DISADVANTAGE IN THE DEMONSTRATION OF AUTHENTICITY

Like most IECs, my respondents primarily worked with families they described as middle class or affluent. These students possessed many of the expected advantages, including the time and resources to engage in the extracurricular and volunteer activities they thought colleges would value. Moreover, many of the highly selective colleges they targeted are more likely to enroll students from wealthier socioeconomic backgrounds (Chetty et al. 2017). Likewise, prior analyses of college application essays have found that applicants' choice of essay topic and the content of their responses are correlated with socioeconomic status (Gebre-Medhin et al. 2022; Jones 2013).

Nevertheless, interviews revealed that advantaged students still had to consider how to translate their ample social and economic capital into attractive applications. For example, several respondents mentioned that students should be careful in their essays when writing about experiences of privilege. These experiences might include community service trips or extensive travel, especially if they included travel to developing nations or wealthy students' encounters with poverty. As Vivian, an Asian American IEC who described her clientele as primarily upper-middle and upper class, said:

The other essay that I try to avoid [is] kind of talking about, "Oh I visited this country that was like a third-world country and I'm so blessed that my life is so great." Ugh. Please don't do that. But if you want to write about your experience traveling, and you have a perspective that really shows who you are as a person . . . you can write it, but I'm nervous that you're gonna write about this topic.

In other words, IECs advised students to avoid signaling economic privilege - or again, to write about these experiences in a way that showcased a unique perspective. Interviewees also said they encouraged families to be thoughtful about community service, especially for wealthy families who might engage in "pay-to-play" opportunities - as Meredith (White) put it, "the 'pay $10,000 to paint a wall in Costa Rica' approach." Such forms of community service could be perceived by admissions officers merely as attempts to check a box. Instead, IECs talked about encouraging students to pursue more meaningful and longer-term community service projects closer to home. This advice is echoed elsewhere in the admissions world. For example, one recent report endorsed by more than 100 college admissions deans recommends that high schools provide "opportunities for authentic student service" rather than engaging in a "community service Olympics" in which students compete to perform the most impressive service activities (Weissbourd 2019).
On the other end of the socioeconomic spectrum, most interviewees did not work with substantial numbers of lower-income students, but many occasionally took on pro bono clients or volunteer work. Prior work has found that advisors in similar roles can expose students from underrepresented and low-income backgrounds to the cultural capital that can help them navigate admissions (Bernhardt 2013; Rosenbaum and Naffziger 2011). Among interviewees, one theme that arose was the value of providing context about these students' backgrounds. For example, some interviewees emphasized that a lack of expensive extracurricular activities did not disqualify them from selective schools. Rather, caretaking and outside work also count as extracurriculars. As Connie (Asian American), who said she worked with one to three reduced-rate or pro bono students per year, said:

[I]f a student comes in that really has been struggling financially, then I will encourage them to talk about, not necessarily just, "I'm poor," but more about the kinds of family responsibilities or maybe their personal experience that then reflects what their socioeconomic background is. . . . I'm always encouraging students to value what they do with their time, even if it's not in the typical expected kind of organized way.

The Common App itself includes "family responsibilities" as an example category of activities and explains that knowing an applicant's family responsibilities helps admissions officers understand the applicant and their academic context. Nevertheless, students who lack admissions savvy may not be aware of this information's value.

More strikingly, IEC interviews revealed how the desire for authenticity can upend traditional status structures (Fine 2003). Not only did caretaking and nonacademic work responsibilities "count" as extracurriculars, they could be translated into assets. As Tammy (White), who spoke about providing free workshops and some volunteer services, said:

If I get a student that doesn't have a lot of extracurricular 'cause they have to take care of their siblings, that's gold to me, right? I hate to say that. You have a job at CVS. I'll take that any day over anything else you did, because you're working.

That is, service-sector employment could provide a signal that colleges might value more than expensive extracurriculars. Although such activities may not typically be considered high status, they take on new meaning within the admissions context - particularly at the very wealthy, very competitive institutions that tend to enroll relatively fewer low-income students (Chetty et al. 2017) but that promise to meet the full financial need of those they do enroll (Selingo 2020). In part, this could be because students undertake these activities due to family needs rather than as a response to colleges' expectations - thus making their engagement totally authentic.
However, IECs still considered how students went about sending these kinds of signals. Specifically, interviewees mentioned that students who wrote about caretaking or work should emphasize traits like grit and resilience. For example, Allison (White) volunteered for a college access organization in addition to working as a private consultant. When I asked whether lower-income applicants should address their socioeconomic status in their applications, she responded,

If students don't have as many bright and shiny things to put on their application, it's much more understandable if you had to stay home and take care of a younger sibling or your parents didn't have a car or you were working a job in the summer . . . those are all critical to discuss, in the right context and in the right way. So as long as it's not like a victimization, showing more how a student has grown through those experiences, then it's definitely appropriate.

In line with IECs' more general advice to consider framing essays as personal growth narratives, Allison encouraged lower-income students to discuss their experiences in similar terms. This advice suggests that including caretaking and work responsibilities as activities is not solely about providing context for admissions officers (i.e., as suggested by the Common App's instructions). Rather, associating such activities with personal development and positive character traits helped translate them into the signals that colleges sought. Again, most respondents did not work with significant numbers of low-income students, but when they did, they aimed to help such students translate their experiences into narratives that they believed would resonate with an admissions officer.

RACIAL DIFFERENCES IN THE DEMONSTRATION OF AUTHENTICITY

Due to purposive sampling, about one-third of interviewees worked predominantly with Asian American clients. Very few had substantial numbers of paying clients whom they identified as Black or Hispanic, but about half occasionally worked with Black, Hispanic, and American Indian students, primarily pro bono. Interviews revealed that for these groups, the demonstration of authenticity carried additional racialized considerations above and beyond the usual advice given to White students. In contrast, race rarely emerged as a consideration when discussing White students' authenticity.

Forestalling Stereotypes about Asian American Students

When it came to being authentic, respondents overwhelmingly told me that they advised all students to be honest. For example, some respondents said they occasionally received questions from Asian American students about whether they should leave racial demographic questions blank to avoid potential penalties. These IECs said they advised students not to try to hide their identity - admittedly in part because having ethnic surnames would give them away regardless. The demonstration of authenticity, however, was less straightforward, particularly for Asian American students.

First, unique to Asian American students was the inclusion of immigration as a potentially cliché essay topic. For example, Vivian, an Asian American woman who described her clientele as predominantly Asian American, relayed that she recommended avoiding immigrant stories. When asked why, she responded,

It's cliché. So a lot of kids talk about having parents who are immigrants and now they're here and you know - there's just so many of those that we try to avoid them. . .
. But if you can write an amazing essay about it, and you feel strongly, let me see what you have.

As with sports injuries or dead pets, interviewees did not ban writing about immigration but, rather, advised that students should approach the topic creatively and connect such stories to their own personal growth. Laura, a White woman who described her clientele as primarily Asian American, said:

When you get someone who writes an essay that they want to go to college because they want to pay their parents back for all the struggle and pain that their parents went through for immigrating, the admissions office doesn't want to hear that, and they're very blunt about it. "We don't want to hear that. Yeah, Grandma's great. Grandma suffered and did everything for you, but Grandma's not applying for college. We want to hear about you."

Even if being from an immigrant family is an important part of a student's journey or identity, Laura believed admissions officers were only interested in such journeys as they related to students as individuals. Laura opined that colleges' preferences for personal growth narratives suggested "some ignorance" or "some intolerance" of alternative cultural norms, prizing individualist cultures over family-oriented ones. Indeed, this orientation reflects a common interpretation of authenticity as valorizing individualism (Peterson 2005; Williams 2006), suggesting that in this context, the notion of authenticity itself may preclude collectivist norms.

Interviews did reveal some variation. Some IECs disagreed that immigrant stories needed to be treated with caution. Jason, a White male who described his clientele as predominantly Indian American, summed up his approach:

I certainly encourage kids to consider whether they want to discuss ethnicity. . . . I want [them] to think about, okay, I'm Indian American. What aspects of Indian culture do I relate to or not relate to? I'm Chinese American, but what aspects of Chinese culture do I relate to or not relate to, and so forth. Sometimes even the kids who haven't thought about that much can end up writing interesting things about it once they're given half a chance to think about it.

Unlike Vivian, who called immigrant stories "cliché," Jason stated that the immigrant story was "interesting" to him. Notably, Asian American interviewees were more likely than White interviewees to caution against writing about immigration. This pattern suggests a few possibilities: Perhaps Asian American consultants are more attuned to potential stereotyping, Asian American consultants may hear immigration stories more often because of their own identities, or immigration stories become cliché only when associated with Asian Americans.
The second way in which demonstrated authenticity was racialized for Asian American students was that racial stereotypes affected how their interests were perceived. As with all students, interviewees were adamant about encouraging Asian American students to be authentic by pursuing their genuine interests, even if they evoked stereotypes. Nevertheless, both White and Asian IECs also said that such students could stand out from other Asian Americans in two ways - suggesting that Asian American students would be compared to other Asian American students rather than with the pool of applicants as a whole. The first strategy was to emphasize interests that were less stereotypical. As Myra, an Asian American IEC who said that about half her clientele was Asian, told me:

If you really love robotics, then you should do robotics. I don't care if you're an Asian male, it doesn't matter. . . . I will say in the case where a student is - let's say for that South Asian male who's equally interested in history and robotics, then I say, let's make sure the history part of your interests are coming out loud and clear. [But] you can never manufacture that kind of interest, it has to be genuine.

No interviewee talked about actively dissuading Asian American students from pursuing stereotypical activities. However, Asian American students who happened to be genuinely interested in activities or fields that were seen as less stereotypical, like Myra's hypothetical South Asian male, were nudged to emphasize those aspects on their applications.

A second strategy for making "stereotypical" interests stand out was by framing them differently - specifically, to demonstrate the sincerity of their interest. Laura, a White woman who described her clientele as predominantly Chinese and Indian, spoke about feeling that stereotypes about Asian American students were unfair. In the face of this unfairness, she tried to help students put together the best possible applications:

What I try to do is help them focus on the part of the piano that's different. 'Cause there's that negative stereotype of Asians being grade-grubbing, award-grubbing. . . . You're not in it because you love piano, it's, you want the certificate type thing. So I try to help them show how much they love the piano. We're not going to talk about what it was like during your examination for the level 10. We're going to talk about why you really love Mozart's Rondo Alla Turca. . . . We try to concentrate on something that might be a little more appealing to the admissions office.

The advice to focus more on passion and less on awards was not limited to Asian American students. However, whereas IECs did not perceive that the authenticity of White students' interests would be scrutinized in racial terms, they did suggest that Asian American students in stereotypical activities needed to demonstrate that they engaged in them out of genuine interest and not merely due to parental or cultural expectations.
Another framing for these activities involved emphasizing community service. Amy, an Asian American who said she worked predominantly with Chinese international and Chinese American students, talked about encouraging students to pursue what they "really care about" rather than activities their parents started them in. When I asked whether an activity that a student "really cared about" couldn't also be an activity that a parent had started them in, she responded:

I'm not saying you have to stop doing [piano] if you love it. And if you do, that's great. But I'm telling them it's important for a lot of these top colleges - like you're using your talent to help others, right? You're not just saying, I'm a great piano player and I'm going to get all these awards, but say, maybe go, I teach piano, right? Or maybe go start a nonprofit and go play at nursing homes, or go play at hospitals, you know what I mean? Like do something with your talent that actually gives back.

In other words, students could demonstrate that they were truly passionate about an activity if they used that activity as a way to "give back." Again, advice to engage in community service, much like advice to avoid focusing on awards, was not limited to Asian American students. However, tying their interests to community service seemed particularly important for students whose activities might otherwise be perceived as inauthentic. This strategy is consistent with research showing that suspicions of inauthenticity can be offset by evidence of prosocial motivations (Hahl and Zuckerman 2014). In this case, Asian American students - stereotyped as academically successful and engaged in high-status extracurriculars - could alleviate suspicions about the authenticity of their interests by connecting them to community service.

Thus, IECs gave Asian American students much of the same advice they gave to White students but couched in racialized terms. Asian American students were encouraged to pursue their interests, whether stereotypical or not, but were nudged to frame their interests in ways that could alleviate concerns about their inauthenticity. They were also advised to avoid racialized clichés.

Foregrounding Black and Hispanic Identity

Finally, although these cases were rarer due to respondents' typical clientele, some interviewees encouraged students from underrepresented racial or ethnic groups to consider incorporating their identities into their applications. These IECs generally stated that they were not trying to use race merely as an inauthentic "hook." Instead, they portrayed it as just one possible essay topic - although one with the potential added benefit of demonstrating the student's contribution to campus diversity. Carol, one of few respondents who said her clientele was neither predominantly White nor Asian, gave the following example of a biracial student:

So Mom is Anglo, is White, and Dad is Black. And boy, I could not get him to write about that at all. I think he finally just put like one sentence in there. . . . And he wanted to write about other things, which is fine.
Carol felt that some of her demographically diverse clientele were aware that their racial identity could help them stand out, but others were not. She therefore sometimes encouraged them to consider writing essays about such topics - although ultimately, as in the example she gave, she deferred to students' preferences. Authenticity remained important, as Pamela, a White woman, told me:

[For] institutional priorities, diversity plays a big part. I may gently nudge a kid to try to explore things about their identity or their - that I think would help them in that way, but I don't want it to feel manufactured, that they're just doing it simply to check off a box.

Here, Pamela alluded to universities' interest in recruiting diverse student bodies, often as a marker of prestige (Holland and Ford 2021), and suggested that students could appeal to this interest in their essays. Nevertheless, she also expressed deference to students' being authentic to their own interests.

In other instances, writing about racial or ethnic identity was a means of authenticating students' self-classification on close-ended demographic questions. Peter, who had previously worked in admissions, said he felt that applicants who self-classified as "Hispanic" were particularly and unfairly prone to scrutiny at highly selective schools: "There's kind of this internal Hispanicity test, like, is this kid really Hispanic enough?" According to Peter, who identified as Asian, some schools differentiated between students who merely checked the Hispanic box and those who could "really bring Hispanic culture" to their campuses.7 When I asked how students might address this, he responded:

It might be like thinking about their summer plans. Like, are you going to go back to Peru? Are you going to go back to Honduras? Are you going to work with students tutoring Spanish or something like that. . . . And, I don't want to speak from both sides of my mouth, 'cause I'll say like, "Hey, we want you to do the things that you want to say, that you enjoy to do. But I also want you to know how it could be perceived."

Like most respondents, Peter encouraged students to pursue whatever extracurricular activities they wanted, but he also felt responsible for helping clients understand how their claim to Hispanic identity would be perceived. Notably, although other IECs suggested that Asian American students could frame their stereotypical interests in ways that avoided negative associations with parental or cultural expectations, Peter suggested that Hispanic students could emphasize the cultural relevance of their activities.

In summary, IECs emphasized the importance of authenticity regardless of their students' racial identities. However, the expression of that authenticity varied according to race. The most prevalent advice addressed how Asian American students could anticipate and forestall potential stereotyping about their families' immigration stories and their own interests. For Black and Latinx students, IECs suggested that demonstrating authenticity could emphasize or buttress their racial and ethnic self-classification. In contrast, neither of these considerations was relevant for White students.
DISCUSSION

Interviews with IECs echo colleges' refrains that they simply want to get to know students' authentic selves. Indeed, "the search for authenticity has become a distinctive marker of contemporary life" (Anteby and Occhiuto 2020:1290), perceived as being rewarded by educational institutions (Lamont, Kaufman, and Moody 2000), the workplace (Chin 2020), and beyond.

Yet I find that for college applicants, being authentic is not necessarily the same as demonstrating authenticity. Demonstrating authenticity requires balancing being true to oneself with knowing what colleges are looking for - and having the skills to express it. This balance harkens to other ways individuals turn to professionals to manage their emotional lives. For example, Hochschild (2012:25), in her work on "the outsourced self," cites a "love coach" who advised online daters to "be 'real' but not 'too real,' distinguishing between off-putting and enticing real stories." Likewise, students can be real about their experiences but in a way that remains enticing. Thus, the signal of authenticity functions in some ways like ease, or "the true mark of privilege" (Khan 2011:112). Both ease and authenticity should appear effortless, but in reality, they are produced through hard work. That said, the effort of performing authenticity (Grazian 2018) does not necessarily make it any less sincere.

Interviews also demonstrate how class and race remain embedded in applicants' understanding of and ability to respond to the types of signals institutions value, even as those signals shift over time. Prior work from admission officers' standpoint suggests that socioeconomic privilege enables applicants to deliver the fine-grained information sought by universities (Stevens 2007). Here, I show one mechanism by which this relationship occurs, as one's ability to demonstrate "merit" continues to be moderated by access to resources like the customized advice and attention provided by IECs. Of course, applicants can learn how to demonstrate authenticity from sources other than IECs, like books, online resources, school-based counselors, or college-going peers. Yet it is precisely the students who already possess cultural and social capital who are more likely to know that such resources exist and to make use of them.8 Nevertheless, interviews also show that less resourced students, too, can accumulate the kind of cultural capital valued by colleges when exposed to knowledge about the appropriate signals to send (see also Jack 2019).
IECs' advice for students of color also demonstrates the extent to which White cultural capital continues to dominate higher education. For example, encouraging Asian American students to frame immigrant stories to be about themselves rather than about their families reinforces the devaluation of certain kinds of cultural capital - specifically, capital that may be valuable in communities of color - in White-centered contexts (Richards 2020; Yosso 2005). In addition, concerns about how Asian Americans might be stereotyped influenced IECs' advice on how these students could demonstrate the authenticity of their interests9 in a way that was not necessary for White students - thus reifying those stereotypes. IECs wanted to help Asian American students by nudging them to "smash" stereotypes (in one respondent's words) precisely because they believed that admissions officers did see too many Asian American applicants engaged in stereotypical activities. Interviewees believed this was more likely to be true at extremely selective universities where Asian Americans are overrepresented (Tran, Lee, and Huang 2019). How this approach to stereotyping may affect Asian American students' sense of racial identity is an open question.

My findings reveal some patterns in how IECs advise Black, Hispanic, and Indigenous students, but because most respondents worked with only a few of these students, they provided less detail about these groups. For example, some IECs considered immigration to be a cliché topic for Asian American students, but they did not describe racialized constraints on how Black students might write about Black identity. However, prior work suggests that Black students are also subject to controlling images, with expectations that they present as racially apolitical (i.e., instead of racial justice oriented; Thornhill 2019), that they share painful and traumatic stories to gain admissions (Waller-Bey 2020), and that they present stories of racism as learning experiences (Anon 2002). Future work can examine nuances in how consultants advise these groups, particularly for IECs who are also from underrepresented racial backgrounds, and the extent to which "merit" is associated with adherence to these controlling images (Cartwright 2022).

The study sample involves additional limitations. Most respondents were members of at least one professional association, but one estimate suggests that only about 20 percent of IECs have these affiliations (Sklarow 2018). The profession's persistent lack of regulatory or certification requirements has resulted in an absence of comprehensive data about IECs not affiliated with professional associations, making it difficult to draw conclusions about the field as a whole. Nevertheless, using a sample composed predominantly of association members has the benefit of being more likely to reflect institutionalized norms. In addition, most respondents worked as consultants full-time, meaning they were also more likely to interact with larger numbers of students.
Colleges and universities that attempt to overcome the potential biases of "objective" (i.e., quantifiable) criteria by incorporating subjective criteria into their admissions processes often do so with the intention of promoting diversity and equity. Although the present study does not assess success in terms of admissions outcomes, it does show how changes in institutions' definitions of merit are met with changes in applicants' strategies, which, in turn, reflect long-standing class and racial disparities. My findings suggest that to truly level the playing field, colleges need to ensure that they do not simply trade in one imperfect measure (standardized tests) for another (essays and recommendations). Class and racial inequality can be embedded in students' varying abilities to leverage subjective application components as a means of sending the "correct" signals to colleges. Policies on the use of subjective criteria must therefore be implemented and evaluated carefully.

7. The decision to self-classify as Hispanic on college applications is particularly prone to wide-ranging interpretations (Giebel, Alvero, and Pearman 2022; Huang 2023). Peter likewise did not perceive applicants who self-classified as Black as facing the same scrutiny.
8. This is an example of "specified ignorance" ("the express recognition of what is not yet known but needs to be known in order to lay the foundation for still more knowledge"; Merton 1987:1).
9. This finding is consistent with social psychology research showing that similar concerns lead people to engage in racial impression management (Swencionis, Dupree, and Fiske 2017).

Table 1. Demographic Characteristics of Independent Educational Consultant Respondents. Note: "Background in related professions" excludes volunteering, internships, or practicums; respondents may be counted in multiple categories. "Cost of services, package" was provided by 37 respondents. "Cost of services, hourly" was provided by 31 respondents. For respondents providing a range, I used the upper end to calculate middle 50%. Cost measures exclude one respondent who provided free services under a nonprofit model.

Table 2. Sample by Independent Educational Consultant (IEC) Race and Reported Clientele Race.
2023-11-01T15:17:00.284Z
2023-10-30T00:00:00.000
{ "year": 2024, "sha1": "1b3269ac4870c864765ea206c65ab4acc1b52bd5", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/00380407231202975", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "4565c184be5f5caf9d11ff8e730a1ccc48f67948", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [] }
234733022
pes2o/s2orc
v3-fos-license
Characterization Production Systems and Productivity Indices of Local Pigs of East Timor

Corresponding author: Graciano Soares Gomes, Ph.D., professor, research fields: swine production systems, monogastric nutrition and environmental impact.

The objective of this study was to evaluate the production system and productivity indices of local swine raised in a subsistence production system in East Timor. About 1,096 respondents were interviewed in villages across eight municipalities. In each village, 10% of breeders were interviewed as respondents, selected by simple random sampling, and the interviews were based on established questionnaires. It was observed that about 80%-90% of total respondents still used a subsistence production system. The result of descriptive statistical analysis showed that the average number of piglets per litter was 4-6 and the weight of piglets at birth was 0.97 ± 0.22 kg. The weaning age and weaning weight of piglets were 3.94 ± 0.72 months and 5.56 ± 0.88 kg, respectively. The age at first breeding of gilts was 8-10 months, and the farrowing interval was 6-12 months. The productive period of females was 3-12 years, and the mortality rate of piglets was 0.17% to 1% per production period. Thus it was concluded that the subsistence production system could affect the productivity level of local pigs.

Introduction

For decades, pig production has been regarded as a marginal or subsistence activity in Timor-Leste; it is normally of low productivity and carried out by small farms in rural areas across the territory. In addition, it is an excellent instrument for internalizing development, making small properties viable and securing labor in the field. The most widely used pig breeding system is subsistence breeding, a form of extractive husbandry with no concern for animal productivity and no technical control of the breeding activity [1]. Animals at different stages of production remain together and compete among themselves for the same food offered by the producers. The system is characteristic of primitive husbandry, without the use of appropriate technologies, and therefore presents low levels of productivity. This system is used by farmers in rural areas who have never received any technical guidance on animal husbandry and who keep pigs only as an activity of secondary family importance or as a hobby. In this way, the breeding is intended for the supply of meat and fat for subsistence and the surplus is traded regionally; that is, the animals are only raised to meet the basic needs of the family, to fulfill social obligations, to be sold in case of need to support the family economy, and to be consumed on special occasions [2].

Based on the results of the National Statistical Census (NSC) in 2010, the country had a herd of 330,435 pigs distributed over 13 municipalities with an average density of 2-4 pigs per breeding establishment. However, the NSC of 2015 [3] showed that pig production increased by about 26.85%, that is, to a total herd of 419,169 animals over a period of five years, with a national average density of 2-6 animals per establishment. On the other hand, the breeders in the eight municipalities still practice a traditional swine culture consisting of local breeds with low production performance.
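For reference, the quoted growth rate follows directly from the two census figures given above:

$$\frac{419{,}169 - 330{,}435}{330{,}435} \;=\; \frac{88{,}734}{330{,}435} \;\approx\; 0.2685,$$

i.e., an increase of roughly 26.85% over the five-year period, consistent with the value reported above.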
On the one hand, traditional production systems make it possible to respond adequately to production needs by taking advantage of local resources and native breeds. For this activity to be viable and profitable for producers, it is necessary to introduce modern or advanced production systems in order to improve animal production [1]. The use of technologies may be the best strategy to make and keep producers competitive, thus preventing them from abandoning the activity and, subsequently, their holdings. However, the technology has to be transmitted to the producers in a rational, organized way, preserving the native breeds, identifying the causes of the animals' low performance and increasing their productivity [4]. Therefore, the existence of productivity targets for the herd is an essential element for monitoring the performance of the system and for diagnosing problems in animal production. This work aimed to evaluate the productivity indices and to characterize the local pig production systems in the research area.

Materials and Methods

The study was carried out in eight municipalities in East Timor: two located in the southern part of the country, three in the southwest and three in the west. In each municipality, 10% of the total registered producers according to the NSC of 2015 were selected as respondents using simple random sampling. Thus, about 1,096 producers were interviewed in this study. The variables observed in this study were the type of production system and pig productivity indices such as the number of piglets (litter size), number of piglets weaned per farrowing, mortality rate of piglets, weight of piglets at birth (g), weight of piglets at weaning (kg), age of piglets at weaning (months), productive period of sows (years), number of piglets weaned per sow per birth, farrowing interval (months), and age of sows at first production (months/years). Data were collected from May to November 2018. During the visits, a semi-structured questionnaire was applied to obtain the data. Observations and interviews were carried out by the same group with the same system to avoid misinterpretations. The data collected were subjected to quantitative statistical analysis according to Sampurna and Nindhia [5].

Results and Discussion

The statistical analysis of the data referring to the characterization of production systems and animal productivity indices is shown in Tables 1 and 2. It was found that 93.57% of the interviewees practiced an extensive subsistence system with some assistance. The animals are reared free in the common areas and fed twice a day without concern for the quantity and quality of the food provided. In this breeding system, all animals from different stages of production competed together for the same food provided, according to their agility and strength. On the other hand, the result of the statistical analysis revealed that the local pigs bred in an extensive production system, without concern for productivity and without technical control, have low levels of productivity (Tables 1 and 2). In addition, despite the breeding and socioeconomic importance of the activity, breeders know little about the characterization of production systems with applied technologies to improve animal productivity.
The data presented in Table 1 show that the average number of piglets per sow is 5.76 ± 1.61. The average weight at birth is 0.97 ± 0.22 kg. According to Filha et al. [2], the weight of local piglets varies from 0.70 kg to 1.30 kg or more, according to their breed. It is also possible to achieve five to six piglets per sow. In addition, it was observed that the weaning age of the piglets varied between two months and six months, with weaning weights from 3 kg to 15 kg. The results obtained in this study were considered ideal according to Gomes et al. [6], who report that the productivity of the extensive subsistence system presents five to six piglets per sow per year, three to five piglets weaned, and less than one farrowing per sow per year. The litter size at weaning is influenced by the number of piglets born alive, the age of the sow at farrowing and/or the farrowing order, and the time of birth [7].

Regarding the age at which gilts first enter production, it was found that in extensive subsistence production systems the animals entered the production period at an average age of 11.99 ± 1.66 months. According to Sobestiansky et al. [8], the sow should start breeding when it reaches 10-12 months of age and is developing well. The sexual maturity of gilts occurs between 5.5 months and 6.5 months of age, with some variation depending on genetics, nutrition, management and the environment where they are housed [9]. To increase the productivity indices of the breeding stock, it is necessary to use males and females of high genetic value [1]. The average farrowing interval obtained in this study ranged from six to 12 months. This average is considered ideal by the literature, which reports that pigs raised in subsistence systems farrow less than once per sow per year [6].

Conclusions

The production system used in all villages was the extensive subsistence system, with feeding twice a day without consideration of the quantity and quality of the food provided. The values of the productivity indices of the animals obtained in this study are still considered low, and the age at which gilts enter the production period is late. The observations showed that, although producers recognize the productive and, in part, socioeconomic importance of the activity, they still know very little about the characterization of production systems with applied technologies to improve animal productivity, especially the management of production, reproduction, animal health and animal welfare.
2020-12-24T09:12:54.059Z
2020-06-28T00:00:00.000
{ "year": 2020, "sha1": "e859c648ce3b77ba143b255f6ceff0387fdfdda0", "oa_license": null, "oa_url": "https://doi.org/10.17265/2161-6256/2020.03.005", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "35987de59c6db06fa8e6b6e2dbc60ba6368b3f8c", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
258219035
pes2o/s2orc
v3-fos-license
neuroAIx-Framework: design of future neuroscience simulation systems exhibiting execution of the cortical microcircuit model 20× faster than biological real-time

Introduction

Research in the field of computational neuroscience relies on highly capable simulation platforms. With real-time capabilities surpassed for established models like the cortical microcircuit, it is time to conceive next-generation systems: neuroscience simulators providing significant acceleration, even for larger networks with natural density, biologically plausible multi-compartment models and the modeling of long-term and structural plasticity.

Methods

Stressing the need for agility to adapt to new concepts or findings in the domain of neuroscience, we have developed the neuroAIx-Framework consisting of an empirical modeling tool, a virtual prototype, and a cluster of FPGA boards. This framework is designed to support and accelerate the continuous development of such platforms driven by new insights in neuroscience.

Results

Based on design space explorations using this framework, we devised and realized an FPGA cluster consisting of 35 NetFPGA SUME boards.

Discussion

This system functions as an evaluation platform for our framework. At the same time, it resulted in a fully deterministic neuroscience simulation system surpassing the state of the art in both performance and energy efficiency. It is capable of simulating the microcircuit with 20× acceleration compared to biological real-time and achieves an energy efficiency of 48 nJ per synaptic event.

1. Introduction

Computational neuroscience is a very broad and multi-faceted research field. Starting at the molecular level up to the modeling of human behavior, a very wide scale in time and space is spanned, with no technical system capable of simulating the complete stack. While tremendous progress has been made in recent years by the community, the question of how the brain transforms information is still a puzzle. To gain deeper insights, the simulation of neuronal network models of natural density is considered essential.

Today, there exist various neuroscience simulators targeting different resolutions and abstraction levels. Examples include dedicated neuromorphic hardware systems such as SpiNNaker, BrainScaleS (Schemmel et al., 2008), Bluehive (Moore et al., 2012) or Loihi (Davies et al., 2018), and software frameworks like NEST (Gewaltig and Diesmann, 2007) running on conventional CPU-based systems or GeNN (Yavuz et al., 2016) and NeuronGPU (Golosio et al., 2021) running on GPU-based systems. Recent iterations of these systems target a broader range of tasks, for example in the area of Machine Learning, and feature higher degrees of flexibility and efficiency (Mayr et al., 2019; Billaudelle et al., 2020). In addition, recent dedicated systems exist that target the simulation at higher degrees of abstraction (Wang et al., 2018) or aim at solving Machine Learning tasks (Panchapakesan et al., 2022). On one side, the mentioned variety together with advances in computational capabilities and the development of simulator-independent model description languages (e.g., PyNN by Davison et al., 2009) pushed the domain of computational neuroscience to study neural network models of increasing complexity. These include large-scale models such as the cortical mesocircuit and multi-area model (van Albada et al., 2020).
On the other side, spiking networks are gaining traction in industry to address technical problems as targeted by the Loihi platform (Dey and Dimitrov, 2021). Among the various neuroscience models, the cortical microcircuit (Potjans and Diesmann, 2014) has become a widely used benchmark to evaluate simulators (Knight and Nowotny, 2018; van Albada et al., 2018; Rhodes et al., 2019; Knight et al., 2021; Heittmann et al., 2022; Kurth et al., 2022), driving novel systems toward innovative designs and higher performance. For instance, an initial mapping on SpiNNaker was operating 20× slower than biological real-time. However, deeper understanding including insights into the learning processes of the human brain requires the simulation of long-term neurodynamical processes, i.e., simulations at higher speed. Along this line, solutions based on GPU-enhanced simulators reduced the slowdown to 1.8× (here and in the following: with regards to biological real-time) (Knight and Nowotny, 2018). Shortly thereafter, the first real-time simulation of the cortical microcircuit was run on SpiNNaker (Rhodes et al., 2019). To the best of our knowledge, the fastest realization uses the IBM Neural Supercomputer achieving an acceleration of 4.06× (Heittmann et al., 2022).

The major challenge in designing such a system lies within the required flexibility to accommodate new insights from the neuroscience domain that change the specification and requirements. Some years back, simulation time was commonly progressing in discrete steps of 1 ms whereas nowadays, 0.1 ms are used to better capture the short delays along local axons (Potjans and Diesmann, 2014). Similarly, a supported number of 1,000 synapses per neuron was assumed to be sufficient for many earlier systems. Recent insights suggest the average fan-out to be higher, causing major performance losses on these systems. The required degree of biological realism is still under discussion, including seemingly simple questions such as required numeric precision or suitable compartment size of dendrites. The resulting volatility in biological models must be taken into account in the design of new systems. More complex questions relate to the modeling of plasticity. Nowadays, three-factor rules (Kuśmierz et al., 2017) that modulate simple spike-timing-based learning mechanisms are considered. These advances pose new requirements on the computation and communication capabilities. To account for this and other developments, computational neuroscience requires a next-generation system to help to efficiently gain new insights. In turn, these insights will redefine the technical specification for this system. This chicken-and-egg problem is best addressed in an evolutionary process relying on reciprocal advances on both sides.

Figure 1. The proposed development framework for accelerated neuroscience simulations. (A) The static and dynamic simulators are jointly used for exploring the design space. We calibrate the dynamic simulator according to the performance analysis done by the static simulator and then update our analytical model considering the observed dynamic behaviors. (B) The dynamic model will be validated and calibrated by the FPGA cluster. Afterward, the bottlenecks of the design can be studied by both simulation platforms available here. (C) The FPGA cluster is used for both validating the dynamic simulation and fine-tuning its parameters. For this, it can perform accelerated simulations of biological neural networks.
In this ever-changing environment, we derive the need for a flexible system, capable of performing simulations of large-scale neuronal networks observable down to membrane potentials. Its complexity has to reach realistic densities, simulated in an accelerated fashion to also capture long-term effects, e.g., regarding synaptic plasticity. Furthermore, the system behavior needs to be fully deterministic in order to reproduce results, operate with intermittent system states and precisely capture the impact of parameter variations in the neuroscience model (as opposed to observing irrelevant variations relating to the simulation environment). Finally, we target a scalable system that is future-proof toward more complex or larger models. To achieve appropriate speed-up at the same time, the problem needs to be distributed over many compute nodes to overcome computation and routing bottlenecks, among others.

In this work, we present our neuroAIx-Framework which is suited to evaluate and benchmark prospective system architectures in a highly flexible and performant manner. It consists of three pillars as illustrated in Figure 1: (1) an empirical modeling tool (static simulator) for fast design space exploration at a coarse resolution, (2) a virtual prototyping platform (dynamic simulator) for accurate performance estimations, and (3) a cluster of interconnected FPGA boards (FPGA cluster) for evaluation and simulator calibration. In the following sections, the workflow using this framework is illustrated with various examples. In particular, we focus on steps B and C by realizing the FPGA cluster, which should not be considered a fixed, final solution to the quest of finding the future neuroscience simulation platform. Instead, the cluster is an evaluation platform that functions as a proof-of-concept for this workflow. We use it to refine and calibrate the first two pillars such that they provide more trustworthy quantitative results in future explorations. While the third pillar is mainly intended as an emulator of a future system, it is fully functional and already speeds up neuroscience simulations. More specifically, we are able to simulate the cortical microcircuit model at an acceleration of 20×, which is, to the best of our knowledge, the fastest solution so far.

The proposed framework can be utilized to constrain the design space of neuroscience simulation systems, identify the costs and bottlenecks, explore solutions and validate ideas. In our previous work, we studied suitable communication architectures, a major bottleneck in accelerated simulation of large-scale networks, utilizing the static and dynamic simulators (Kauth et al., 2020). Similarly, Kleijnen et al. (2022) focused on simulation regarding heterogeneous neural networks and corresponding mapping algorithms. However, this work extends this to the characterization of all relevant building blocks that are necessary for a dedicated neuroscience simulation system, and sketches their implementation in the presented evaluation platform. We believe that the fast-prototyping feature of our method is an essential aspect to close the gap between system design and the fast-moving domain of computational neuroscience, leading to even faster progress in both domains in the future.

To summarize, this paper presents the realization of a coherent framework to explore future neuroscience simulation systems. It allows to:
Perform fast system exploration and precisely analyze requirements of larger scale models emulating system behavior in a cycle-accurate fashion, 2. Simulate the 1 mm 2 cortical microcircuit model at an acceleration of more than 20×, and 3. Support computational neuroscience research, aiding the evaluation of new neuron models, novel plasticity rules as well as parameter sweeps in an accelerated fashion. This contribution intends to establish a basis for future interaction between the neuroscience community and engineers working toward next-generation large-scale neuromorphic accelerators. . Materials and methods . . Static simulator Expressing a platform's performance as function of model and system parameters in an analytical form leads to overly complex equations, especially considering the stochastic instantiation of synapses or spikes. As our goal is to build a flexible and, importantly, scalable platform, this pillar focuses on guiding the design of a suitable communication architecture. Hence, we developed a C++-based numerical simulator to extrapolate system performance in a highly efficient manner. It is used to explore communication architectures, not yet accounting for other bottlenecks such as memory and computation. The simulation tool operates on the assumption of a homogeneous network topology. One arbitrary node is used as starting point in the calculations. According to a welldefined average spike rate or based on the evaluation of some existing simulation, the selected nodes emit spikes with a specific probability. These spikes travel to a randomly distributed set of target neurons. Based on an evaluation of the distance each packet has to traverse, a numerical solver calculates bandwidth requirements and speed-up as bounded by the communication. This empirical approach has been cross-validated with examples that are simple enough to be expressed in an analytical form. As one example, the bandwidth requirement in a broadcast approach for a mesh-like network topology is driven by the number of neurons per node, the firing rate, the system's acceleration factor, the number of target nodes and the size of each spike message (Kauth et al., 2020). The fast collection of quantitative data using this approach enables quick architectural exploration and early pruning of unsuitable directions. For example, the required number of network hops to deliver a package severely constrains the choices of suitable network topologies and routing schemes. The modeling of the most promising candidates can be refined while larger sample sets increase the confidence in the numeric results. This approach considers average system loads only, i.e. there is no notion of queuing, no unbalanced distribution of tasks nor any other dynamics considered in the evaluation of the system. Hence, we call this pillar static simulator. While it provides a much simplified assessment of the capabilities of different communication architectures, its key benefit is its speed-networks from thousands to millions of compute nodes can be evaluated in seconds, enabling the exploration of a vast design space. . . Dynamic simulator As many system architecture iterations will be necessary due to the mentioned chicken-and-egg problem, rapid prototyping is essential. Hence, it is necessary to virtually evaluate the prospective system architectures before finalizing the specification or even starting concrete design activities. 
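As a rough illustration of the kind of estimate the static simulator produces, the sketch below evaluates the broadcast bandwidth bound described above for a mesh-like topology. The numeric values are illustrative assumptions loosely based on the microcircuit setup, not calibrated parameters of our tool:

# Coarse static estimate of the spike bandwidth a single node must source when
# every spike is broadcast in a mesh-like topology. Average loads only: no
# queuing, congestion or unbalanced task distribution is modeled.

def broadcast_bandwidth_gbps(neurons_per_node, firing_rate_hz, acceleration,
                             target_nodes, spike_msg_bits):
    spikes_per_wallclock_s = neurons_per_node * firing_rate_hz * acceleration
    return spikes_per_wallclock_s * target_nodes * spike_msg_bits / 1e9

# Example (assumed values): ~2,205 neurons per node firing at 4 Hz, 20x
# acceleration, broadcast to 34 other nodes, 128 bit per spike message.
print(f"{broadcast_bandwidth_gbps(2205, 4.0, 20.0, 34, 128):.2f} Gbit/s")  # ~0.77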
For this, we developed a generic virtual prototype modeling architectural components like memories, routers and schedulers at varying levels of accuracy. These range from coarse behavioral models down to bit and cycle-true functional descriptions. It is an event-driven simulation model that emulates hardware platforms to capture their dynamic behaviors. In contrast to the static simulator, this dynamic simulator incorporates dynamic behavior such as congestion, and not only focuses on communication aspects but also on memory and computation. The dynamic simulator is written in SystemC, a C++ library used to model functional aspects of hardware systems with a high-level software language. The architectural components of the hardware are encapsulated in corresponding modules-SystemC's basic building blocks-connected to each other using ports. In a bottom-up perspective, the core module is the compute node that updates the state variables during each timestep. It aggregates all computations related to the neurons hosted on a specific node in the system. Details of the actual computation are omitted. Instead, only the computation latency is used to capture performance capabilities as function of neuronal model complexity. For this, the module absorbs spikes and generates new ones according to a predefined statistical distribution. Furthermore, each node runs its own synchronization process which is necessary for cycle-accurate . /fncom. . behavior. Details on the synchronization process will be elaborated in Section 2.3.1.5. In the mid-level, the network module has been designed to connect the instantiated nodes according to the specified topology by cable modules. These cables are implemented as SystemC channels. Relevant system properties such as transmission delay and bandwidth are captured by specifying the cable length, transceiver delay and bandwidth. The top-level module covers the user interface, starts the simulation, sets the configuration and calculates statistics at the end of the simulation. Design parameters can be specified in a configuration file at run-time, organized in four categories: biological parameters (e.g., firing rate, number of neurons, connectivity distribution), simulation parameters (e.g., number of simulation steps), interconnect (e.g., topology, routing, bandwidth) and hardware architecture parameters (e.g., number of neurons per node, number of workers, buffer depths, pipeline stages). The modular and hierarchical design of the dynamic simulator allows to test varying scenarios, develop different network topologies or even communication schemes without changing the aforementioned modules, as it already contains all necessary building blocks. Results of dynamic simulation are fed back to the static simulator to improve accuracy by refining its empirical models. At the same time, the dynamic simulator itself can be refined using measurement results from the physical system. . . FPGA cluster As key component, the FPGA cluster is a fully operational platform capable of running large-scale neural network simulations. At the same time, the framework incorporates a high degree of flexibility to evaluate alternative design choices. On one side, these are triggered by the exploration and profiling within the different pillars. On the other side, they emanate from new learnings in neuroscience and the respective research in modeling biological processes of the brain. 
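The event-driven core of the dynamic simulator described above can be illustrated with a deliberately small Python analog. The real implementation is written in SystemC and additionally models synchronization, buffering, congestion and memory; all names and numbers below are our own simplifications, not parts of that implementation:

import heapq
import random

# Toy event-driven model in the spirit of the dynamic simulator: "compute"
# events model the neuron update of a node, "spike" events model messages
# arriving after a cable delay. Only latencies are captured.

COMPUTE_LATENCY_NS = 1500   # assumed time to update all neurons on one node
LINK_LATENCY_NS = 800       # assumed cable plus transceiver delay
SPIKE_PROB = 0.3            # assumed probability of a node emitting a spike per step
N_NODES, N_STEPS = 4, 100

events = [(0, 0, node, "compute") for node in range(N_NODES)]
heapq.heapify(events)
delivered, finish_time = 0, 0

while events:
    t, step, node, kind = heapq.heappop(events)
    finish_time = max(finish_time, t)
    if kind == "compute":
        t_done = t + COMPUTE_LATENCY_NS
        if random.random() < SPIKE_PROB:          # node emits a spike ...
            for target in range(N_NODES):
                if target != node:                # ... and broadcasts it
                    heapq.heappush(events, (t_done + LINK_LATENCY_NS, step, target, "spike"))
        if step + 1 < N_STEPS:                    # schedule the next timestep
            heapq.heappush(events, (t_done, step + 1, node, "compute"))
    else:
        delivered += 1                            # arrival; a real model would buffer it

print(f"{N_STEPS} steps took {finish_time / 1e3:.1f} us of model time, {delivered} deliveries")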
Our objective is to conceive and realize a platform that provides both the necessary flexibility and adequate performance to evaluate meaningful test cases. In the following, we will therefore elaborate general design principles and hardware concepts needed to implement such a flexible platform. For each component, we then sketch certain design decisions employed in our evaluation system. Measurement results on this system are used later to calibrate the dynamic and static simulators, which then in turn allow for more realistic design space explorations. To this end, we designed a cluster of 35 FPGA boards. field-programmable gate arrays (FPGAs), in contrast to dedicated application-specific integrated circuits (ASICs), allow for rapid prototyping, while at the same time offering a wider flexibility and potential performance than GPU-based implementations. As the basis of this cluster we chose the NetFPGA SUME board (Zilberman et al., 2014) as it provides a high number of transceivers with a theoretical total bandwidth of over 100 Gbps and two memory channels to the 8 GB DDR3 with direct connections to the programmable logic. In the current setup, 4 SFP+ ports with each 6.25 Gbps and 10 SATA ports with each 6 Gbps are used for interconnecting the FPGA boards and host communication. Eight of these SATA ports are made available using a custom PCIe breakout board. Although not all of these connections are required for the 35 node setup, they are well suited for evaluating different network topologies. For an even larger number of nodes, reconfigurable switching solutions are recommended, see e.g., Meyer et al. (2022). Figure 2A shows a picture of the connected cluster. In any case, the developed environment is not dependent on any specific FPGA board, number of nodes, port count or network topology. This flexibility is considered a key advantage over fixed neuromorphic simulators in the context of exploring future hardware architectures. Apparently, the added flexibility leaves room for further improvements in performance once a suitable architecture has been identified. . . . Node architecture Due to the homogeneous network architecture of our system, the neuron mapping to the individual nodes is irrelevant for neural networks with roughly evenly-distributed connectivity. As this is considered realistic (Potjans and Diesmann, 2014), our current implementation distributes all neurons equally in a round-robin fashion. The individual nodes handle all necessary processing related to the local neurons and communication of spikes. The simulation is time-driven (as opposed to event-driven)-each neuron's state variables are updated in successive discrete timesteps, often 0.1 ms, set by the minimal synaptic latency (Brette et al., 2007). In the following sub-section, we present the components of a neuromorphic accelerator architecture. A high-level overview is depicted in Figure 3. . . . . Workers The actual computation of the neuronal dynamics is scheduled to the available workers at the node. Firstly, this computation requires a fast memory for the state variables of each neuron available at the worker. We opted for on-chip BRAM-blocks of SRAM-since the required capacity is low while block RAMs (BRAMs) offer a single-cycle access latency. Secondly, the worker contains the implementation of a neuron model. 
This either requires the computation of a simple matrix multiplication for analytically solvable models such as LIF neurons with CUBA synapses (Rotter and Diesmann, 1999) or entails the execution of a numeric ODE solver to determine the solution for more complex neuron models such as the Izhikevich model (Izhikevich, 2003). As the neuron model processing can be executed in parallel, increasing the number of workers provides direct speed-up here. The interface to these workers and their memories requires only input and output streams of spikes, making the neuron model easily interchangeable. The state update of a neuron can result in the creation of an action potential. In that case, a corresponding network message is created and forwarded to a router. Conversely, the router also passes incoming spikes to the workers. Depending on the time of origin and the synaptic delay, incoming spike messages are to be considered in the computation in a specific timestep. Since this timestep does not necessarily correspond to the current one, spike messages have to be stored temporarily. The impact of multiple spikes onto the same target neuron in the same timestep can typically be lumped into one synaptic input (Rotter and Diesmann, 1999). This offers the possibility of not having to buffer an arbitrarily large number of incoming messages. Instead, only a single synaptic input for each neuron and any future timestep has to be stored. Since the number of future timesteps which can contain such synaptic inputs is limited by the largest synaptic delay present in the neural network, it is usually implemented as a ring buffer. Synaptic inputs are forwarded by the purely combinational local router to the target ring buffers in a round-robin fashion. In turn, the worker reads the synaptic inputs to all neurons for the current timestep sequentially from the ring buffer. As new spikes can be captured in the ring buffer asynchronously at any time, the ring buffer and the worker are decoupled in terms of congestion. In our system, each worker can calculate up to 255 LIF neurons with CUBA synapses, with 10 of these workers being instantiated per FPGA. The number of neurons is largely limited by the available BRAM which is used for the ring buffers. Thereby, models with up to 89,600 neurons can be handled by the cluster of 35 FPGAs. While more workers, and hence neurons, could potentially fit onto the FPGA with further optimization, this is already sufficient for simulating the cortical microcircuit model, and spending effort on a highly specific and optimized realization is not in the focus of the exploration. Running at a frequency of 189.383 MHz (chosen to be synchronous to the DRAM memory to minimize clock domain crossings) and with 30 pipeline stages, the neuron dynamics computation of a fully occupied node takes around (255 + 30)/189.383 MHz ≈ 1.5 µs. In this configuration, the corresponding ring buffers can store incoming spikes of up to 64 future timesteps for excitatory and 32 future timesteps for inhibitory synapses, as determined by the maximum synaptic delays in the cortical microcircuit model.

2.3.1.2. Synapse lookup
Each neuron can be connected to many thousands of other neurons. Hence, each node requires a mechanism to assign spikes of presynaptic to postsynaptic neurons. This assignment is stored in the form of so-called synaptic lists and contains information about the assignment by means of unique neuron identifiers as well as the weight and delay of all related synapses.
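To make the worker's task sketched above concrete, the following listing shows the per-timestep update for a LIF neuron with exponential current-based (CUBA) synapses using exact exponential integration, together with the ring buffer that accumulates delayed synaptic input. Constants and names are illustrative assumptions loosely following the microcircuit model; the hardware itself works with fixed-point accumulation and 32 bit floating-point state updates:

import math

# Illustrative per-worker update for a LIF neuron with exponential CUBA synapses
# using closed-form propagators (exact exponential integration) and a per-neuron
# ring buffer for delayed synaptic input. All constants are assumed example values.

H      = 0.1e-3    # timestep [s]
TAU_M  = 10e-3     # membrane time constant [s]
TAU_S  = 0.5e-3    # synaptic time constant [s]
C_M    = 250e-12   # membrane capacitance [F]
V_TH   = 15e-3     # spike threshold above resting potential [V]
T_REF  = 2e-3      # refractory period [s]
MAX_DELAY_STEPS = 64

# Propagators are precomputed once, since the timestep is fixed.
P_II = math.exp(-H / TAU_S)
P_VV = math.exp(-H / TAU_M)
P_VI = TAU_M * TAU_S / (C_M * (TAU_M - TAU_S)) * (math.exp(-H / TAU_M) - math.exp(-H / TAU_S))

class Neuron:
    def __init__(self):
        self.v = 0.0                               # membrane potential relative to rest [V]
        self.i_syn = 0.0                           # synaptic current [A]
        self.ref_steps = 0                         # remaining refractory steps
        self.ring = [0.0] * MAX_DELAY_STEPS        # buffered synaptic input per future step

    def receive(self, weight_a, delay_steps, cur_step):
        # Called by the local router; delay_steps must be < MAX_DELAY_STEPS.
        self.ring[(cur_step + delay_steps) % MAX_DELAY_STEPS] += weight_a

    def update(self, cur_step):
        slot = cur_step % MAX_DELAY_STEPS
        self.i_syn = self.i_syn * P_II + self.ring[slot]   # inject buffered input
        self.ring[slot] = 0.0
        if self.ref_steps > 0:                             # clamp during refractoriness
            self.ref_steps -= 1
            self.v = 0.0
            return False
        self.v = self.v * P_VV + self.i_syn * P_VI
        if self.v >= V_TH:                                 # threshold crossing
            self.v = 0.0
            self.ref_steps = int(T_REF / H)
            return True                                    # emit a spike message
        return False

Because the propagators depend only on the fixed timestep, the per-neuron update reduces to a handful of multiply-accumulate operations, which is why additional workers translate directly into higher throughput.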
Due to the large number of synapses in natural-density neural networks, these lists must typically be stored in an off-chip memory such as a DRAM. The point in the system where these lists are accessed (the lookup) depends on the used casting scheme. If the network operates in broadcast mode, the presynaptic neuron ID of each generated spike is simply distributed over the entire network. The receiving nodes then lookup the synaptic information (weights, delays, targets) for all synapses between the presynaptic neuron and all local postsynaptic neurons for every incoming spike message. On the other hand, when using unicast the lookup is performed directly on the spike-emitting worker, prior to sending the spike message to the outgoing router. In this scheme, all postsynaptic neurons are addressed individually with a unicast message. In a system with flexibility to support both casting schemes, the hardware performing this lookup should be added both before and behind the workers and then be bypassed depending on the used casting scheme, as shown in Figure 3. Our lookup module was designed with this flexibility in mind, as it supports varying the number of prefetched words and parallel memory accesses at runtime. This allows us to tune it based on bottleneck measurements and simulation results from the dynamic simulator. . . . . Router In the present design, each FPGA node contains a dedicated router unit. Given the application requirements, our router is designed for ultra-low latency communications and high bandwidth of more than 12 Gbit/s per port while all ports can operate fully in parallel. Since the cluster is a development platform, it is also essential to be topology-agnostic, e.g., be flexible in terms of number of inputs and outputs ports. Furthermore, our router currently supports three different modes: emulation, bottleneck measurements and debugging. The last two modes follow simple routing algorithms, for example, forwarding packets in a certain direction. The emulation mode supports both unicasting and broadcasting, a variety of routing algorithms [e.g., best neighbor, windmill and xy routing (Kauth et al., 2020)], and various mesh-and tree-based topologies. Based on the packet type (synchronization packets, configuration packets or spike packets), the router redirects packets to the correct target(s). In all cases, a prioritized round-robin arbiter takes care of time-critical messages first. . . . . Memory A significant amount of data has to be sent to and received from such a simulation platform, comprising configuration data, dynamic neuron state information and result data (spikes, voltage traces, etc.). The on-chip BRAM is typically insufficient to hold all of this. Larger off-chip storage (e.g., DRAM) offers plenty of storage capacity but introduces potential throughput and latency bottlenecks. In the present realization, the static and dynamic state information of the neuron models is kept in the BRAM providing a reasonable trade-off between performance and storage requirement. With typically less than 1 kB needed for each neuron's state and ring buffer, the available 6.77 MB BRAM on our system can already accomodate many thousands of neurons. The memory requirement for the connectome, however, is considerably higher. In the case of the microcircuit with about 300 million synapses, several gigabytes of data are anticipated. Although this amount of data is distributed across all nodes, the sparse adjacencies require a sophisticated form of organization. 
A simple approach is to store the data in a single connection list per neuron. Since these lists have different lengths, base addresses must additionally be stored in the form of a lookup table. An alternative way is to pad the lists up to a common length. Although this method wastes some memory, it reduces latency by avoiding a second non-linear memory access. In our case, each synapse consists of a 16 bit target neuron identifier, an 8 bit delay, and a 32 bit fixed-point weight, resulting in 2.1 GB for the microcircuit connectome. This connectome would already fit into the DRAM of a single node, while it would not even come close to fit into the BRAM of the complete cluster. Since we are targeting a distributed system, the connectome is split over all nodes and stored as padded synaptic lists with common length. On the one hand, this padding causes synaptic lists to occupy 200 MB of DRAM memory on each node. On the other hand, the memory address of any synaptic list can be calculated (using source neuron identifier and fixed list length). On every incoming spike, a DRAM read access is started to retrieve the corresponding synaptic data. The amount of data requested depends on the neuronal fan-out. Since the use of external memory not only limits the size of the synaptic lists, but also drastically restricts the achievable simulation speed due to bandwidth limitation and comparatively high latency, any approach completely eliminating time-critical external memory accesses would be preferable. For instance, algebraic definitions of the connectivity pattern would allow to compute synapse configuration data on-the-fly, which reduces memory requirements significantly (Roth et al., 1995). When combined with online computation of other parameters, such as synaptic efficacy or axonal delay, based on deterministic random distributions, external memory is no longer a critical resource (Wang et al., 2014). However, as this precludes the implementation of neuronal plasticity (where individual weights will have to be adjusted and new synapses added) and the simulation of specific connectomes (e.g., extracted from actual biological tissue) we excluded this method in the current exploration. . . . . Synchronization Due to fluctuating loads, caused by local spike bursts or coincidentally increased routing via certain nodes, the individual nodes of the network can reach locally varying processing latencies. A commonly used scheme to prevent the nodes from drifting apart are global barrier messages (e.g. Heittmann et al., 2022). Global synchronization, however, causes the entire system to run at the speed of the slowest node. Therefore, we use local synchronization in our system. Instead of forcing all nodes in the network to the same timestep, we limit the maximum time difference of directly neighboring nodes to the minimum synaptic delay across all synapses. This enables compensation of fluctuations in the simulation speed, maximizing the overall speed of the system. In our synchronization scheme, a synchronization message is sent to each neighbor after each computation of the neuronal dynamics (see Figure 4). This message is always sent after the last generated spike for the corresponding direction. When a node receives such a message, it can assume that it has received all spikes for this timestep from the corresponding neighbor. 
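The padded synaptic-list layout described above boils down to a fixed stride per source neuron; a small sketch of the layout and the resulting address calculation (the field widths follow the text, while the padded list length, the packing to 8 B and the helper names are assumptions for illustration):

import struct

# Padded synaptic lists: every presynaptic neuron gets a fixed-length list, so
# the DRAM address of a list follows directly from the source neuron ID without
# a second, non-linear lookup. Fields: 16 bit target ID, 8 bit delay, 32 bit
# fixed-point weight; here padded to 8 B per synapse.

SYNAPSE_FMT   = "<HBxi"                     # target id, delay, pad byte, weight
SYNAPSE_BYTES = struct.calcsize(SYNAPSE_FMT)
LIST_LEN      = 128                         # assumed padded synapses per source neuron
LIST_BYTES    = LIST_LEN * SYNAPSE_BYTES

def list_address(source_neuron_id, base=0):
    """Start address of the synaptic list of a presynaptic neuron in DRAM."""
    return base + source_neuron_id * LIST_BYTES

def pack_synapse(target_id, delay_steps, weight_fixed):
    return struct.pack(SYNAPSE_FMT, target_id, delay_steps, weight_fixed)

# On an incoming spike of neuron 42, a single burst read of LIST_BYTES bytes
# starting at list_address(42) returns all local targets, delays and weights.
print(list_address(42), LIST_BYTES)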
Since a computation is started only after all synchronization messages have been received and either forwarded or consumed, it can be guaranteed that each spike message has been forwarded by at least one node. However, the topology we use has a worst case latency of two hops (topology and routing algorithm will be derived and explained in Section 3), while the microcircuit has some synapses with a delay of only one timestep. Therefore, a second synchronization per timestep is used, which, however, is not initiated after completion of the computation, but after receiving the first synchronization message of all neighbors. Accordingly, the next computation is only started after all second synchronization messages have been received. In general, the number of synchronizations per timestep can be set to the worst case number of network hops required by a message, as each synchronization guarantees that a message will travel at least one hop per synchronization step. This procedure is sufficient for timely arrival of all spikes but not always necessary depending on network bandwidth and load. Performing multiple synchronizations increases the latency between timesteps and consequently slows down the simulation. In our implementation, this can be avoided by manually selecting a smaller number of synchronizations and monitoring spikes arriving too late via corresponding error registers. This highlights the trade-off between correctness and reproducibility on the one hand, and speed-up on the other. . . . Interconnect The design of an efficient interconnect architecture both in between nodes and from nodes to host systems is crucial. While the former can limit weak and strong scalability, the latter is generally considered a challenging task for neuromorphic platforms where large amounts of data need to be transferred from and back to host systems regularly (Knight and Nowotny, 2018) causing the setup time to exceed simulation time in many systems (Schemmel et al., 2008;Furber et al., 2013). For both challenges, wireless links seem like a promising solution, being able to deliver all spikes in one hop in-between nodes, and sending results back to a number of hosts using the same technology. But while there has been some work on wireless HPC with technologies like free-space optic or 60 GHz radio communication (Li et al., 2020), the reduced energy-efficiency and high error rates make this option at the current state infeasible. Suitable alternatives and our respective implementation will be outlined in the following. . . . . Node to node As a deterministic neural network simulation requires intime arrival of spike messages, it is clear that latency and bandwidth are major factors determining system performance -the less network hops a message needs to arrive at all destinations, the better (Kauth et al., 2020). Communication via SerDes MGT, coupled with 64/66b encoding, offer a good tradeoff between low latency, high-speed and reliability, given a suitable transceiver technology. However, the scalability of the communication architecture is limited by the topology, routing algorithms and casting methods used in the system. In our previous work, we have assessed conventional electrical/optical toroidal mesh topologies and showed how such connection schemes are favorable for neighbor-only communication (Kauth et al., 2020). 
We also demonstrated that they fail to transfer spikes to non-neighbor nodes in an accelerated fashion due to excessive latency, especially when white matter connections are simulated. To tackle this, we introduced so-called long hop connections (superimposed meshes) and appropriate routing algorithms. The exact set of communication methods employed in our system will be derived from the dynamic simulator in Section 3. As mentioned before, the presented FPGA cluster builds on dedicated SerDes transceivers as available in modern off-the-shelf FPGAs. As communication errors are inevitable, proper error handling is essential. While biological neural networks are inherently noisy (Rolls and Deco, 2010), the exact type and amplitude of this noise must remain a model parameter. In a deterministic system, we need to be able to adjust the noise and turn it off at will, which is not possible in the case of uncorrected communication errors. Given the low bit error rate of state-of-the-art communication lines, an acknowledgment-based flow control combined with CRC checks and retransmission can guarantee proper error detection and correction. In our exploration platform, we employ an adapted Go-Back-N ARQ algorithm where a received packet is only accepted if the CRC checks pass, the correct packet order is given and the receiving buffers are free. Lastly, deadlock handling is a crucial aspect. The common topology-agnostic deadlock handling scheme of dropping packets in critical cases is not acceptable in deterministic simulators. In the case of mesh-like topologies, even when containing long hops, a combination of turn-restricted routing and virtual channels can avoid deadlocks from happening altogether (Sobhani et al., 2022). In our system, we observed no deadlocks at all in a multitude of large-scale neural network simulations, no matter which topology and routing were used. However, we expect proper deadlock handling to become ever more important in more complex networks.

2.3.2.2. Host to system
A crucial aspect of the targeted exploration platform is the connection from a host machine to the individual nodes. High bandwidths are necessary as neuronal simulations require large amounts of setup and connectivity data to be uploaded. For instance, the cortical microcircuit model (Potjans and Diesmann, 2014) contains around 300 million synapses; with around 8 B per synapse (considering synaptic weights, delays and IDs) this results in 2.4 GB of data to be uploaded.

FIGURE 4 Synchronization across nodes using two neighbor-to-neighbor synchronization events per timestep.

Most modern FPGAs contain multi-gigabit transceivers (MGTs) which are able to handle this amount of data in a few seconds at most. They can, for example, be interfaced using Gigabit Ethernet links, providing access to the FPGA cluster over an existing network infrastructure. However, the employed NetFPGA board does not contain a dedicated TCP/IP stack. To avoid having to use precious FPGA logic for this, we use a dedicated communication node as an interface between the network and the cluster (see Figure 2B). We opted for a Xilinx Zynq board (ZCU106) which contains both an ARM processor and FPGA logic. A TCP server running on the ARM sends data received on the RJ45 interface to the FPGA logic via a dedicated AXI bus, and vice versa. The Zynq board is programmed with the same reliability layer and SerDes MGT that are used for communication between the nodes.
Thus, no additional logic is required on the simulation nodes. . . . Configuring and debugging A system that is used to evaluate different architectures should not only be designed for flexibility, but also be rapidly reconfigurable. Most FPGA boards primarily support configuration using a bitstream, which is transferred via JTAG either directly to the device, or to an on-board flash memory for non-volatile storage. Since this usually requires a direct connection to the board using for example USB, it is not feasible for a large cluster. Depending on whether the FPGA itself has read/write access to the flash memory, two generic configuration schemes for a distributed system are possible. Firstly, if flash access is available, a host machine could directly transfer the bitstream to one connected node, which then broadcasts it through the system. This concept lends itself to our system as broadcast is a necessary feature for any exploration anyway. Secondly, if no flash access is available, partial reconfiguration can be utilized to load an initial design once into the flash. Further modules can be loaded later in the exploration process. However, as our system is still actively developed, we use JTAG and UART over USB for programming and, importantly, debugging individual boards (shown on the left in Figure 2B). Regardless of the approach chosen, partial reconfiguration can also be used to replace modules that change frequently, such as the neuron model in the workers. This reconfiguration is considerably faster in comparison to complete reconfiguration. In addition, the rest of the system retains its state and can be used directly. . . Neural network testcases . . . Background The dynamics in biological neuronal networks happen in a wide range in terms of time and space resolution -they are inherently multi-scale (Silver et al., 2007). In the domain of biological neuron models, the LIF model can be considered least complex as it focuses mainly on sub-threshold behavior, while still providing meaningful dynamics in large scale simulations. The existing broad variety of models provides better plausibility (Izhikevich, 2004) with extensions that divide the cell body into multiple compartments. The rich variety of synapse behavior is reduced in most simulations to the modeling of the PSC. The most basic model assumes the transfer of a charge packet at the time of arrival of a pre-synaptic action potential (delta function). Including some essential temporal behavior the CUBA or COBA models (Vogels and Abbott, 2005) induce an instantaneous rise combined with a more plausible exponential decay. . /fncom. . As much as solving the underlying differential equations of the more complex models will start to impact performance from a certain complexity, it is not part of the subsequent evaluation that focuses on system bottlenecks with respect to handling the spike messages. The regarded large-scale testcases only include LIF neurons with CUBA synapses which can be efficiently solved in a closed form-the so-called exact exponential integration (Rotter and Diesmann, 1999). . . . Microcircuit The cortical microcircuit model is a full-scale spiking network model of a unit cell of early mammalian sensory cortex, covering 1 mm 2 of its surface. It consists of 77,169 LIF neurons organized into four layers of inhibitory and excitatory populations (Potjans and Diesmann, 2014, Figure 1). The details of the neuron model and its simulation parameters are available in Potjans and Diesmann (2014, Tables 4, 5). 
The neurons are connected randomly via ∼0.3 billion synapses with population-specific connection probabilities. The synaptic strengths as well as transmission delays are distributed normally (Potjans and Diesmann, 2014, Table 5). Besides synaptic connections internal to each neuronal population and in between different populations, every neuron additionally receives Poisson-distributed inputs. These emulate external cortical or thalamic input. The microcircuit belongs to the smallest networks of natural density, i.e., modeling a realistic number of connections with realistic connection probabilities. At the same time, it exhibits firing rates and irregular activity that match experimental in-vivo findings. Therefore, it poses constraints on communication, computation and memory bandwidth that are both challenging and realistic at this scale. Hence, it has become a well-accepted model by the computational neuroscience community and as a result a benchmark to evaluate neuroscience simulators. While other tasks have been used as benchmarks for individual systems in the past, such as randomly-connected networks of 100k neurons (Stromatias et al., 2013) or variations of the balanced Brunel network (Brunel, 2000), the cortical microcircuit is the most widely adopted and commonly-used benchmark to the best of our knowledge. It will therefore be the basis of later analyses and comparisons. It is important to note that the cortical microcircuit is simulated with a timestep of 0.1 ms (as opposed to the 1 ms frequently used in the past) to properly account for small synaptic delays of local axons, and a fanout of around 4,000 (instead of 1,000), increasing its complexity compared to older benchmarks [see e.g. the works of Moore et al. (2012) and Furber et al. (2014)]. For the following benchmarks of our system, we use the cortical microcircuit implementation Potjans_2014 from the PyNEST framework (NEST:: v3.3) without any changes, except setting the poisson_input switch to False . This way, the Poisson input to each neuron is emulated using DC input instead, which was shown by Potjans and Diesmann (2014) to be qualitatively equivalent. The PyNEST implementation is used to initialize the connectome and neuron state variables of the microcircuit as configuration for our cluster. Furthermore, NEST simulations using the very same initializations are run on a traditional HPC cluster, serving as a golden reference to compare and verify the FPGA cluster results to. We run all simulations for 15 min of biological time. . Results In this chapter, in accordance with the three pillars presented, we first establish a network topology and routing scheme, based on assessments with the static simulator. We then characterize the hardware system resulting from this interconnect solution to get an understanding of the system behavior. However, the dynamic simulator consists of individual components, each modeling a specific hardware unit that interacts with others, causing interfering latencies. Therefore, it is important to extract isolated information about their behavior to calibrate the dynamic simulator. For this purpose, we systematically design neural networks which first individually stress the different components of a single node and later reveal the influence of the interconnect. Finally, the dynamic simulator is calibrated using the measurements and compared to the system's real performance on the cortical microcircuit. 
The overarching purpose of this analysis is to (1) showcase our methodology, (2) analyze the speed and efficiency of our hardware cluster, and (3) tune the dynamic simulator so that it can reliably estimate changing system requirements posed by new neuroscience insights and applications in the future (such as changing average firing rates or more complex neuron models). This last point is key in overcoming the chicken-and-egg problem of neuromorphic simulator design described before. We start this analysis by devising a baseline network topology and corresponding routing algorithm using the static simulator. In previous work, we found that long-hop connections are crucial for accelerated, large-scale neuromorphic systems that contain thousands of nodes (Kauth et al., 2020). However, for our case of 35 nodes, we have enough transceivers to realize a more closelyconnected topology that is simple to route through. While an allto-all connection would require too many transceivers, a simple trade-off is the topology shown in Figure 5A. The nodes, arranged in a homogeneous 2D mesh, are each connected to all nodes in the same row (using 6.25 Gbit/s SFP links) and column (using 6 Gbit/s SATA links). Spikes are broadcasted in a two-step fashion-firstly, sent toward all nodes on the same x and y axes as the source node ( Figure 5B), and secondly, vertically (or horizontally) forwarded from the nodes on the x (or y) axis ( Figure 5C). This xy routing algorithm restricts the possible turns, however deadlocks cannot be eliminated (see Section 2.3.2.1). The static simulator estimates the maximum network bandwidth requirement using this topology to be <1 Gbps/node. This is suitable and therefore, the proposed topology and routing algorithm will be used in the following. In all following experiments, we measured the duration of the simulation τ using a driver for the FPGA cluster running on a host computer. The acceleration factor a is calculated based on the time resolution h, which is set to 0.1 ms in all simulations, and the number of simulated time steps n according to Equation 1. . . Characterization of nodes and interconnect An effective way to showcase the workflow of our development framework is the exploration of limitations and bottlenecks of the compute nodes utilized in our FPGA implementation. The outcome of this characterization can be later used to both finetune the dynamic simulation and guide the development of larger compute clusters. While we will provide an analysis tailored to our compute nodes, the methodology can be generically applied to other systems. The major bottlenecks in any distributed system can broadly be attributed to the areas of computation, communication, and memory access. Their individual influence on system performance depends on the computational load of the targeted simulation. While the computation latency is a direct function of the pipeline depth of the chosen implementation and the number of neurons calculated per parallel worker (at least for analytically solvable models), it is only expected to be a bottleneck in scenarios with low network activity. The higher the neuronal firing rates, the more synaptical information will have to be looked up. Furthermore, this can lead to major challenges in system scalability as the required communication bandwidth increases. 
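All acceleration factors reported in the following are computed according to Equation 1, i.e., as the simulated biological time n · h divided by the measured wall-clock duration τ. A minimal helper, where the example numbers reproduce the roughly 20× figure of the full-scale microcircuit reported later in this section:

# Acceleration factor (Equation 1): simulated biological time n*h divided by
# the measured wall-clock duration tau.

def acceleration_factor(n_steps, h_s, tau_wallclock_s):
    return (n_steps * h_s) / tau_wallclock_s

# Example: 15 min of biological time at h = 0.1 ms, finished in ~44.2 s wall-clock.
n = int(15 * 60 / 0.1e-3)
print(f"a = {acceleration_factor(n, 0.1e-3, 44.2):.2f}x")   # ~20.4x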
In this first step of characterization, we design a set of neural networks that are intended to selectively exclude the usage of certain hardware components; for example, a network in which no neuron fires never accesses off-chip memory. The goal is to individually explore limitations of the hardware components that are still used in these cases. While not biologically accurate, these networks serve to gain a better understanding of the system's bottlenecks. We implement all neuronal networks in NEST (Gewaltig and Diesmann, 2007) and extract information regarding neurons and synapses (neuronal states, connectome, etc.) to configure the hardware cluster. We use LIF neurons with CUBA synapses, initialized with the same model constants as the neurons in the cortical microcircuit (Potjans and Diesmann, 2014). Subsequently, we obtain performance parameters such as bandwidths or latencies of the remaining components from the observed acceleration factors. These findings can be used to create simplified system models and to calibrate the dynamic simulator later on.

3.1.1. Computational bottleneck
In this first test, we want to investigate the influence of computation. More precisely, we consider the time needed to compute the dynamics of a single neuron. Therefore, all kinds of inter-node communication and memory accesses should be avoided. For the realization of this test, neurons are loaded onto a single node. By setting the initial membrane potential below the action potential threshold, they are prevented from generating any spikes. However, local synchronization still starts the computation of the neuronal dynamics of the next timestep. For this purpose, each worker sends a message through the router to the scheduler after completing its computation. This influence cannot be avoided without fundamentally changing the behavior of the system. Here and in the following, the system is set up as described in Section 2.3.1.1, i.e., each node contains 10 workers that can compute up to 255 neurons each. The bars "1 × 1" of Figure 6 show the durations per timestep of this scenario when computing different numbers of neurons per worker NpW. The total duration of one timestep can be expressed as τ_s = t_0 + τ_neuron · NpW, where τ_neuron is the time required to calculate a single neuron and t_0 the remaining, in this case constant, duration of each timestep. The value of t_0 comprises all other latencies.

3.1.2. Network latencies
In the next step, we measure the transceiver latency. Again, there is no possibility for a completely isolated observation, since not all other influences can be excluded. However, the synchronization messages of our system offer the possibility to observe the transceiver latency without having to generate spikes. These messages have the same size as spike messages (128 bit) and, unlike spikes, do not cause memory accesses. We therefore deploy a neural network that, as before, never spikes. However, now it is simulated on multiple nodes instead of just one to cause synchronization messages between the nodes. Furthermore, for the sake of simplicity, only measurements with one neuron per worker will be considered below. All non-excludable influences such as local synchronization, scheduling and clock domain crossings (CDCs) can be eliminated by regarding the difference to the single-node case. The latency when sending a 128 bit packet is τ_s(2×1) − τ_s(1×1) = 1,549 ns − 755 ns = 794 ns.
This number does not represent the round-trip time because messages can be sent in both directions at the same time. However, the larger the network, the higher the chance of nodes having to wait for each other, increasing the total latency. For example, the measured delay in a 5 × 1 network increases to an average of 874 ns. For the same network sizes in the y-dimension, the latencies result in 815 ns (1 × 2) and 944 ns (1 × 5), respectively. This difference can be explained by the fact that the horizontal interconnects are operated at a frequency of 6.25 GHz instead of 6 GHz and the latency of the solution is largely dependent on this. Due to the two-hop synchronization required for the present topology, we expect the duration of the largest network (5 × 5) to be approximately doubled to 1,888 ns, compared to 1 × 5. The measurement results indicate a small additional overhead with a duration of 2,144 ns, which can again be attributed to the larger network size.

3.1.3. Local communication
Next, the impact of spikes on the system runtime must be examined. A spike passes through multiple interacting hardware components on each node that introduce additional latencies and bandwidth limitations. This behavior has to be captured by the respective modules of the dynamic simulator. For this purpose, we configure neurons to generate one action potential at each timestep. This is achieved by changing the neuronal refractory period to 0.1 ms and applying the maximum possible external input current. Furthermore, we set each neuron's fanout to zero to avoid lookups, minimizing the influence of memory accesses. Figure 7 shows the results of this experiment for different numbers of neurons on one and two nodes. We now investigate the overhead of a single spike generation, based on the first bar in Figure 7. Compared to the simulation of a single neuron that never spikes, a timestep now lasts 0.903 µs − 0.755 µs = 148 ns longer. This time difference includes local routing and a single memory lookup (which is always performed by the system to retrieve the length of the synaptic list for each incoming spike). However, a large part of this delay is consumed by the local synchronization packets even without a spike, which is why this difference cannot be broken down into individual contributions. Presumably, however, the major part is due to the memory latency. Scaling up to generating 20 spikes by 20 neurons, the impact per spike is calculated to be (1.73 µs − 0.903 µs)/(20 − 1) = 43.526 ns. On two nodes, generating a single spike takes only 59 ns longer than simulating without spikes, compared to the 148 ns overhead on one node. This is expected given that some of the processes involved in the spike handling can take place in parallel to the synchronization between the nodes. Calculating the effects of a spike with 20 neurons per node yields (2.984 µs − 1.608 µs)/(40/2 − 1) = 72.421 ns. This is significantly larger than the impact per spike on one node because we use broadcasting. With two nodes, each node has to process locally generated spikes and incoming spikes from other nodes.

FIGURE 7 System performance with spikes, but without memory lookups. The acceleration factor is limited by neuron computation, synchronization and spike transmission.

3.1.4. Memory bandwidth
Now the memory connection, which has been largely ignored up to now, will be measured to calibrate the simulator's memory model. Here, again, we will try to exclude as many other influences as possible.
Accordingly, the following experiments are carried out on a single node only, such that no network communication takes place. Furthermore, to avoid congestion after the memory lookup, the target neurons of the synaptic lists are assigned in a round-robin fashion and therefore evenly distributed over the workers. In principle, memory accesses can be characterized by two main factors: access latency and data transfer rate. To be able to determine these two separately, different measurements have to be carried out. We decided to vary both the number of accesses and the amount of data requested per access. To measure the resulting memory bandwidth, we count the number of memory accesses during one simulation. Then, we divide the number of bytes read from memory during the entire simulation by the simulation's runtime. This gives us an average memory bandwidth for one run. However, the existing overheads for the computation of the neuron dynamics as well as the already described local synchronization are part of these measurement results. To conduct the experiments, each neuron is again configured to generate an action potential in each timestep so that timesteps without a lookup do not have to be accounted for in the calculation. Our first variation parameter, the number of accesses, can be varied by changing the number of neurons: each additional neuron creates an additional parallel memory read request (e.g., four neurons result in four parallel requests). The amount of requested data per access is set by the neuronal fanout. The results of sweeping both parallel accesses and request lengths are shown in Figure 8.

FIGURE 8 Memory bandwidth experiment: achieved mean memory bandwidth for different numbers of parallel accesses and different synaptic list lengths. The neural network is configured to have evenly distributed synapses and therefore less congestion in subsequent components of the system, enabling this case to examine the lookup bottleneck.

As expected, the average memory bandwidth utilization increases with both the requested list length and the number of requests. A single request can only use one of the two memory channels and therefore results in less than half of the maximum achievable data transfer rate. The experiment shows that in the optimal case, the hardware achieves at least 90% of the theoretically possible bandwidth for long synaptic lists. Consequently, there is hardly any further potential for optimization in this case, given that the memory is not utilized at all times. With shorter synaptic lists, however, the achieved bandwidth drops drastically and can only be compensated to a limited extent by the parallelization of requests. For significantly larger clusters, a lookup in the target node, as performed in the case of broadcast, becomes difficult. Possible solutions to this problem have already been presented by Kauth et al. (2020). Equation 2 shows the relationship between memory latency, bandwidth and access time:

t_access = t_lat + n_bytes / BW_max. (2)

In general, a certain time t_lat elapses between the memory request and the first byte received. In the case of DDR-SDRAM, this is often one to several 100 ns. In a simplified model, the requested data is then transferred with the available maximum memory bandwidth BW_max. In our measurements, besides the number of bytes requested, only the total access time is available. Figure 8 shows the apparent average memory bandwidth BW_mean. Based on the measured data, the approximate memory latency is calculated using Equation 2 as t_lat = 1,000 synapses · 8 B/synapse · (1/(5 GB/s) − 1/(10 GB/s)) = 800 ns. This value is far above the real memory latency. It includes several other latencies occurring during spike generation and lookup. Nevertheless, it can be used for calibration of the dynamic simulator.
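The latency estimate above can be reproduced with the simplified access model of Equation 2; in the sketch below, the 5 GB/s mean and 10 GB/s peak bandwidths are the values used in the text, while everything else is illustrative:

# Simplified DRAM access model of Equation 2: a fixed latency t_lat until the
# first byte arrives, after which data streams at the peak bandwidth BW_max.
# Solving for t_lat from a measured mean bandwidth BW_mean:
#   t_access = n_bytes / BW_mean = t_lat + n_bytes / BW_max
#   =>  t_lat = n_bytes * (1 / BW_mean - 1 / BW_max)

def apparent_latency_ns(n_bytes, bw_mean_gbs, bw_max_gbs):
    return n_bytes * (1.0 / bw_mean_gbs - 1.0 / bw_max_gbs)   # bytes / (GB/s) = ns

# 1,000 synapses of 8 B each, 5 GB/s measured mean vs. 10 GB/s peak bandwidth.
print(f"t_lat = {apparent_latency_ns(1000 * 8, 5.0, 10.0):.0f} ns")   # 800 ns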
3.1.5. Local routing and ring buffer
In the previous experiment, we assumed that the targets of the synaptic lists read from memory are evenly distributed to explore the memory's capabilities. In practice, however, this may not always be the case. Local routing, consisting of a simple round-robin arbitration, can become a bottleneck if many spikes target the same ring buffer. Consequently, if the targets of synaptic lists are unevenly distributed to a significant extent, the data transfer rate of the lookup is limited due to back-pressure. In the following experiment, we will therefore determine the achievable total bandwidth of the ring buffer for such poorly distributed synaptic targets. For this purpose, the synaptic lists are generated in such a way that each neuron has only synaptic connections to itself (multiple autapses). As shown in Figure 9, the synaptic list length and the number of neurons, and thus the number of parallel requests, are varied, just like in the previous experiment. Now, for a single access, the bandwidth converges as expected to the bandwidth of a single ring buffer of ∼1.5 GB/s. Similarly, the total bandwidth of two parallel accesses converges to about 3 GB/s. For more than two simultaneous accesses, an anomaly can be observed. The total bandwidth increases rapidly up to a list length of 96 synapses, only to drop again significantly. While the increase can be explained by the higher speed of the DRAM when fetching longer lists, as already demonstrated in the previous experiment, the subsequent drop is due to the poor distribution of synaptic targets. With small synaptic list lengths, frequent switching of the local router takes place. Internal FIFOs can thus compensate for the limitation of individual ring buffers. However, this becomes increasingly difficult in the case of longer synaptic lists due to the relatively small FIFOs. The case demonstrated here is a worst-case scenario and realistic neural networks have lower requirements. However, depending on the degree of non-uniformity of the synaptic lists, their length and the memory bandwidth, the achieved memory bandwidth can still be significantly lowered by congestion (compared to the optimal case in Figure 8). To tackle this, for example, the capacity of the FIFOs prior to the ring buffers can be adjusted. It is also possible, at the cost of higher latencies, to divide the lookup of longer lists into several smaller memory requests and interlace them. However, in the upcoming experiments we will focus on the microcircuit model, for which the capacity of the existing FIFOs was designed to be sufficient to compensate for any congestion.

3.2. Characterization of cluster
Now that we have all the data we need to calibrate the dynamic simulator, we are interested in some other aspects of our system. Scalability is a fundamental requirement for neuromorphic simulators that want to study significant parts of the human brain. As neuroscience experiments cover widely varying degrees of network size and complexity, understanding how both small and large workloads perform on different cluster sizes is crucial. Our system serves as a validation and calibration basis for the dynamic simulator and is therefore, in several respects, not designed for scalability.
However, given that the hardware platform can already perform neuroscience simulations, we want to investigate which network configurations best support which complexity.

FIGURE 9 Memory bandwidth experiment: achieved mean memory bandwidth for different numbers of parallel accesses and different synaptic list lengths. The neural network is configured to have unevenly distributed synapses and therefore more congestion in subsequent components of the system, enabling this case to examine the ring buffer bottleneck.

The characterization of individual nodes has already shown how the additional latency and synchronization imposed by a small cluster of independent nodes counteract the decreased nodal load of off-chip memory accesses and compute. To further assess the behavior of the system, we therefore perform strong and weak scaling experiments. In an effort to explore complex interactions between different bottlenecks, we choose the cortical microcircuit as a realistic benchmark (cf. Section 2.4). The example PyNEST implementation lends itself as a suitable testcase as it allows scaling the number of neurons with automatic adaptation of the connectivity to more or less maintain the mean firing rate.

3.2.1. Strong scaling
In strong scaling benchmarks, the number of nodes is increased while the problem size is fixed. If the required time for solving the problem reduces linearly with increasing processing power, the system is considered to show strong scaling behavior. Since the simulation of biological neural networks is largely based on communication between a large number of neurons, we do not expect strong scaling to apply here. On the contrary, in some cases we could even expect smaller clusters of highly utilized nodes to achieve a higher simulation speed, since this keeps a large part of the communication within the nodes. The optimal cluster size for computing a given neural network is a trade-off between latencies introduced by the interconnect and local bottlenecks, such as memory and computation, making it difficult to predict. Three different strong scaling experiments are shown in Figure 10. In these examples, the speed of the simulation increases with increasing cluster size, but significantly sub-linearly, so that strong scaling, in the strict sense, does not hold here.

3.2.2. Weak scaling
On the other hand, in weak scaling benchmarks, larger problems should be solved in the same time by using more hardware. This is a characteristic that we strictly demand from simulators of biological neuronal networks. Specifically, the cortical microcircuit with its size of about 1 mm² represents only a tiny part of the whole brain. Thus, to meet our long-term goal of simulating a significant portion of the human brain in an accelerated manner, a benchmark of the microcircuit is only sufficient together with the property of weak scalability. To investigate weak scalability in our system, we performed an experiment with multiple simulations on different cluster sizes, each with 1,929 neurons per node, as shown in Figure 10. This puts our system at a relatively high load which should represent a realistic case. Network sizes below 2 × 2 were excluded since they require less than two synchronizations and are therefore not directly comparable. In general, investigating weak scaling requires a large number of nodes since small drops in operation speed can either continue or saturate with growing network sizes, e.g., when caused only by small deviations between the nodes.
As can be seen, while there is a slightly decreasing trend from 4 × 2 to 5 × 6, the acceleration factors in general all aggregate around 20× acceleration. Larger clusters are required to properly judge whether weak scaling applies or not. Previous simulations have shown, however, that the use of broadcasting prohibits scalability to a large extend (Kauth et al., 2020). Firstly, network load . /fncom. . increases proportional to the number of nodes, independent of the neuronal fanout. Secondly, broadcasting requires postsynaptic lookups, resulting in a higher number of smaller memory accesses compared to unicasting. As shown in our previous work, longhop-based topologies with a dedicated, directed casting scheme are suitable solutions for large scale networks. In a network of our current size, however, this scheme would still be disadvantageous. . . Testcase: microcircuit To conclude the measurements on the hardware system, we perform a full sweep over several orders of magnitude by scaling the number of neurons in the cortical microcircuit model (fanout is kept at full-scale) and simulating it on different network configurations. This should answer the question how our system performs for realistic use cases of varying complexity. While previous experiments can be understood as ways to better understand the limitations and behavior of our platform, this experiment is relevant to neuroscientists who aim to accelerate and parallelize their experiments -the faster the FPGA cluster is at small-and large-scale experiments, the more usable it is to aid neuroscience research in the future. The results are shown in Figure 11. The simulation of small neuronal networks works best on small clusters, particularly in the case of a single node where no external communication takes place, resulting in the highest achievable acceleration factor for a 0.1 % microcircuit of 124.36. At larger scales, memory access starts to limit system speed. At this point, distribution among several nodes becomes advantageous. In contrast, large cluster configurations do not achieve significantly larger acceleration factors even with smallest neural networks, since synchronizations are a major limiting factor. Accordingly, scaling up the neural network reduces the simulation speed only slightly. For example, the cluster of 35 nodes reaches an acceleration factor of 33.78 when simulating 77 neurons, while the ∼1,000 times larger full-scale microcircuit with 77,169 neurons can still be simulated with 20.36×. . . Correctness The entire system was designed with reproducibility and determinism as key features in mind. However, the exact simulation results will still deviate from any ground truth generated on a different system due to certain design decisions in hardware. In our case, we for example use 32 bit fixed-point for saving and accumulating weights in the ring buffers before calculating the neural state update in 32 bit floating-point. The resulting deviations compared to a 64 bit floating-point operation are small, yet accumulate over time, possibly leading to a neuron spiking one timestep earlier or later, which in turn affects many other neurons. As spiking neural networks are chaotic systems sensitive to even small perturbations (van Vreeswijk and Sompolinsky, 1998), the resulting network activity on different systems can hardly be compared on a spike-by-spike basis. The most simple and direct comparison to NEST can be drawn by regarding the total number of generated spikes. 
For a specific microcircuit initialization, 222,545,972 spikes were generated on the hardware platform, compared to 221,532,831 spikes generated when running NEST on an HPC platform. The resulting deviation of 0.46% is noticeably smaller than the difference between two different microcircuit initializations executed on the same system, which we observed to reach more than 1%. However, this comparison is fairly limited as it does not capture any dynamic behavior of the network. To properly judge the correctness of a given simulation, we follow the established way of comparing the network activity of simulations on our system to some ground truth results, using spike-based statistics. In our case, we take NEST simulations from a high-performance computing cluster as ground truth. In particular, we compare the following well-established statistics (Gutzen et al., 2018):
• Time-averaged firing rates of single neurons.
• Coefficients of variation of inter-spike intervals.
• Pearson correlation coefficients between the spike trains of a randomly sampled set of 200 neurons, binned at 2 ms.

FIGURE 11 (caption): Achieved acceleration factor for the cortical microcircuit at different scalings of the number of neurons; the scaling of synapses is kept at full scale. The single-node network has network synchronization disabled, while the two-node networks perform only a single synchronization per timestep; all other networks require two synchronizations per timestep.

FIGURE 12 (caption): Comparison of spike-based statistics measured on our platform and NEST simulations run with different seeds (min-max values marked as gray area). The simulations were run for 15 min of biological real-time.

In Figure 12, the spiking statistics of our largest experiment, the full-scale cortical microcircuit, and the corresponding results from the NEST ground truth are shown. In particular, we run 10 simulations in NEST for 15 min of biological time using different seeds to estimate the range of acceptable deviation of neuronal states and connectome (the resulting min and max values are plotted as a gray corridor). Thereby, as is common in the literature, the first 10,000 timesteps are ignored in order to exclude transient effects. We can see that the deviation of the results on our cluster from the reference is minimal. Compared to second-order statistics reported in other works (Rhodes et al., 2019; Heittmann et al., 2022), it can be seen that we are well within the accepted range of deviation from the baseline. Neuroscience research can therefore be conducted on the cluster as safely as on any other system.

State of the art. Before contextualizing our work within current state-of-the-art systems, we want to highlight energy efficiency as an additional metric for comparison. While not the focus of the development effort for this platform, energy efficiency is generally one key motivation for developing brain-inspired algorithms and hardware. In particular, the energy per synaptic event in the human brain has been estimated to be in the range of 19-760 fJ. In light of these impressive values, the neuromorphic computing community takes the human brain as a major inspiration for novel computing architectures. To better assess how close or far we are to these numbers, and to better compare to existing systems, we derived the energy consumption per synaptic event for our platform. While running the full cortical microcircuit simulation, we measured the power consumption of multiple nodes using a current clamp, averaged the values, and verified these results by querying the on-board power management unit via IIC.
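As an aside on the correctness methodology above: the three statistics used for the comparison (per-neuron firing rates, CVs of inter-spike intervals, and pairwise Pearson correlations of binned spike trains) can be computed from recorded spike times with a few lines of NumPy, as sketched below. This is an illustrative re-implementation, not the paper's analysis code; dedicated libraries such as Elephant provide reference implementations of these measures.

```python
import numpy as np

def firing_rates(spike_trains, t_start, t_stop):
    """Time-averaged firing rate (spikes/s) per neuron.
    spike_trains is a list of 1-D arrays of spike times in seconds."""
    duration = t_stop - t_start
    return np.array([np.sum((st >= t_start) & (st < t_stop)) / duration for st in spike_trains])

def isi_cv(spike_trains):
    """Coefficient of variation of inter-spike intervals per neuron (NaN if fewer than 3 spikes)."""
    cvs = []
    for st in spike_trains:
        isi = np.diff(np.sort(st))
        cvs.append(np.std(isi) / np.mean(isi) if isi.size >= 2 else np.nan)
    return np.array(cvs)

def pairwise_pearson(spike_trains, t_start, t_stop, bin_size=2e-3, n_sample=200, seed=0):
    """Pearson correlation coefficients between 2 ms-binned spike trains of a random neuron subset.
    Silent neurons produce NaN entries, which should be filtered before plotting histograms."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(spike_trains), size=min(n_sample, len(spike_trains)), replace=False)
    edges = np.arange(t_start, t_stop + bin_size, bin_size)
    binned = np.array([np.histogram(spike_trains[i], bins=edges)[0] for i in idx])
    corr = np.corrcoef(binned)
    return corr[np.triu_indices_from(corr, k=1)]  # unique off-diagonal pairs
```

Choosing t_start to skip the initial transient (the first 10,000 timesteps in the comparison above) mirrors the convention used for the ground-truth runs.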
Extrapolating the 26.54 W measured per node to all 35 nodes, we arrive at a total power consumption of P = 928.9 W. Using the on-board power management unit, we measured that less than half of this is consumed by the FPGA itself; off-chip memories and periphery draw most of the power. The resulting energy per synaptic event is calculated, as usual, as the total energy consumption of the system during the simulation (given as the measured power integrated over the simulation time t) divided by the number of all occurring spikes S times the average neuronal fanout f_o, resulting in the expression E_syn.ev. = (P · t) / (S · f_o). With a total number of S = 222,545,972 occurring spikes, an average fanout of f_o = 3,880 and a simulation time of t = 15 min / 20.36, we arrive at an energy of 47.55 nJ per synaptic event. Table 1 shows the achieved acceleration and energy efficiency of various recent state-of-the-art systems. We focus this comparison on systems running the cortical microcircuit model. It has been seen in the past that efficiency measurements compare poorly when switching simulation benchmarks. For instance, while Stromatias et al. (2013) reported SpiNNaker to have an energy consumption of ~20 nJ per synaptic event with a network of 200k randomly connected Izhikevich neurons, the simulation of the cortical microcircuit drew on average 5,800 nJ per synaptic event on the same system (van Albada et al., 2018), a difference of over two orders of magnitude. For this reason, comparisons to platforms not simulating the same task are inconclusive. We focus our analysis on the microcircuit due to its widespread adoption in the community as a benchmark for neuroscience simulations. As can be seen, our system compares favorably in both measures to existing platforms. In terms of speed-up, we outperform the currently fastest platform by more than 5×. Along the same line, our platform provides 10× lower energy per synaptic event than the state of the art. The energy efficiency of 48 nJ per synaptic event is mainly driven by the achieved acceleration factor. Here, as well, it is important to mention that this energy efficiency is not yet the final frontier, even on our system. The off-the-shelf FPGA board we use was designed as an evaluation platform and is therefore not optimized in terms of power consumption. Even in the idle state, each board requires almost the full power measured during the simulation. The reasons for the performance differences can be manifold, and the systems are too complex to investigate the exact causes. One of the reasons is local synchronization. Some other systems use global synchronization, which requires packets to travel a longer distance and pass through a central network node that potentially becomes a bottleneck. Another possible reason is the network topology. Due to our small cluster size relative to SpiNNaker or CsNN, we can easily reach a high connectivity, reducing the mean network latency. However, there are solutions to this problem even for large cluster sizes, namely long-hop connections (Kauth et al., 2020) instead of neighbor-only topologies like the hexagonal mesh of SpiNNaker. Smaller systems such as single-GPU simulators, on the other hand, suffer from limited compute power. Memory integration also varies greatly between our system and others. General-purpose computers usually have an inherently good memory interface, which is often surpassed many times over by GPUs.
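For transparency, the 47.55 nJ figure derived above can be reproduced directly from the stated measurements; the short calculation below only reuses the numbers given in the text (928.9 W total power, 15 min of biological time at 20.36× acceleration, 222,545,972 spikes, average fanout 3,880) and is not taken from the authors' tooling.

```python
# Recomputing the energy per synaptic event from the quoted measurements.
P = 928.9                    # total power in W (26.54 W per node, extrapolated to 35 nodes)
t = 15 * 60 / 20.36          # wall-clock simulation time in s (15 min biological time / acceleration)
S = 222_545_972              # total number of spikes in the run
f_o = 3_880                  # average neuronal fanout

E_syn_ev = P * t / (S * f_o) # energy per synaptic event in J
print(f"{E_syn_ev * 1e9:.2f} nJ per synaptic event")  # prints ~47.55 nJ
```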
With such general-purpose systems, the bottlenecks are likely to be elsewhere, such as in the network, which in turn has a massive impact on their scalability. Here, the freely distributable MGTs of FPGAs are the decisive advantage over GPUs. On the one hand, GPU-based simulators impress with their simplicity in commissioning and configuration, as well as with high simulation density due to their fast memory interface and the large number of execution units. However, their communication capability, which is designed for 1-to-1 transmission, makes it difficult to combine them into larger systems. FPGA boards, on the other hand, usually have more limited memory interfaces.

Simulator assessment. In this last step, the first iteration of our three-pillar approach is completed. After we used the results from our synthetic measurements, presented in Section 3.1, to calibrate the dynamic simulator, its accuracy will now be assessed. For this purpose, we use the microcircuit measurements from Section 3.3 as a reference and examine the simulator's predictions for the same scenarios. It is important to note that the measurements of the microcircuit were not used in any way to calibrate the dynamic simulator further. Figure 13 shows the comparison between the hardware measurement and the prediction of the dynamic simulator, for the smallest and largest cluster configuration and different scalings of the cortical microcircuit. While the uncalibrated simulator allows qualitative comparisons, the absolute values are far from reality. This is remarkable since the modules of the simulator were adjusted using public specifications from data sheets. After calibration, differences can still be observed, but the results now allow quantitative predictions. Furthermore, differences are to be expected, especially with small networks, since compromises were made in the development of the simulator in order to keep performance high. For example, small elements such as certain FIFOs or the multiplexing of messages were omitted, which only have a significant influence on the acceleration factor for small networks. Most importantly, the digital twin has to provide a good estimate for larger scenarios, which is confirmed by the measurements shown here.

Discussion. Our research targets the development of hardware systems that execute neuronal network models of natural density. A wide variety of solutions exists today, ranging from pure software solutions running on HPC clusters, over dedicated digital hardware systems, to approaches mapping the computations into the analog domain. None of the existing solutions meets the future requirements on computational capability in terms of model complexity and simulation speed while, at the same time, offering the required flexibility and determinism. Flexibility is a requirement for the exploratory research done in the domain of computational neuroscience, and determinism offers the ability to reproduce experiments and investigate noisy properties. While being flexible, CPU- or GPU-based clusters are designed to support scientific simulations with fundamentally different requirements on the transformation of information, i.e., the way computations are executed and how results are communicated. Any analog or otherwise specialized hardware realization of a neuromorphic system defines its capabilities during the specification phase based on the analysis of the biological requirements (e.g., average firing rates, minimal synaptic delays, etc.) that are known at the time.
As much as this approach has the potential to reach high performance, it is the source of the chicken-and-egg problem as a realized system becomes outdated by the knowledge it helps to gain. The neuroAIx-Framework overcomes these limitations by relying on three pillars: a fast analytical static simulator for design exploration, a slower, iterative dynamic simulator for accurate estimation of system behavior, and the FPGA cluster itself. On one side, learning in the first pillar will directly constrain the design space in the next, more involved exploration of the second pillar and so forth. On the other side, learnings, as well as calibration data, feed back to the earlier pillars to calibrate the models and thereby refine their quantitative assessment. The inaccuracy of the predictions of our cycle accurate model based on data-sheet specifications have shown the relevance for this calibration. As the models of any analyzed system architecture and neuroscience experiment are specified as code, modifications are possible throughout. More specifically, Table 2 demonstrates the flexibility of our framework by giving explicit examples of possible revisions. Depending on latest biological requirements and available hardware, the expense of adaptions of the three pillars can be estimated. In general, we consider this strictly coupled, multilevel prototyping most suitable to overcome the chicken-and-egg problem by short iteration loops due to the ability of performing rapid exploration, estimation and precise predictions. As an application example of minimal complexity, the microcircuit was considered as as baseline. Study of prior art points to three potential bottlenecks in such hardware systems: (1) communication of spikes, (2) computation of neuronal dynamics, and (3) off-chip memory transactions. During the development of the evaluation platform, we already followed the presented methodology leading to the conclusion that a compute cluster with a proprietary communication fabric (Kauth et al., 2020) would be best suited to execute the emulation at an acceptable speed. Following the microcircuit model, we realized support for the LIF neuron model. The FPGA structure allows executing this with a high degree of parallelism in time and space. Rather instrumental are the many local memories providing on-chip storage for the system state of the individual neuron models, which already reduces the burden on the external memory interface. Evaluations based on the first and second pillar indicated having two memory channels directly attached to the programmable logic would match the performance of the other system components. At the same time, High-Bandwidth Memories (HBM) appeared of no additional benefit as the latency of random accesses is decisive. Performance evaluations of the realized system confirmed this prediction. This way, we avoided a transition to model-specific optimizations such as on-the-fly generation of connectivity information, preserving the option to upload predefined connectomes as well as the flexibility to accommodate plasticity. Just as HPC clusters get continuously updated, more recent FPGA generations provide up to 8× faster transceivers, four memory channels and more and faster logic resources. Hence, we see a persistent advantage in using such FPGA clusters retaining the demonstrated 20× speed-up w.r.t. biological real-time, i.e. 10× speed-up over non-FPGA platforms. This comes on top of the inherent flexibility and the deterministic operation of our system. 
Even the energy per synaptic event of 48 nJ is 10× less than any other platform although this was no optimization criterion during the design of the system. In conclusion, an upscaled FPGA cluster could act as an intermediate system solution before next-generation neuroscience simulation platforms become available. As a next step, we are realizing a high-level web-based interface to specify, execute and analyze neuroscience simulations on the cluster. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Author contributions KK and TS contributed to the development of the FPGA cluster, implementation of the microcircuit model, performed the simulation and data analysis, and wrote the paper. KK supervised the development of the static-and dynamic simulators while VS has contributed to the development of the dynamic simulator. TG was the supervisor of the whole project. All authors contributed to the article and approved the submitted version. Funding This project has received funding from the Helmholtz Association's Initiative and Networking Fund under project number SO-092 (Advanced Computing Architectures, ACA), and the Federal Ministry of Education and Research (BMBF, Germany) through the project NEUROTEC II (grant number 16ME0399) and Clusters4Future-NeuroSys (grant number 03ZU1106CA).
2023-04-20T13:19:23.414Z
2023-04-20T00:00:00.000
{ "year": 2023, "sha1": "6ab3709a5b712360f60d65d7c061d8d10d91680a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "6ab3709a5b712360f60d65d7c061d8d10d91680a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
133608716
pes2o/s2orc
v3-fos-license
Antimicrobial and Antibiofilm N-acetyl-L-cysteine Grafted Siloxane Polymers with Potential for Use in Water Systems Antibiofilm strategies may be based on the prevention of initial bacterial adhesion, the inhibition of biofilm maturation or biofilm eradication. N-acetyl-L-cysteine (NAC), widely used in medical treatments, offers an interesting approach to biofilm destruction. However, many Eubacteria strains are able to enzymatically decompose the NAC molecule. This is the first report on the action of two hybrid materials, NAC-Si-1 and NAC-Si-2, against bacteria isolated from a water environment: Agrobacterium tumefaciens, Aeromonas hydrophila, Citrobacter freundii, Enterobacter soli, Janthinobacterium lividum and Stenotrophomonas maltophilia. The NAC was grafted onto functional siloxane polymers to reduce its availability to bacterial enzymes. The results confirm the bioactivity of NAC. However, the final effect of its action was environment- and strain-dependent. Moreover, all the tested bacterial strains showed the ability to degrade NAC by various metabolic routes. The NAC polymers were less effective bacterial inhibitors than NAC, but more effective at eradicating mature bacterial biofilms. Introduction When biofilm communities develop on industrial surfaces, they constitute a reservoir of various bacterial strains, including pathogens and opportunistic pathogens. The formation of biofilms often results in difficult-to-treat, chronic infections. In the case of technical materials in drinking water systems, they can lead to secondary contamination of water and biocorrosion processes [1,2]. Moreover, adhered bacterial cells produce extracellular polymeric substances (EPSs) which promote the development of biofilm structures. Bacterial biofilms usually express starvation phenotypes and defense mechanisms. As a consequence, they become more resistant to biocids than planktonic cells [3]. Potential antibiofilm strategies may be based on prevention of initial bacterial adhesion, inhibition of biofilm maturation and biofilm dispersion or eradication [4]. One common approach is to treat or incorporate surfaces with biocides, such as metal ions or metal nanoparticles (silver, cooper and mercury) or chemical agents (triclosan, chlorhexidine and quarternary ammonium salts). However, these approaches are limited, in particular by the short lifetimes and toxicity of such biocides [5]. Bacterial Isolates In total, 40 bacterial monocultures were isolated from the biofilm samples, using the plate reduction technique on PCA agar. The bacterial morphotypes varied, with beige, cream, colorless or dark-violet colonies of regularly shaped small (2 mm) and irregular larger (4-5 mm) colonies, all with a characteristic slimy appearance. In general, the morphotypes were Gram-negative bacteria. The mixed bacterial populations were very difficult to separate. Therefore, the streak plate procedure was performed several times on PCA agar to obtain each morphotype as a pure culture. Finally, six bacterial morphotypes with the ability to form characteristic slimy colonies on agar plates were selected for genetic identification. The bacterial monocultures were identified at the species level by sequencing their 16S rRNA genes, which were amplified through PCR. The nucleotide sequences were compared with those obtained from the National Center for Biotechnology Information (NCBI) and deposited in the GenBank database with accession numbers. 
The isolated bacterial strains were identified as Agrobacterium tumefaciens, Aeromonas hydrophila, Citrobacter freundii, Enterobacter soli, Janthinobacterium lividum and Stenotrophomonas maltophilia ( Table 1). Some of these isolates were typical water microbiota. According to the literature, Gram-negative bacteria are commonly found in water systems. Studies conducted by Penna et al. [24] on water samples collected from public distribution installations confirmed the presence of the Gram-negative bacteria Pseudomonas sp., Flavobacterium sp., Acinetobacter sp. and Enterobacteriaceae, C. freundii and E. soli (coliforms) are opportunistic pathogens often found in water, sewage and the intestinal tracts of animals and humans [25][26][27]. A. tumefaciens originates in plants and soil [28,29]. S. maltophilia frequently colonizes humid abiotic surfaces in water installations, mechanical ventilation systems and medical devices [30], as do A. hydrophila [31][32][33] and J. lividum, which are often isolated from water and water-living animals [34]. S. maltophilia is closely related to Pseudomonas and Xanthomonas genera. They are opportunistic pathogens that cause human respiratory tract infections, endocarditis, bacteremia, meningitis and urinary tract infections, which are often difficult to treat [35]. Despite their different taxonomic characteristics and biochemical features, the bacterial isolates were able to form slime on the PCA agar plates, often in colonies with compact structures. It seems that this feature helps them to adhere to or coaggregate with other cells and surfaces, while strengthening the structures of the biofilms. Growth Inhibition of Bacteria by NAC Six bacterial strains, A. tumefaciens, A. hydrophila, C. freundii, E. soli, J. lividum and S. maltophilia, were used to evaluate the antibacterial activity of NAC. The influence of this compound was determined by the standard two-fold dilution method in two kinds of culture media: minimal M1 (50-fold diluted buffered peptone water, BPW) and rich M2 (Triptic Soy Broth, TSB). No significant reduction in bacterial density was observed after 48 h of incubation for any of the strains in M2 medium, except A. tumefaciens. The NAC concentrations at which bacterial growth [ • McF] was slightly inhibited were similar for all the studied strains, at 0.25% (w/v). However, at this concentration, the inhibition of bacterial growth by NAC was significantly higher in the minimal M1 medium (Figure 1). These results show that the antibacterial activity of NAC depends on the availability of organic compounds which in effect can protect planktonic bacterial cells. Some of these isolates were typical water microbiota. According to the literature, Gram-negative bacteria are commonly found in water systems. Studies conducted by Penna et al. [24] on water samples collected from public distribution installations confirmed the presence of the Gram-negative bacteria Pseudomonas sp., Flavobacterium sp., Acinetobacter sp. and Enterobacteriaceae, C. freundii and E. soli (coliforms) are opportunistic pathogens often found in water, sewage and the intestinal tracts of animals and humans [25][26][27]. A. tumefaciens originates in plants and soil [28,29]. S. maltophilia frequently colonizes humid abiotic surfaces in water installations, mechanical ventilation systems and medical devices [30], as do A. hydrophila [31][32][33] and J. lividum, which are often isolated from water and water-living animals [34]. S. 
maltophilia is closely related to Pseudomonas and Xanthomonas genera. They are opportunistic pathogens that cause human respiratory tract infections, endocarditis, bacteremia, meningitis and urinary tract infections, which are often difficult to treat [35]. Despite their different taxonomic characteristics and biochemical features, the bacterial isolates were able to form slime on the PCA agar plates, often in colonies with compact structures. It seems that this feature helps them to adhere to or coaggregate with other cells and surfaces, while strengthening the structures of the biofilms. Growth Inhibition of Bacteria by NAC Six bacterial strains, A. tumefaciens, A. hydrophila, C. freundii, E. soli, J. lividum and S. maltophilia, were used to evaluate the antibacterial activity of NAC. The influence of this compound was determined by the standard two-fold dilution method in two kinds of culture media: minimal M1 (50-fold diluted buffered peptone water, BPW) and rich M2 (Triptic Soy Broth, TSB). No significant reduction in bacterial density was observed after 48 h of incubation for any of the strains in M2 medium, except A. tumefaciens. The NAC concentrations at which bacterial growth [°McF] was slightly inhibited were similar for all the studied strains, at 0.25% (w/v). However, at this concentration, the inhibition of bacterial growth by NAC was significantly higher in the minimal M1 medium ( Figure 1). These results show that the antibacterial activity of NAC depends on the availability of organic compounds which in effect can protect planktonic bacterial cells. Numerous studies have confirmed that organic matter interferes with the activity of antimicrobials [36][37][38]. The interaction between organic matter and antimicrobial substance leads to reduced antimicrobial activity. Therefore, the antimicrobial activity of NAC can be obscured by the environment and depends strongly on its chemical composition. Similarly, Yin et al. [39] suggest that NAC may be unsuitable as an antibacterial agent in the presence of high concentrations of organic matter. Numerous studies have confirmed that organic matter interferes with the activity of antimicrobials [36][37][38]. The interaction between organic matter and antimicrobial substance leads to reduced antimicrobial activity. Therefore, the antimicrobial activity of NAC can be obscured by the environment and depends strongly on its chemical composition. Similarly, Yin et al. [39] suggest that NAC may be unsuitable as an antibacterial agent in the presence of high concentrations of organic matter. Adhesion Abilities in the Presence of NAC Bacterial attachment to the native glass surface in M1 culture medium was assessed by luminometry throughout the 6-day incubation period. This minimal medium was used because, according to the literature, both biofilm formation and biofilm resistance to antimicrobials may be stimulated in a water environment poor in carbon sources. Myszka and Czaczyk [40] report that starvation has a greater impact on the process of cell attachment. Other studies conducted by Rochex and Lebeaultthe [41] have shown that the extent of biofilm accumulation increases as the nitrogen concentration falls from C/N = 90 to C/N = 20. In our study, for culture media M1 and M2, the C/N ratios were likewise very low, at 3.63/3.95. However, in the M1 medium, the availability of organic compounds was very limited (0.2 g/L) ( Table 2). 
Table 2. Chemical characteristics of the culture media used in the study.

According to the literature, NAC can reduce the formation of biofilms by S. epidermidis [6], P. aeruginosa [10], H. pylori [11], S. maltophilia and B. cepacia [7] on various biotic and abiotic surfaces. Biofilms have been reported to be more affected by NAC than planktonic cultures, suggesting specific antibiofilm activity against the tested bacteria [7]. The results of our study confirmed the antibiofilm action of NAC. However, the final effect was strain-dependent. The variable activity of NAC may be due to the chemical nature of the EPS, as well as to the enzymatic activity of the bacterial strains capable of NAC degradation. The chemical characteristics of bacterial EPS are strain-dependent, and also depend on the age of the biofilms [42]. Therefore, it was decided to evaluate the capacity of the tested bacteria to degrade NAC.

Bacterial Capacity for NAC Degradation. A comparison of the HPLC chromatograms and MS spectra for solutions obtained after the incubation of bacterial cells with the control NAC solution reveals that all the tested bacterial strains showed the ability to degrade NAC in water, when other potential sources of C and N were absent. Figure 3A presents examples of chromatograms for three tested bacterial strains: A. hydrophila, C. freundii and E. soli. These chromatograms have slight differences from each other, but present definite differences from that of the control sample.
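A quick plausibility check of the m/z assignments reported below can be made by computing monoisotopic [M + H]+ values for the candidate metabolites (the NAC disulfide, the mixed NAC-cysteine disulfide, alanine and N,N-dimethylalanine). The sketch below is an illustrative calculation using standard monoisotopic atomic masses; it is not part of the authors' LC-MS workflow.

```python
# Monoisotopic [M + H]+ values for candidate NAC metabolites (illustrative cross-check).
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915, "S": 31.972071}
PROTON = 1.007276

def mono_mass(formula: dict) -> float:
    return sum(MASS[el] * n for el, n in formula.items())

nac = {"C": 5, "H": 9, "N": 1, "O": 3, "S": 1}   # N-acetyl-L-cysteine
cys = {"C": 3, "H": 7, "N": 1, "O": 2, "S": 1}   # cysteine
ala = {"C": 3, "H": 7, "N": 1, "O": 2}           # alanine
dma = {"C": 5, "H": 11, "N": 1, "O": 2}          # N,N-dimethylalanine

candidates = {
    "NAC disulfide (2 NAC - 2H)": 2 * mono_mass(nac) - 2 * MASS["H"],
    "NAC-cysteine disulfide (NAC + Cys - 2H)": mono_mass(nac) + mono_mass(cys) - 2 * MASS["H"],
    "alanine": mono_mass(ala),
    "N,N-dimethylalanine": mono_mass(dma),
}
for name, m in candidates.items():
    print(f"{name}: [M+H]+ = {m + PROTON:.2f}")
# Prints roughly 325.05, 283.04, 90.06 and 118.09, which lie in the region of the observed
# signals at m/z 325.1, 283.1, ~90 and 118.1 discussed in the text.
```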
The LC-MS spectra for all the analyzed supernatants show a peak at the retention time of 9.1 min, for which m/z is 325.1. This signal corresponds to the [M + H] + product of NAC oxidation to the disulfide derivative. More polar compounds were found in the supernatants in comparison to those in the NAC solution without bacterial incubation ( Figure 3B). Small amounts of alanine (visible as a peak at m/z = 90.5 on the MS spectra, corresponding to [M + H] + alanine) were present in the supernatants. Surprisingly, the supernatants did not contain cysteine, which is the product of deacetylation of NAC. The peak retention time of A. tumefaciens was 2.7 min (m/z = 283.1) ( Figure 3C). The m/z corresponds to the disulfide derivative formed from the NAC and cysteine. A compound of The LC-MS spectra for all the analyzed supernatants show a peak at the retention time of 9.1 min, for which m/z is 325.1. This signal corresponds to the [M + H] + product of NAC oxidation to the disulfide derivative. More polar compounds were found in the supernatants in comparison to those in the NAC solution without bacterial incubation ( Figure 3B). Small amounts of alanine (visible as a peak at m/z = 90.5 on the MS spectra, corresponding to [M + H] + alanine) were present in the supernatants. Surprisingly, the supernatants did not contain cysteine, which is the product of deacetylation of NAC. The peak retention time of A. tumefaciens was 2.7 min (m/z = 283.1) ( Figure 3C). The m/z corresponds to the disulfide derivative formed from the NAC and cysteine. A compound of m/z = 118.1 with a retention time of 2.4 min was observed among the metabolites of E. soli. This was tentatively assigned to N,N-dimethylalanine ( Figure 3D). Due to the ability of the tested bacterial strains to degrade NAC, subsequent studies were conducted on NAC polymer derivatives with different structures: ladder-like (NAC-Si-1) and linear (NAC-Si-2). Two bacterial strains were used, A. hydrophila and C. freundii. These exhibited the best adhesive abilities, comparable resistance to NAC as well as a capacity for NAC degradation. Growth Inhibition by NAC and NAC-Grafted Polymers The antibacterial activities of NAC and of the polymers NAC-Si-1 and NAC-Si-2 were evaluated in two kinds of culture media: minimal M1 and enriched M2 (Figure 4). [15] or cysteine dioxygenase (CDO) (EC 1.13.11.20) [14]. For example, CDS and CDSH activities have been found in Salmonella enterica and E. coli, and CDO is common among species within the phyla of Actinobacteria, Firmicutes and Proteobacteria. These enzymes differ in terms of action: CDO irreversibly oxidizes the sulfhydryl group of cysteine to cysteinesulfinic acid, whereas CDS and CDSH, which appear to be the major cysteine-degrading agents, are sulfide-producing enzymes [14,15]. Due to the ability of the tested bacterial strains to degrade NAC, subsequent studies were conducted on NAC polymer derivatives with different structures: ladder-like (NAC-Si-1) and linear (NAC-Si-2). Two bacterial strains were used, A. hydrophila and C. freundii. These exhibited the best adhesive abilities, comparable resistance to NAC as well as a capacity for NAC degradation. Growth Inhibition by NAC and NAC-Grafted Polymers The antibacterial activities of NAC and of the polymers NAC-Si-1 and NAC-Si-2 were evaluated in two kinds of culture media: minimal M1 and enriched M2 (Figure 4). Aqueous solutions of NAC and NAC-Si-1 were used at concentrations of 0.25% (w/v). Due to its limited solubility, NAC-Si-2 was applied at 0.05% (w/v). 
After 48 h incubation in minimal M1 medium, a significant reduction in bacterial growth was observed only in the case of native NAC. In enriched M2 medium, the strains were inhibited only slightly in the presence of NAC and its derivatives. Growth inhibition was similar for both tested strains. These results show that the wildbacterial isolates are more resistant to NAC, NAC-Si-1 and NAC-Si-2 in M2 medium than the model collection strains E. coli and S. aureus described in our previous report [23]. Nevertheless, in M2 medium, the activity of NAC-Si-2 was similar to that of NAC-Si-1 at a 5-fold lower concentration. Aqueous solutions of NAC and NAC-Si-1 were used at concentrations of 0.25% (w/v). Due to its limited solubility, NAC-Si-2 was applied at 0.05% (w/v). After 48 h incubation in minimal M1 medium, a significant reduction in bacterial growth was observed only in the case of native NAC. In enriched M2 medium, the strains were inhibited only slightly in the presence of NAC and its derivatives. Growth inhibition was similar for both tested strains. These results show that the wild-bacterial isolates are more resistant to NAC, NAC-Si-1 and NAC-Si-2 in M2 medium than the model collection strains E. coli and S. aureus described in our previous report [23]. Nevertheless, in M2 medium, the activity of NAC-Si-2 was similar to that of NAC-Si-1 at a 5-fold lower concentration. Biofilm Formation on Glass Modified with NAC Polymers A significant reduction in bacterial biofilm formation after 6 days of incubation in minimal M1 medium was observed only in the case of polymer NAC-Si-2. A similar, 10-fold reduction was observed with both tested strains ( Figure 5). The NAC-Si-1 polymer proved less effective at reducing bacterial adhesion. This may be due to the differences in wettability of the polymers. Numerous publications have investigated the relationships between the surface properties of materials and biofilm colonization, although some details remain unclear. Bacterial adhesion depends not only on the bacterial strain but also on the surface free energy of the colonized support. Generally, larger specific surface areas and better wetting qualities have been found to favor bacterial adhesion [43]. Therefore, evaluating polymer wettability can give important information regarding the antibiofilm properties of materials. The surface free energies of thin films of NAC-Si-1 and NAC-Si-2 on glass were compared to that of native glass. The surfaces covered with polymers were more hydrophilic than that of the native glass support, which had a surface free energy of 180 mJ/m 2 , in comparison to 240 mJ/m 2 and 380 mJ/m 2 for NAC-Si-1 and NAC-Si-2, respectively. The specific structure of NAC-Si-2 is responsible for its particular properties in thin films and great hydrophilicity, despite the presence of hydrophobic methyl groups on Si atoms. Moreover, the solubility of NAC-Si-2 in water is poorer than that of NAC-Si-1. As we have seen, the greater flexibility of the single siloxane chain of NAC-Si-2 facilitates the rearrangement of macromolecules in the coating, so the hydrophilic groups in the topmost layer of the film may be exposed to the polar environment. In our previous studies, it was shown that NAC-Si-2 'catches and holds' bacterial cells, and this can impede biofilm development [23]. Biofilm Eradication by NAC and NAC-Grafted Polymers The ability of the tested polymers to eradicate biofilms was equally interesting. 
We studied eradication using different kinds of biofilm treatment: distilled water, 1% (w/v) solutions (NAC, NAC-Si-1) and suspensions (NAC-Si-2) ( Figure 6). It was noted that the treatment of bacterial biofilms with NAC or its polymers at a concentration of 1% (w/v) led to a significant decrease in RLU values compared to the control samples. However, the application of NAC-Si-2 polymer seems to be the best approach to eradication [23]. The NAC-Si-1 polymer proved less effective at reducing bacterial adhesion. This may be due to the differences in wettability of the polymers. Numerous publications have investigated the relationships between the surface properties of materials and biofilm colonization, although some details remain unclear. Bacterial adhesion depends not only on the bacterial strain but also on the surface free energy of the colonized support. Generally, larger specific surface areas and better wetting qualities have been found to favor bacterial adhesion [43]. Therefore, evaluating polymer wettability can give important information regarding the antibiofilm properties of materials. The surface free energies of thin films of NAC-Si-1 and NAC-Si-2 on glass were compared to that of native glass. The surfaces covered with polymers were more hydrophilic than that of the native glass support, which had a surface free energy of 180 mJ/m 2 , in comparison to 240 mJ/m 2 and 380 mJ/m 2 for NAC-Si-1 and NAC-Si-2, respectively. The specific structure of NAC-Si-2 is responsible for its particular properties in thin films and great hydrophilicity, despite the presence of hydrophobic methyl groups on Si atoms. Moreover, the solubility of NAC-Si-2 in water is poorer than that of NAC-Si-1. As we have seen, the greater flexibility of the single siloxane chain of NAC-Si-2 facilitates the rearrangement of macromolecules in the coating, so the hydrophilic groups in the topmost layer of the film may be exposed to the polar environment. In our previous studies, it was shown that NAC-Si-2 'catches and holds' bacterial cells, and this can impede biofilm development [23]. Biofilm Eradication by NAC and NAC-Grafted Polymers The ability of the tested polymers to eradicate biofilms was equally interesting. We studied eradication using different kinds of biofilm treatment: distilled water, 1% (w/v) solutions (NAC, NAC-Si-1) and suspensions (NAC-Si-2) ( Figure 6). It was noted that the treatment of bacterial biofilms with NAC or its polymers at a concentration of 1% (w/v) led to a significant decrease in RLU values compared to the control samples. However, the application of NAC-Si-2 polymer seems to be the best approach to eradication [23]. However, microscopic observations ( Figure 7) suggest that the process of eradication is based on interactions at the interface between the biofilm and the macromolecules. Analysis using SEM of the glass surface following eradication processes showed that the less water-soluble polymer, NAC-Si-2, has a tendency to 'clump and hold' bacterial cells, which may result in very low luminometric measurements (due to lack of cell availability for intracellular ATP testing). Moreover, this effect was probably the cause of the lack of cell fluorescence when we attempted to determine the metabolic state of the bacterial cells by fluorescence microscopy. Groups of tightly packed bacterial cells were also noticed ( Figure 7B-4). This particular feature of the polysiloxane explains the lack of fluorescence emission from bacterial cells eradicated with NAC-Si-2. 
Therefore, it can be concluded that the polymer NAC-Si-1, with better water solubility, is more effective against mature bacterial biofilms. Figure 6. Effects of water, NAC and polymers NAC-Si-1 and NAC-Si-2 on eradication of biofilms formed by A. hydrophila and C. freundii. Results after 1 h treatment compared to that of the control sample (10-day biofilm without NAC or polymer derivatives). Values show the mean ± standard deviation (SD, n = 3). Values with different letters are statistically different: b, 0.005 < p < 0.05; c, p < 0.005. Isolation of Bacterial Biofilms Formed in Drinking Water Systems In total, 20 samples of biofilms formed in industrial drinking water systems were subjected to microbiological analysis. For each biofilm sample, at least three plates with Plate Count Agar (PCA, Merck, Darmstadt, Germany) were inoculated by swabbing. All the plates were incubated at 25 • C for 5 days. At least two characteristic colonies representing each morphotype were picked up from the agar plates, restreaked several times to ensure purity and then maintained as pure cultures at 4 • C on wort agar slants. Identification of Microorganisms The bacterial cultures were analyzed first by light microscopy BX41 (Olympus, WA, USA), and then by using molecular methods and the following standard methods: Gram staining, the L-aminopeptidase test (Bactident ® Aminopeptidase, Merck, Germany), the oxidase test (Bactident ® Oxidase, Merck, Germany). The 16S rRNA genes were amplified through PCR, according to a technique described previously [44]. The nucleotide sequences were compared with 16S rRNA gene sequences obtained from the NCBI using the program BLASTN 2.2.27 + (https://blast. ncbi.nlm.nih.gov/Blast.cgi). The sequences were deposited in the GenBank database and assigned accession numbers. Culture Media Certified liquid media were prepared for the shake cultures. The minimal medium M1 (50-fold diluted BPW (Merck, Germany)) and enriched medium M2 (TSB (Merck, Germany)) were sterilized at 121 • C for 15 min. These media were chosen due to their sharp differences in nutrient content. The composition of the granulates used to prepare the M1 and M2 media was determined by elemental analysis (%wt of C, N and S) using a varioMICRO CUBE elemental analyzer (Elementar, Langenselbold, Germany). Table 2 presents the chemical characteristics of the culture broths used. The values are the means of two independent measurements (n = 2). Polymeric Materials Polymers, labelled NAC-Si-1 and NAC-Si-2, were prepared by grafting NAC onto the vinyl groups of their precursors, LPSQ-Vi and Silo-Vi, respectively, following a method described elsewhere [23,45,46]. The polymers with side vinyl groups were poly(vinylsilsequioxanes) (LPSQ-Vi, MnGPC = 1000 g/mol, PDI = 1.4) or poly(vinylmethylsiloxanes) (Silox-Vi, Gelest, MnGPC = 1800 g/mol, PDI = 1.3) terminated with trimethylsilyl groups (Figure 8). To study biofilm growth on surfaces coated with NAC grafted onto siloxane polymers, the surface energy of the polymers was estimated using the sessile droplet technique and the Owens-Wendt method. The polymers were cast on bare glass supports (commercially available microscopic slides) using a slit applicator (film thickness: 100 µm). The static, advancing and receding contact angles were measured immediately after the deposition of a liquid (water or anhydrous glycerol) onto the surface of the film. 
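The Owens-Wendt evaluation mentioned above can be illustrated with a short calculation: with two test liquids of known dispersive and polar surface tension components, the measured contact angles give a 2x2 linear system for the square roots of the solid's dispersive and polar components. The sketch below uses commonly cited literature values for water and glycerol and hypothetical contact angles; it is not the authors' analysis script, and the reference data they used may differ.

```python
import numpy as np

# Owens-Wendt (OWRK): (1 + cos(theta)) * gamma_L = 2 * (sqrt(gd_S * gd_L) + sqrt(gp_S * gp_L))
# With two liquids this becomes a linear system in x = sqrt(gd_S), y = sqrt(gp_S).

# Commonly cited literature components in mN/m (dispersive, polar); values vary slightly by source.
liquids = {
    "water":    {"gd": 21.8, "gp": 51.0},
    "glycerol": {"gd": 34.0, "gp": 30.0},
}
# Hypothetical measured contact angles in degrees (placeholders, not the paper's data).
angles = {"water": 40.0, "glycerol": 50.0}

A, b = [], []
for name, L in liquids.items():
    gamma_L = L["gd"] + L["gp"]
    theta = np.radians(angles[name])
    A.append([np.sqrt(L["gd"]), np.sqrt(L["gp"])])
    b.append((1 + np.cos(theta)) * gamma_L / 2)

x, y = np.linalg.solve(np.array(A), np.array(b))
gd_S, gp_S = x**2, y**2
print(f"dispersive: {gd_S:.1f} mN/m, polar: {gp_S:.1f} mN/m, total: {gd_S + gp_S:.1f} mN/m")
```

Since 1 mN/m equals 1 mJ/m², the resulting total can be compared directly with surface free energies quoted per unit area.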
The values and their standard deviations were estimated for the average of at least three measurements taken in different areas of the same sample [23]. Bacterial Growth in the Presence of NAC or NAC-Grafted Polymers The influence of NAC and its derivatives on the tested bacteria were determined using the standard two-fold dilution method [23]. One milliliter of cell suspension (1°McF) was mixed with 1 mL of M1 or M2 culture media with serial dilutions of the tested compounds. The concentrations of NAC (Merck, Germany) and its polymeric derivatives NAC-Si-1 and NAC-Si-2 ( Figure 8) were in the range of 0.0125-0.5% (w/v). Incubation was conducted for 48 h at 30 °C. Bacterial growth in the two types of culture media, M1 and M2, was measured densitometrically in McF units (densitometer DEN-1, Merck, Germany). Capacity of Bacteria to Degrade NAC The capacity of the tested bacteria to degrade NAC was evaluated after incubation of the bacterial cells with 0.5% (w/v) NAC solution in water. Two milliliters of NAC solution was mixed with two full loops of bacterial biomass from 24 h slant cultures. The suspensions were standardized (1°McF) and then incubated for 4 h at 30 °C. After incubation, the samples were centrifuged (6800× g, 4 °C) and the supernatants were collected. Solutions of the supernatants diluted with water (Milli-Q water) at 1:1 (v:v) were used for LC-MS analysis. High-performance liquid chromatography (HPLC) Bacterial Growth in the Presence of NAC or NAC-Grafted Polymers The influence of NAC and its derivatives on the tested bacteria were determined using the standard two-fold dilution method [23]. One milliliter of cell suspension (1 • McF) was mixed with 1 mL of M1 or M2 culture media with serial dilutions of the tested compounds. The concentrations of NAC (Merck, Germany) and its polymeric derivatives NAC-Si-1 and NAC-Si-2 ( Figure 8) were in the range of 0.0125-0.5% (w/v). Incubation was conducted for 48 h at 30 • C. Bacterial growth in the two types of culture media, M1 and M2, was measured densitometrically in McF units (densitometer DEN-1, Merck, Germany). Capacity of Bacteria to Degrade NAC The capacity of the tested bacteria to degrade NAC was evaluated after incubation of the bacterial cells with 0.5% (w/v) NAC solution in water. Two milliliters of NAC solution was mixed with two full loops of bacterial biomass from 24 h slant cultures. The suspensions were standardized (1 • McF) and then incubated for 4 h at 30 • C. After incubation, the samples were centrifuged (6800× g, 4 • C) and the supernatants were collected. Solutions of the supernatants diluted with water (Milli-Q water) at 1:1 (v:v) were used for LC-MS analysis. High-performance liquid chromatography (HPLC) was performed on an LC Dionex UltiMate 3000 (ThermoFisher Scientific, Waltham, USA), using a Kinetex Reversed Phase C18 column (100 × 4.6 mm). The analysis was performed with a gradient of 0.1% TFA in H 2 O (B) and 0.1% TFA in CH3CN (A), at a flow rate of 0.4 mL/min, with UV detection at 214, 220, 254 and 330 nm. Microscopic analysis was performed on an MS Bruker microOTOF-QIII (Bruker, Leipzig, Germany). Bacterial Adhesion in the Presence of NAC or Polymers-Grafted with NAC Minimal medium M1 (20 mL) was poured into sterile 25 mL Erlenmeyer flasks, into which sterile glass carriers (Star Frost 76 9 26 mm, Knittel Glass, Braunschweig, Germany) were placed vertically in such a way that half of the carrier was immersed in the medium while the other part remained outside [33]. 
The amount of inoculum was standardized [1 • McF] to obtain a cell concentration in the culture medium approximately equal to 5000-10,000 CFU/mL at the start of each experiment. The samples were incubated at 25 • C on a laboratory shaker (135 rpm) for 6 days. Analysis of the extent of cell adhesion to the glass carriers was performed using luminometry. For luminometric tests, the carrier was removed from the culture medium, rinsed with sterile distilled water and swabbed using free ATP sampling pens (Merck, Germany). The measurements were reported in RLU/cm 2 using a HY-LiTE 2 luminometer (Merck, Germany) [47,48]. Biofilm Eradication by NAC and NAC-Grafted Polymers Glass carriers with 10-day old biofilms were rinsed with sterile distilled water and transferred into flasks containing 1% (w/v) NAC or its polymeric derivatives, NAC-Si-1 or NAC-Si-2. The control sample was transferred directly into sterile water. The bacterial biofilms were then incubated at 25 • C for 1 h using a laboratory shaker at 130 rpm. The glass plates were removed from the incubation emulsions, rinsed with sterile distilled water and swabbed using sterile swabs for surface testing. The number of cells in the biofilm that had formed on the glass surface was determined using luminometry. The results were expressed in RLU/cm 2 [23]. The surface was also analyzed using scanning electron microscopy (SEM). Images were taken with a JSH 5500 LV scanning electron microscope (JEOL Ltd., Tokyo, Japan) in high-vacuum mode at an accelerated voltage of 10 kV or 15 kV. The samples were splutter-coated with a fine layer of gold, about 20 nm thick, using an ion coating JFC 1200 apparatus (JEOL Ltd., Japan) [49]. Statistical Methods for Biological Samples Means with standard deviations were calculated from the data obtained from three independent experiments. The mean values of the adhesion results were compared using one-way repeated measures analysis of variance (ANOVA; OriginPro 8.1, OriginLab Corp., Northampton, MA, USA). The results were compared to those for the control samples. Values with different letters presented in the figures show statistically significant differences: a, p ≥ 0.05; b, 0.005 < p < 0.05; c, p < 0.005. Conclusions In this study, NAC was confirmed to have antimicrobial and antibiofilm activity against the Gram-negative strains A. tumefaciens, A. hydrophila, C. freundii, E. soli, J. lividum and S. maltophilia, which are increasingly identified as opportunistic pathogens. Interestingly, these bacterial strains showed the ability to degrade NAC in water, suggesting that the action of NAC may be significantly limited in this environment, due to bacterial enzymatic activities. It is therefore hypothesized that the antibacterial and antibiofilm properties of NAC are multifactorial (i.e., dependent on the bacterial strain and crucial conditions in the environment). New hybrid polymers obtained by grafting NAC onto poly(vinylsilsesquioxanes) and poly(methylvinylsiloxanes) showed rather low antibacterial activity. However, they showed significant ability to eradicate mature biofilms. These novel antibacterial polymers are promising agents for antibiofilm strategies in industrial installations of water.
2019-04-27T13:03:52.650Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "db9c1b9246be96eeb1bac14591a7e318cfb8235a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/20/8/2011/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "db9c1b9246be96eeb1bac14591a7e318cfb8235a", "s2fieldsofstudy": [ "Biology", "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
119653343
pes2o/s2orc
v3-fos-license
Blowup of $H^1$ solutions for a class of the focusing inhomogeneous nonlinear Schr\"odinger equation In this paper, we consider a class of the focusing inhomogeneous nonlinear Schr\"odinger equation \[ i\partial_t u + \Delta u + |x|^{-b} |u|^\alpha u = 0, \quad u(0)=u_0 \in H^1(\mathbb{R}^d), \] with $0<b<\min\{2,d\}$ and $\alpha_\star\leq \alpha<\alpha^\star$ where $\alpha_\star =\frac{4-2b}{d}$ and $\alpha^\star=\frac{4-2b}{d-2}$ if $d\geq 3$ and $\alpha^\star = \infty$ if $d=1,2$. In the mass-critical case $\alpha=\alpha_\star$, we prove that if $u_0$ has negative energy and satisfies either $xu_0 \in L^2$ with $d\geq 1$ or $u_0$ is radial with $d\geq 2$, then the corresponding solution blows up in finite time. Moreover, when $d=1$, we prove that if the initial data (not necessarily radial) has negative energy, then the corresponding solution blows up in finite time. In the mass and energy intercritical case $\alpha_\star<\alpha<\alpha^\star$, we prove the blowup below ground state for radial initial data with $d\geq 2$. This result extends the one of Farah in \cite{Farah} where the author proved blowup below ground state for data in the virial space $H^1\cap L^2(|x|^2 dx)$ with $d\geq 1$. Introduction In this paper, we consider the Cauchy problem for the inhomogeneous nonlinear Schrödinger equation i∂ t u + ∆u + µ|x| −b |u| α u = 0, where u : R × R d → C, u 0 : R d → C, µ = ±1 and α, b > 0. The parameters µ = 1 (resp. µ = −1) corresponds to the focusing (resp. defocusing) case. The case b = 0 is the well-known nonlinear Schrödinger equation which has been studied extensively over the last three decades. The inhomogeneous nonlinear Schrödinger equation arises naturally in nonlinear optics for the propagation of laser beams, and it is of a form i∂ t u + ∆u + K(x)|u| α u = 0. (1.1) An easy computation shows u λ (0) Ḣγ = λ γ+ 2−b α − d 2 u 0 Ḣγ . Thus, the critical Sobolev exponent is given by Moreover, the (INLS) has the following conserved quantities: The well-posedness for the (INLS) was first studied by Genoud-Stuart in [13,Appendix] (see also [14]) by using the argument of Cazenave [3]. Note that the existence of H 1 solutions to (INLS) is shown by the energy method which does not use Strichartz estimates. More precisely, Genoud-Stuart showed that the focusing (INLS) with 0 < b < min{2, d} is well posed in H 1 : • locally if 0 < α < α ⋆ , • globally for any initial data if 0 < α < α ⋆ , • globally for small initial data if α ⋆ ≤ α < α ⋆ , where α ⋆ and α ⋆ are defined by In the case α = α ⋆ (L 2 -critical), Genoud in [16] showed that the focusing (INLS) with 0 < b < min{2, d} is globally well-posed in H 1 assuming u 0 ∈ H 1 and where Q is the unique nonnegative, radially symmetric, decreasing solution of the ground state equation Also, Combet-Genoud in [6] established the classification of minimal mass blow-up solutions for the focusing L 2 -critical (INLS). In the case α ⋆ < α < α ⋆ , Farah in [9] showed that the focusing (INLS) with 0 < b < min{2, d} is globally well-posedness in H 1 , d ≥ 1 assuming u 0 ∈ H 1 and where Q is the unique nonnegative, radially symmetric, decreasing solution of the ground state equation ( 1.8) He also proved that if u 0 ∈ H 1 ∩ L 2 (|x| 2 dx) =: Σ satisfies (1.7) and then the blow-up in H 1 must occur. Afterwards, Farah-Guzman in [10,11] proved that the above global solution is scattering under the radial condition of the initial data. 
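For the reader's convenience, the scaling computation referred to above can be written out explicitly. The formulas below are the standard ones for this equation and are stated here only as a reference; the constants follow the usual conventions and should be checked against the paper's own normalization.

```latex
% Scaling symmetry and critical Sobolev exponent for the INLS (standard conventions).
\[
  u_\lambda(t,x) = \lambda^{\frac{2-b}{\alpha}} u(\lambda^2 t, \lambda x), \qquad
  \|u_\lambda(0)\|_{\dot{H}^\gamma}
    = \lambda^{\gamma + \frac{2-b}{\alpha} - \frac{d}{2}} \|u_0\|_{\dot{H}^\gamma},
\]
so the scale-invariant Sobolev regularity is
\[
  \gamma_c = \frac{d}{2} - \frac{2-b}{\alpha},
\]
with $\gamma_c = 0$ (mass-critical) exactly when $\alpha = \alpha_\star = \frac{4-2b}{d}$ and
$\gamma_c = 1$ (energy-critical) when $\alpha = \alpha^\star = \frac{4-2b}{d-2}$, $d \geq 3$.
The conserved mass and energy read
\[
  M(u(t)) = \int_{\mathbb{R}^d} |u(t,x)|^2 \, dx, \qquad
  E(u(t)) = \frac{1}{2} \int_{\mathbb{R}^d} |\nabla u(t,x)|^2 \, dx
          - \frac{\mu}{\alpha+2} \int_{\mathbb{R}^d} |x|^{-b} |u(t,x)|^{\alpha+2} \, dx .
\]
```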
Note that the existence and uniqueness of solutions Q to the elliptic equations (1.6) and (1.8) were proved by Toland [27], Yanagida [30] and Genoud [15] (see also Genoud-Stuart [13]). Guzman in [18] used Strichartz estimates and the contraction mapping argument to establish the local well-posedness as well as the small data global well-posedness for the (INLS) in Sobolev space. Recently, the author in [7] improved the local well-posedness in H 1 of Guzman by extending the validity of b in the two and three dimensional spatial spaces. Note that the results of Guzman [18] and Dinh [7] about the local well-posedness of (INLS) in H 1 are a bit weaker than the one of Genoud-Stuart [13]. More precisely, they do not treat the case d = 1, and there is a restriction on the validity of b when d = 2 or 3. However, the local well-posedness proved in [18,7] provides more information on the solutions, for instance, one knows that the global solutions to the defocusing (INLS) satisfy u ∈ L p loc (R, W 1.q ) for any Schrödinger admissible pair (p, q). This property plays an important role in proving the scattering for the (INLS). Note also that the author in [7] pointed out that one cannot expect a similar local well-posedness result for (INLS) in H 1 as in [18,7] holds in the one dimensional case by using Strichartz estimates. In [7], the author used the so-called pseudo-conformal conservation law to show the decaying property of global solutions to the defocusing (INLS) by assuming the initial data in Σ (see before (1.9)). In particular, he showed that in the case α ∈ [α ⋆ , α ⋆ ), global solutions have the same decay as the solutions of the linear Schrödinger equation, that is for 2 This allows the author proved the scattering in Σ for a certain class of the defocusing (INLS). Later, the author in [8] made use of the classical Morawetz inequality and an argument of [29] to derive the decay of global solutions to the defocusing (INLS) with the initial data in H 1 . Using the decaying property, he was able to show the energy scattering for a class of the defocusing (INLS). We refer the reader to [7,8] for more details. The main purpose of this paper is to show the finite time blowup for the focusing (INLS). Thanks to the well-posedness of Genoud-Stuart [13], we only expect blowup in H 1 when α ⋆ ≤ α < α ⋆ which correspond to the mass-critical and the mass and energy intercritical cases. Note that the local well-posedness for the energy-critical (INLS), i.e. α = α ⋆ is still an open problem. Our first result is the following finite time blowup for the (INLS) in the mass-critical case α = α ⋆ . Let u 0 ∈ H 1 be radial and satisfy either E(u 0 ) < 0 or, if E(u 0 ) ≥ 0, we suppose that and [9] that if the initial data u 0 satisfies (1.10) and ∇u 0 L 2 u 0 σ L 2 < ∇Q L 2 Q σ L 2 , then the corresponding solution exists globally in time. This paper is organized as follows. In Section 2, we recall the sharp Gagliardo-Nirenberg inequality related to the focusing (INLS) due to Farah [9]. In Section 3, we derive the standard virial identity and localized virial estimates for the focusing (INLS). We will give the proof of Theorem 1.1 in Section 4. Finally, the proof of Theorem 1.3 will be given in Section 5. where Q is the unique non-negative, radially symmetric, decreasing solution to the elliptic equation Virial identities In this section, we derive virial identities and virial estimates related to the focusing (INLS). 
Given a real valued function a, we define the virial potential by By a direct computation, we have the following result (see e.g. [26,Lemma 5.3].) 2) and Using this fact, we immediately have the following result. Corollary 3.2. If u is a smooth-in-time and Schwartz-in-space solution to the focusing (INLS), then we have A direct consequence of Corollary 3.2 is the following standard virial identity for the (INLS). Lemma 3.3. Let u 0 ∈ H 1 be such that |x|u 0 ∈ L 2 and u : I × R d → C the corresponding solution to the focusing (INLS). Then, |x|u ∈ C(I, L 2 ). Moreover, for any t ∈ I, Proof. The first claim follows from the standard approximation argument, we omit the proof and refer the reader to [3, Proposition 6.5.1] for more details. The identity (3.5) follows from Corollary 3.2 by taking a(x) = |x| 2 . In order to prove the blowup for the focusing (INLS) with radial data, we need localized virial estimates. To do so, we introduce a function θ : Note that the precise constant here is not important. For R > 1, we define the radial function It is easy to see that . Then for any ǫ > 0 and any t ∈ I, (3.9) Remark 3.5. 1. The condition d ≥ 2 comes from the radial Sobolev embedding. This is due to the fact that radial functions in dimension 1 do not have any decaying property. The restriction 0 < α ≤ 4 comes from the Young inequality below. 2. If we consider α ⋆ ≤ α ≤ α ⋆ , then there is a restriction on the validity of α in 2D. More precisely, we need α ⋆ ≤ α ≤ 4 when d = 2. Using (3.11) with s = 1 2 and the conservation of mass, we estimate When α = 4, we are done. Let us consider 0 < α < 4. To do so, we recall the Young inequality: for a, b non-negative real numbers and p, q positive real numbers satisfying 1 p + 1 q = 1, then for any ǫ > 0, ab ǫa p + ǫ − q p b q . Applying the Young inequality for a = ∇u(t) Note that the condition 0 < α < 4 ensures 1 < p, q < ∞. The proof is complete. In the mass-critical case α = α ⋆ , we have the following refined version of Lemma 3.4. The proof of this result is based on an argument of [22] (see also [2]). Then for any ǫ > 0 and any t ∈ I, where Proof. We first notice that where χ 1,R and χ 2,R are as in (3.13). Using the radial Sobolev embedding (3.11) with s = 1 2 , the conservation of mass and the fact |χ 2,R | 1, we estimate We next apply the Young inequality with p = 2d 2−b and q = 2d 2d−2+b to get for any ǫ > 0 Moreover, using (3.6), (3.7) and (3.8), it is easy to check that |∇(χ Combining the above estimates, we prove (3.12). To prove the blowup in the 1D mass-critical case α = 4 − 2b, we need the following version of localized virial estimates due to [23]. Let ϑ be a real-valued function in W 3,∞ satisfying (3.14) Set 16) for any t ∈ I, then there exists C > 0 such that Moreover, χ 2 ≤ C for some constant C > 0. We thus get (3.22) by taking a 1 > 0 small enough. The proof is complete. Mass-critical case α = α ⋆ In this section, we will give the proof of Theorem 1.1. 4.2. The case d ≥ 2, E(u 0 ) < 0 and u 0 is radial. We use the localized virial estimate (3.12) to have where If we choose a suitable radial function ϕ R defined by (3.7) so that for a sufficiently small ǫ > 0, then by choosing R > 1 sufficiently large depending on ǫ, we see that for any t in the existence time. This shows that the solution u blows up in finite time. It remains to find ϕ R so that (4.1) holds true. To do so, we follow the argument of [22]. Let us define a function It is easy to see that θ satisfies (3.6). We thus define ϕ R as in (3.7). 
We show that (4.1) holds true for this choice of ϕ R . Using the fact we have By the definition of ϕ R , Since 0 < r/R − 1 < 1/ √ 3, we can choose ǫ > 0 small enough so that (4.1) is satisfied. When r > (1 + 1/ √ 3)R, we see that ϑ ′ (r/R) ≤ 0, so χ 1,R (r) = 2(2 − ϕ ′′ R (r)) ≥ 4. We also have that χ 2,R (r) ≤ C for some constant C > 0. Thus by choosing ǫ > 0 small enough, we have (4.1). 4.3. The case d = 1 and E(u 0 ) < 0. We follow the argument of [23]. We only consider the positive time, the negative one is treated similarly. We argue by contradiction and assume that the solution exists for all t ≥ 0. We divide the proof in two steps. Step 1. We assume that the initial data satisfies where C, N, θ and a 0 are defined as in Lemma 3.8. We will show that if u 0 satisfies (4.2) and (4.3), then the corresponding solution satisfies (3.16) for all t ≥ 0. Since θ(x) ≥ 1 for |x| > 1 and δ > 0, we have from (4.3) that Let us define On the other hand, u(t) satisfies the assumption of Lemma 3.8 on [0, T 0 ). We thus get from Lemma 3.8 and (4.2) that for all 0 ≤ t < T 0 . By the definition of θ, it is easy to see that θ ≥ ϑ 2 /4 = (∂ x θ) 2 /4 for any x ∈ R. Thus, (4.6) yields for all 0 ≤ t < T 0 . By (4.3) and the fact that θ ≥ 1 on |x| > 1, we obtain for all 0 ≤ t < T 0 . By the continuity of u(t) in L 2 , we get This contradicts with (4.5). Therefore, the assumptions of Lemma 3.8 are satisfied with I = [0, ∞) and we get d 2 dt 2 V θ (t) ≤ −δ < 0, for all t ≥ 0. This is impossible. Hence, if the initial data u 0 satisfies (4.2) and (4.3), then the corresponding solution must blow up in finite time. Remark 4.2. We now show that the condition E(u 0 ) < 0 is sufficient for the blowup but it is not necessary. Let E > 0. We find data u 0 ∈ H 1 so that E(u 0 ) = E and the corresponding solution u blows up in finite time. We follow the standard argument (see e.g. [3,Remark 6.5.8]). Using the standard virial identity (3.5) with α = α ⋆ , we have We see that if f (t) takes negative values, then the solution must blow up in finite time. In order to make f (t) takes negative values, we need Now fix θ ∈ C ∞ 0 (R d ) a real-valued function and set ψ(x) = e −i|x| 2 θ(x). We see that ψ ∈ C ∞ 0 (R d ) and Im ψx · ∇ψdx = −2 |x| 2 θ 2 (x)dx < 0. We now set Let λ, µ > 0 be chosen later and set u 0 (x) = λψ(µx). We will choose λ, µ > 0 so that E(u 0 ) = E and (4.15) holds true. A direct computation shows Thus, the conditions E(u 0 ) = E and (4.15) yield and choose It is obvious that (4.17) is satisfied. Condition (4.16) implies This holds true by choosing a suitable value of λ. The case xu 0 ∈ L 2 . By the standard virial identity (3.5) and the conservation of energy, we have Here dα − 4 + 2b > 0 in the intercritical case α ⋆ < α < α ⋆ . The standard convexity argument implies that the solution blows up in finite time. The case u 0 is radial. We use Lemma 3.4 together with the conservation of energy to have for any ǫ > 0, for any t in the existence time. Since dα − 4 + 2b > 0, we take R > 1 large enough when α = 4; and take ǫ > 0 small enough and R > 1 large enough depending on ǫ when 0 < α < 4 to have that d 2 dt 2 V ϕR (t) ≤ 2(dα + 2b)E(u 0 ) < 0, for any t in the existence time. This implies that the solution must blow up in finite time. 5.2. The case E(u 0 ) ≥ 0. In this case, we assume that the initial data u 0 satisfies (1.10) and (1.11). We first show (1.13). 
By the definition of energy and multiplying both sides of E(u(t)) by M (u(t)) σ , the sharp Gagliardo-Nirenberg inequality (2.1) yields Moreover, using (2.4) and (2.5), it is easy to see that We also have that f is increasing on (0, x 0 ) and decreasing on (x 0 , ∞), where Using again (2.4) and (2.5), we see that x 0 is exactly ∇Q L 2 Q σ L 2 . By (5.1), the conservation of mass and energy together with the assumption (1.10) imply f ( ∇u(t) L 2 u(t) σ L 2 ) ≤ E(u 0 )M (u 0 ) σ < E(Q)M (Q) σ . Using this, (5.2) and the assumption (1.11), the continuity argument shows ∇u(t) L 2 u(t) σ L 2 > ∇Q L 2 Q σ L 2 , for any t as long as the solution exists. This proves (1.13). The case xu 0 ∈ L 2 . The finite time blowup for the intercritical (INLS) with initial data in H 1 ∩ L 2 (|x| 2 dx) satisfying (1.10) and (1.11) was proved in [9]. For the sake of completeness, we recall some details. By the standard virial identity (3.5) and (5.8), This shows that the solution blows up in finite time. The case u 0 is radial. We first note that under the assumptions of Theorem 1.3, we can apply Lemma 3.4 to obtain for any ǫ > 0, for any t in the existence time. Taking R > 1 large enough when α = 4, and ǫ > 0 small enough and R > 1 large enough depending on ǫ when 0 < α < 4, we learn from (5.8) that d 2 dt 2 V ϕR (t) ≤ −c/2 < 0. This shows that the solution must blow up in finite time. Combining two cases, we prove Theorem 1.3.
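Both blowup proofs above end by invoking "the standard convexity argument"; for completeness, here is a sketch of that step (Glassey's argument), together with the ε-form of Young's inequality used repeatedly in the localized virial estimates. This is a generic sketch, not a restatement of the author's proofs.

```latex
% (i) Glassey-type convexity argument (sketch). If the solution were global and the
% (localized) virial quantity V(t) >= 0 satisfied V''(t) <= -c for some c > 0 and all t >= 0,
% then integrating twice gives
\[
  0 \;\le\; V(t) \;\le\; V(0) + V'(0)\,t - \tfrac{c}{2}\,t^{2} \;\longrightarrow\; -\infty
  \quad (t \to \infty),
\]
% a contradiction; hence the maximal existence time is finite and, by the blowup alternative
% of the local theory, the H^1 norm of u(t) blows up as t approaches it.
%
% (ii) epsilon-form of Young's inequality used to absorb the gradient term: for a, b >= 0,
% 1/p + 1/q = 1 and any eps > 0,
\[
  ab \;=\; \bigl(\varepsilon^{1/p}a\bigr)\bigl(\varepsilon^{-1/p}b\bigr)
  \;\le\; \tfrac{1}{p}\,\varepsilon\,a^{p} + \tfrac{1}{q}\,\varepsilon^{-q/p}\,b^{q}
  \;\le\; \varepsilon\,a^{p} + \varepsilon^{-q/p}\,b^{q},
\]
% since 1/p and 1/q are at most 1.
```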
2018-04-20T17:21:06.000Z
2017-11-22T00:00:00.000
{ "year": 2018, "sha1": "fc4f7286822b1f5714b9dd6d28ea7fc08ce4e5b4", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "fc4f7286822b1f5714b9dd6d28ea7fc08ce4e5b4", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
202269028
pes2o/s2orc
v3-fos-license
Catharsis as a therapy: an overview on health and human development Received Date: Jan 23, 2019 / Accepted Date: May 02, 2019 / Published Date: May 04, 2019 Abstract Catharsis has been recognized as a healing, cleansing, and transforming experience throughout history, and has been used in cultural healing practices, literature, drama, religion, medicine, and psychology. It reviews the concept of catharsis, Catharsis in medicine, religion, and cultural rituals, Effect of catharsis on emotional recovery, use catharsis as their core technique to achieve positive therapeutic change. Based on the reviewed it recommended that it helps in facilitating forgiveness among individual and lead to permanent resolution. Introduction The word catharsis is derived from the Greek word which is translated as cleansing or purification. Most of the definitions emphasize two essential components of catharsis: the emotional aspect (strong emotional expression and processing) and the cognitive aspect of catharsis (insight, new realization, and the unconscious becoming consciousness) and as a result -positive change. Aristotle defined catharsis as purging of the spirit of morbid and base ideas or emotions by witnessing the playing out of such emotions or ideas on stage [1]. Breuer and Freud described catharsis as an involuntary, instinctive body process, for example crying [2]. Schultz and Schultz (2004) followed the psychodynamic tradition and defined catharsis as the process of reducing or eliminating a complex by recalling it to conscious awareness and allowing it to be expressed [3]. The American Psychological Association [4] also associates catharsis with the psychodynamic theory and defines it as the discharge of affects connected to traumatic events that had previously been repressed by bringing these events back into consciousness and reexperiencing them. Scheff [5] Page: 32 www.raftpubs.com discharge and cognitive awareness which he called distancing, when the person experiencing catharsis is maintaining the 'observer' role rather than the participant, which involves a sense of control and full alertness in person's immediate environment. Scheff also indicated that it is most common that towards the end of somatic-emotional discharge the detailed, vivid recalling of forgotten events and insights often occur. There is a certain amount of confusion and misunderstanding about the definition and interpretation of catharsis: some of the researchers perceive catharsis as emotional discharge, equating it with the behavior of expressing strong emotions, some emphasize the cognitive aspect and the new awareness that emerges after reliving traumatic events from the past. Catharsis in medicine, religion, and cultural rituals The idea of catharsis in medicine is similar to that in literature. It means purging, purification, although in a medical sense this implies a physical release, for example, expectoration of the sputa implies healing of cold. It was not until Hippocrates, that menstruation, diarrhea, and vomiting were regarded as cathartic processes [5]. Hippocrates associated catharsis with healing, because it's role of a "purification agent" affecting the course of disease (both physical and mental). The spiritual meaning of catharsis is very much the same: discharging everything harmful from one's mind and heart, so that one can become pure. The ritual of purification usually implies that a person had engaged in some prohibited actions or sins. 
Catharsis helped to return to the previous status -before the violation of generally accepted rules and norms. In various religious practices, the action of purification is fulfilled with the help of water, blood, fire, change of clothes, and sacrifice. The rituals are often considered as part of a person's healing from the devastating effect of guilt. Further, the key mission of mysticism is to understand the return or unification of one's soul with God. The ritual of baptism (purifying person with water) in Christianity has cathartic meaning of revival. Confession has the same underlying assumption, and it is similar to the concept of cathartic treatment introduced by Freud and Breuer, because confession involves the recall, revealing, and release of forbidden thoughts, actions, and repressed emotions. Spiritual and cultural rituals have been known throughout the history to help people process collective stress situations, such as death or separation, or major life changing events like rites of passages, weddings, and such. Traditional societies have ceremonies of mourning, funeral rites, and curing rituals, which most often include cathartic activities, such as crying, weeping, drumming, or ecstatic dance [6]. Similarly, modern forms of mass entertainment can provoke massive cathartic experiences, for example, movies like the Passion of the Christ directed by Mel Gibson, attracted mass audiences and became the socially acceptable way for collective crying. Another good example is the popularity of horror movies because they evoke intense fear emotions. It is apparent that collective forms of emotional reexperiencing and discharge in social, cultural, spiritual, or athletic events are highly popular, attract massive audiences and are known to provide relief and increase group cohesiveness and solidarity [3]. Effect of catharsis on emotional recovery This cathartic release of emotions is often believed to be therapeutic for affected individuals. Many therapeutic mechanisms have been seen to aid in emotional recovery. One example is interpersonal emotion regulation, in which listeners help to modify the affected individual's affective state by using certain strategies Reeck; Ames; Ochsner, [7] Catharsis as a therapy: an overview on health and human development JPHSM: May-2019: Page No: 31-35 Page: 33 www.raftpubs.com Expressive writing is another common mechanism for catharsis. Frattaroli published a meta-analysis suggesting that written disclosure of information, thoughts, and feelings enhances mental health [8]. However, other studies question the benefits of social catharsis. Finkenauer and colleagues found that non-shared memories were no more emotionally triggering than shared ones. Other studies have also failed to prove that social catharsis leads to any degree of emotional recovery [9]. Zech and Rimé asked participants to recall and share a negative experience with an experimenter. When compared with the control group that only discussed unemotional topics, there was no correlation between emotional sharing and emotional recovery [10]. Some studies even found adverse effects of social catharsis. Contrary to the Frattaroli study, Sbarra and colleagues found expressive writing to greatly impede emotional recovery following a marital separation [11]. Similar findings have been published regarding trauma recovery. A group intervention technique is often used on disaster victims to prevent trauma-related disorders. 
However, meta-analysis showed negative effects of this cathartic "therapy [11]. Uses of catharsis With the growth of behaviorism, the role of catharsis as a beneficial psychological technique was underestimated until Moreno introduced Psychodrama in the1930s. Moreno used the concept of catharsis as Aristotle and Freud suggested it and developed it into a new psychotherapeutic modality. Reenacting scenes from one's past, dreams, or fantasies helps the client bring the unconscious conflicts into consciousness, eventually experience catharsis, and thus achieve relief and positive change [13]. According to Moreno, catharsis helps to reunite the separated (unconscious) parts of the psyche and the conscious self [14]. Although there are a lot of ways how unconscious may be expressed, for instance, delusions, forgetting, and dreams [15], such expression is mild, and does not allow the release -it is rather an indication that the problem exists. Therefore, catharsis was successfully used in psychodrama to reveal deep and long-standing negative emotions and neutralize the negative impact of related traumatic experiences [14]. In the early 1970s, Janov [16] elaborated on Freud's ideas and claimed that if infants and children are not able to process painful experiences fully (cry, sob, wail, scream, etc.,) in a supported environment, their consciousness 'splits', pain gets suppressed to the unconscious and reappears in neurotic symptoms and disorders in later life. Painful experiences become 'stored' and need to be 'released' in therapy by reliving and discharging suppressed feelings. Janov claimed that cathartic emotional processing of painful early life experiences and the process of connecting them with the memory of the original event could fully free clients from neurotic symptoms. Janov argued that cognitive remembering of suppressed traumatic experiences is not enough for healing to occur. As it was practiced in the early 1970s, Primal therapy seemed to be focused on emotional discharge without appropriate safety and distancing. Therefore, it appeared to be damaging for some clients, especially for those with severe mental illness, personality disorders, or other more severe conditions when, for instance client's ego strength is not sufficient to process strong feelings, which might lead to disintegration, or if client already experiences confusion between present and past realities. Therefore, Primal therapy was perceived as dangerous and rejected by the majority of mental health professionals. Greenberg [17] concluded that emotional arousal and processing within a supportive therapeutic relationship is the core element for Page: 34 www.raftpubs.com positive change in therapy. He emphasized the cognitive aspect of catharsis and the need to understand and make sense of emotions. Greenberg argued that awareness, healthy emotional expression, and cognitive integration of emotions combined produce positive change. It appears that Emotion-Focused therapy appropriately addressed the cognitive component of catharsis and safety issues. Emotion-Focused therapy developed techniques to help clients recognize and validate their strong feelings, and coached and supported clients to express hurtful emotions safely, as well as, to find meaning for their experiences. Emotion-Focused therapy employs empty chair technique, introduced by gestalt therapy, for clarification of inner conflicts, as well as for finishing unresolved relationship issues from the past. 
Greenberg, Warwar, and Malcolm [18] proved that Emotion-Focused therapy using empty chair technique was more effective than psychoeducation in facilitating forgiveness and 'letting go' for individuals who had painful emotional experiences with their significant others. Empty chair technique can be a useful a tool to facilitate catharsis, as well as to help clients to increase distance from their inner conflicts and overwhelming emotions, for example by asking them to sit in a third chair and assume the role of an observer or mediator. Catharsis brought about through drama and music was a means of producing a moderation and balance of the emotions, and of connecting the passions with reason and wisdom. 2. It helps in facilitating forgiveness among individual and lead to permanent resolution. 3. Catharsis is an overwhelming spiritual experience of repentance and renewal. 4. The responsiveness increased levels of intimacy and satisfaction within the relationship
2019-09-11T07:07:19.636Z
2019-05-04T00:00:00.000
{ "year": 2019, "sha1": "a2e5558015054dbe683e0596b1c68ad7734c72ff", "oa_license": "CCBY", "oa_url": "https://doi.org/10.36811/jphsm.2019.110007", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bb26f927ca6374315319db7020fbdfb95d5cd464", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
1608476
pes2o/s2orc
v3-fos-license
Rhinovirus infection induces cytotoxicity and delays wound healing in bronchial epithelial cells Background Human rhinoviruses (RV), the most common triggers of acute asthma exacerbations, are considered not cytotoxic to the bronchial epithelium. Recent observations, however, have questioned this knowledge. The aim of this study was to evaluate the ability of RV to induce epithelial cytotoxicity and affect epithelial repair in-vitro. Methods Monolayers of BEAS-2B bronchial epithelial cells, seeded at different densities were exposed to RV serotypes 1b, 5, 7, 9, 14, 16. Cytotoxicity was assessed chromatometrically. Epithelial monolayers were mechanically wounded, exposed or not to RV and the repopulation of the damaged area was assessed by image analysis. Finally epithelial cell proliferation was assessed by quantitation of proliferating cell nuclear antigen (PCNA) by flow cytometry. Results RV1b, RV5, RV7, RV14 and RV16 were able to induce considerable epithelial cytotoxicity, more pronounced in less dense cultures, in a cell-density and dose-dependent manner. RV9 was not cytotoxic. Furthermore, RV infection diminished the self-repair capacity of bronchial epithelial cells and reduced cell proliferation. Conclusion RV-induced epithelial cytotoxicity may become considerable in already compromised epithelium, such as in the case of asthma. The RV-induced impairment on epithelial proliferation and self-repair capacity may contribute to the development of airway remodeling. Background The bronchial epithelium plays a unique role as a protective physical and functional barrier between external environment and underlying tissues. As a result of this role it is frequently injured and epithelial integrity is damaged. A repair process starts quickly which includes migration of the remaining basal airway epithelial cells to repopulate damaged areas, and subsequent proliferation and differentiation until epithelial integrity has been restored [1,2]. Epithelial damage is a key feature of asthma. As a result of inflammation, a large portion of columnar epithelial cells shed and form Creola bodies, detected in sputum and during bronchoscopy in asthmatic patients [3]. This cycle of damage and repair has been proposed as a key mechanism leading to thickening of the airway wall, and other pathologic alterations collectively characterized as airway remodeling [4], which in turn has been associated with incompletely reversible airway narrowing, bronchial hyper-responsiveness and asthma symptoms [5]. Many factors can be cytotoxic to the bronchial epithelium, including eosinophil products [6], allergens [7] and respiratory viruses. Virus-induced cytotoxicity has been well documented for the majority of these agents, including influenza, parainfluenza, adenovirus and respiratory syncytial virus (RSV) [8]. In contrast, human rhinoviruses (RVs), although the most preponderant viruses associated with acute asthma exacerbations [9], have been shown to induce minimal, if any, cytotoxicity [10][11][12]. We have recently shown that RVs are able to replicate in human primary bronchial epithelial cells [13]. An unexpected finding in that study was that exposure of sparely seeded cell monolayers resulted in a considerable RV-specific cytopathic effect (CPE). RV-induced CPE was also reported in another study, in which case it was attributed to specific RV serotypes [14]. Based on the above, we hypothesized that RV infection may be conditionally able to affect epithelial cell viability and life-death cycle. 
Therefore, in this study we used BEAS-2B cells, a well-established in-vitro lower respiratory epithelium model of RV infection [15,16], used in parallel studies with primary bronchial epithelial cells [17,18]as well in cell death studies [19]to systematically investigate the ability of RV to induce cytotoxicity in bronchial epithelial cells. Furthermore, the effect of RV on an in-vitro model of epithelial wound repair was assessed. Cell cultures BEAS-2B cells, a human continuous bronchial epithelial cell line and Ohio-HeLa cells (obtained from ATCC and the MRC Cold Unit, UK, respectively) were cultured in Eagle's minimal essential medium (E-MEM) buffered with NaHCO 3 and supplemented with 10% (v/v) fetal bovine serum (FBS) and 40 µg/ml of gentamycin, in a humified 5% CO 2 incubator. Cells were spilt twice weekly. Primary human bronchial epithelial cells (HBECs), initially deriving from an adult non-asthmatic volunteer in the course of another ongoing study, were available frozen in liquid nitrogen. They were isolated as described earlier [13]. Cells were rapidly thawed and cultured on plates pre-coated with collagen type-I (Nutacon, Holland), submerged in Clonetics BEGM (Cambrex, ML, USA). Medium was replaced daily. Cells were used at passage 2, at a confluence of 50%. It should be pointed out that cells in these submerged cultures are undifferentiated and do not form tight junctions [20]. All culture reagents were purchased from Gibco-Invitrogen Corp. (Carlsbad, CA, USA) and Falcon (Becton Dickinson, Labware, NJ, USA) and biochemicals were from Sigma (St. Louis, MO, USA), unless otherwise specified. Virus cultures and titration Rhinovirus types 5, 7, 9, 14 and 16 (major subtypes) and 1b (minor subtype) were propagated in Ohio-HeLa cells in large quantities at 33°C, in a humified, 5% CO 2 incubator, as previously described [16]. Briefly, when full cytopathic effect (CPE) developed, cells and supernatants were harvested, pooled, frozen and thawed twice, clarified, sterile-filtered, aliquoted and stored at -70°C. Lysates of parallel Ohio-HeLa cell cultures, not infected with virus, were used as controls. In order to determine RV titers, Ohio-HeLa cells were seeded in 96-well plates reaching 60-70% confluence at the time of infection. Logarithmic dilutions of RVs were made in multiple wells and after five days of culture the plates were fixed and stained with Crystal Violet Buffer (5% formaldehyde, 5% ethanol and 0.1% crystal Violet in PBS). The end-point titer was defined as the highest dilution at which a CPE was detected in at least half of the wells and expressed as the inverse logarithm of this dilution (MOI-multiplicity of infection-infectious units/cell). For each experiment, a new vial was rapidly thawed and used immediately [21]. In order to assess the specificity of RV-mediated responses, RV preparations were exposed to 58°C for 1 h. The successful inactivation was confirmed by lack of RV replication in Ohio-HeLa cells. Cytotoxicity assay BEAS-2B cells were plated in 48-well plates in serial dilutions and allowed to grow for 48 hours, reaching confluence of 100%, 50%, 25% and 12.5%. Cell numbers and respective confluence were assessed by standard Neubauer cytometer in initial experiments. Cells were then exposed to rhinovirus as previously described [16]. Briefly wells were washed with HBSS and virus was added at the desirable MOI in parallel to non-infected Ohio HeLa cell lysate negative controls. The amount of the virus added was proportional to the number of the cells. 
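Before continuing with the assay, a brief note on the end-point titration described above: the "at least half of the wells show CPE" rule translates directly into a few lines of code. The sketch below is illustrative only; the plate layout and well counts are hypothetical, and only the rule itself comes from the text.

```python
# Illustrative sketch of the end-point titration rule described above.
# Input: for each log10 dilution, the number of wells showing CPE out of the wells plated.
# The plate numbers below are hypothetical; only the >= 50%-of-wells criterion is from the text.

def endpoint_titer(cpe_counts: dict[int, tuple[int, int]]) -> int | None:
    """cpe_counts maps -log10(dilution) -> (wells_with_cpe, wells_total).
    Returns the titer as the inverse logarithm of the end-point dilution,
    or None if no dilution reaches the 50% criterion."""
    qualifying = [neg_log10 for neg_log10, (cpe, total) in cpe_counts.items()
                  if total > 0 and cpe / total >= 0.5]
    return max(qualifying) if qualifying else None

if __name__ == "__main__":
    plate = {1: (8, 8), 2: (8, 8), 3: (7, 8), 4: (5, 8), 5: (2, 8), 6: (0, 8)}  # hypothetical
    print(endpoint_titer(plate))  # -> 4, i.e. an end-point dilution of 10^-4
```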
After 1 hour of gentle shaking at room temperature, fresh medium was added to a final volume of 0.5 ml. Eagle's MEM supplemented with 4% FCS, 1% MgCl and 4% tryptose phosphate broth, plus 40 µg/ml of gentamycin, was used for the experiments. After 48 hours of incubation, cells were washed twice in PBS and a volume of crystal violet staining buffer equal to one fifth of the original culture medium was added to the wells as an indicator of cell viability [22,23]. Cells were incubated for 30 min at room temperature, followed by extensive washing with distilled water. After air drying, 0.2 ml of a destain buffer (16.6% v/v glacial acetic acid, 50% v/v methanol in ultra-pure water) was added to the wells for 5 min. Cells were fully destained and the produced color was transferred to a clear 96-well ELISA plate; optical density was measured with a photometer at 595 nm (Ceres 900C, Bio-Tec Instruments, Inc, Winooski VT, USA) [13]. Cytotoxicity was estimated as a percentage of the negative control: (1 − OD RV-infected / OD HeLa) × 100.
[Figure 1 caption: Cytotoxicity of different RV serotypes (1b, 7, 14, 5, 16 and 9) at MOI = 1 on BEAS-2B cells cultured to different densities (100, 50, 25 and 12.5%). An inverse correlation between cell density and RV-induced cytotoxicity is observed for RV1b, RV7, RV14 and RV16 (*p < 0.05, n = 3-26, linear regression).]
Epithelial repair assay Confluent monolayers of BEAS-2B cells were grown in 48-well plates. Cells were then damaged mechanically by crossing three times with a 10-200 µL volume universal pipette tip (Corning, NY, USA) [1]. After washing twice with HBSS, cells were infected with RV1b at MOI 0.5, in parallel to non-damaged monolayers as well as HeLa lysate controls as described above, and incubated in a humidified 5% CO2 incubator. Immediately after infection (t = 0) and at 24, 48 and 72 hours, a plate was stained with crystal violet. Wells were photographed and the area of unpopulated cells was calculated with image analysis (Scion Image software, b4.0.2, NIH). Furthermore, cytotoxicity was estimated as described above. In some experiments, cells were fixed and stained with 4′,6-diamidino-2-phenylindole (DAPI), a DNA-binding dye. They were then viewed using a UV-visible Zeiss Axioplan 2 fluorescent microscope and fluorescence images were captured using a CCD camera. Proliferation Assay BEAS-2B cells were cultured in 25 cm2 flasks until confluent. After infection with RV1b at MOI 0.5 or control, cells were incubated for an additional 24 hours at 33°C. They were then washed twice with HBSS, detached using a non-enzymatic cell dissociation buffer (Gibco, UK), split 1:2 in new flasks and re-incubated. At that time, as well as 24, 48 and 72 hours later, proliferation was estimated by staining for Proliferating Cell Nuclear Antigen (PCNA), a proliferation marker that correlates with other markers of the S phase of the cell cycle such as tritiated thymidine and bromodeoxyuridine labeling [24]. PCNA was assessed by flow cytometry [25]. Flow Cytometry BEAS-2B cells were harvested non-enzymatically and resuspended at a density of 1 × 10^5 cells/100 µl in washing buffer (PBS with 1% FBS). For ICAM-1 analysis, cells were incubated with 20 µL of an anti-ICAM, phycoerythrin-conjugated monoclonal antibody (Pharmingen, Becton Dickinson, San Jose, CA, USA) for 30 min at 4°C.
After washing twice, cells were fixed with 0.5 ml of 1% paraformaldehyde in PBS and counted with a FACSort (Becton Dickinson, Jan Hose, CA, USA) flow cytometer. Fluorescence data were collected on 10 4 cells and histogram analysis was performed with the use of Cell Quest software™. For PCNA analysis, cells were permeabilized in a buffer comprising of 0.2 mg/ml Na 2 HPO 4 -2H 2 O, 1 mg/ml KH 2 PO 4 , 45% v/v acetone and 9.25% v/v formaldehyde [25], followed immediately by staining with 10 µL of an anti-PCNA, fluorescein-conjugated monoclonal antibody (Pharmingen, Becton Dickinson, Jan Hose, Ca, USA). Fluorescence data from 10 4 cells were collected and histogram analysis was performed with Cell Quest software. Statistical Analysis Data are expressed as mean ± standard error of mean. Statistical analysis was conducted with the SPSS 11.0 for Windows software. Linear regression analysis was used to evaluate the effect of cell density, and ANOVA for time and dose comparisons. Means were compared by nonparametric tests. P values less than 5% were considered significant. Rhinoviruses induce cytotoxicity in bronchial epithelial cells in a serotype and cell density-depended manner BEAS-2B cultures were infected with RV1b, RV5, RV7, RV9, RV14 and RV16 at an MOI of 1 and confluences of 12.5%, 25%, 50% and 100%. The extent of RV-induced cytotoxicity differed between RV serotypes: RV9 was not cytotoxic at all at this MOI. RV1b and RV7 were the most cytotoxic, able to induce cytotoxicity even on confluent monolayers, while killing over 65%-70% of less dense cultures. RV14 and RV5 were moderately cytotoxic while RV16 could kill only sparsely seeded cells. Differences in RV-induced cell death between RV serotypes were statistically significant at all cell densities (p = 0.00 in all cases, ANOVA). Furthermore, a statistically significant inverse correlation between cell density and RV-induced cytotoxicity was observed for RV1b, RV7, RV14 and RV16, (p = 0.000, 0.000, 0.014 and 0.03 respectively, linear regression); RV5 was moderately cytotoxic at all cell densities. Figure 1 shows the % cytotoxicity of each RV serotype at different cell densities. RV-induced cytotoxicity is specific To determine whether the observed cytotoxicity is specific to RV and not associated with factors in the virus preparation, we exposed a 50% confluent monolayer to 1 MOI of heat-inactivated RV 1b. Inactivated RV1b lost its capacity to induce cell death (6.43% ± 3.68 vs. 55.87% ± 2.68 of live virus, p = 0.021, Mann-Whitney) (Figure 3). RV infection delays epithelial wound repair To test whether infection with RV may affect the self-repair capacity of bronchial epithelial cell monolayers, digital photos were taken immediately after mechanical damage (t = 0) as well as 24, 48 and 72 hours later in infected and non-infected monolayers. (Figure 4). Furthermore, intact and wounded monolayers did not differ in susceptibility to RV-mediated cytotoxicity (17.13% ± 4.11 versus 17.64% ± 2.6 at 48 hours after infection for intact and wounded respectively), suggesting that epithelial wounding leaves unaffected the remaining cells of the monolayer in respect to RV-induced cytotoxicity. RV infection decreases epithelial cell proliferation The expression of PCNA, reflecting proliferative activity of epithelial cells, increased 24 hours after seeding, followed by a trend towards return to baseline at 48 and 72 h. However, PCNA expression (Mean Fluorescence Intensity, MFI) was significantly lower at all time points in RVinfected cells ( Figure 5). 
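The cytotoxicity estimate and the density-dependence test described above reduce to a few lines of analysis code. The sketch below is illustrative only: the OD readings and cytotoxicity values are hypothetical, and only the formula (percentage of the negative control) and the use of linear regression across plating densities come from the text.

```python
# Illustrative sketch of the analysis described above: crystal-violet cytotoxicity as a
# percentage of the negative control, and a linear regression of cytotoxicity on cell density.
# All numerical values are hypothetical.

from statistics import mean
from scipy.stats import linregress

def cytotoxicity_percent(od_infected: list[float], od_control: list[float]) -> float:
    """Cytotoxicity (%) = (1 - mean OD of infected wells / mean OD of control wells) * 100."""
    return (1.0 - mean(od_infected) / mean(od_control)) * 100.0

if __name__ == "__main__":
    # Hypothetical OD595 readings for one serotype at one cell density.
    print(f"{cytotoxicity_percent([0.42, 0.45, 0.40], [0.95, 1.01, 0.98]):.1f}% cytotoxicity")

    # Hypothetical cytotoxicity (%) of one serotype at the four plating densities,
    # tested for the inverse density dependence reported above.
    density = [12.5, 25.0, 50.0, 100.0]
    cytotox = [70.0, 55.0, 35.0, 10.0]
    fit = linregress(density, cytotox)
    print(f"slope = {fit.slope:.2f} % per % confluence, p = {fit.pvalue:.3g}")
```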
Cell viability, assessed by 7-AAD staining, was over 90% in these experiments.
[Figure 3 caption: Cytotoxicity of active and heat-inactivated RV1b (MOI = 1) on 50% confluent BEAS-2B cells; cytotoxicity expressed as % over control.]
Discussion In contrast to previous knowledge, but in line with recent observations, this study demonstrates that human rhinoviruses, the agents most frequently associated with acute asthma exacerbations [26], are able to become cytotoxic in an in-vitro model of human bronchial epithelium. A continuous cell line model was used for the majority of experiments; however, the finding was also confirmed in primary bronchial cells. Furthermore, it is shown for the first time that RV infection may delay epithelial wound healing by affecting epithelial cell proliferation. It has been generally accepted that RVs do not induce cytotoxicity in-vitro or in-vivo [27-29], even in heavy colds [10-12,30]. However, two recent studies designed to assess the ability of RV to infect primary human bronchial epithelial cells have unexpectedly observed RV-associated cytotoxicity: in the study of Schroth et al. [14], RV16 and RV49 were used and cytotoxicity was observed only with the latter serotype; the authors hypothesized that a higher viral binding and/or larger yield by RV49 may explain their observation, noting however the need for additional studies. This was also the case in the study of Papadopoulos et al. [13], in which RV cytotoxicity was observed when sparsely seeded cultures were exposed to the virus. The current study, which systematically addressed these possibilities, demonstrates that they are both reproducible, and in fact different RV serotypes differ in their cytotoxic capacity, which, in most cases, is nevertheless cell-density dependent. The latter finding can also explain why RV cytotoxicity was not observed in previous in-vitro studies, which were conducted with confluent cultures [28,29]. A recent comprehensive study by Deszcz et al. [31] supports our findings, as it shows that RV14 can induce high levels of cytotoxicity in the bronchial epithelial cell line 16HBE14o−. Furthermore, they demonstrate that a possible mechanism is the induction of apoptosis via the mitochondrial pathway, a phenomenon also shown in primary cells from asthmatic subjects [32]. It has been shown that differentiated bronchial epithelial cells grown at an air-liquid interface and developing tight junctions are considerably resistant to RV infection [20]. In this respect, the results of this study, using submerged cultures that lead to non-differentiated cells, may overestimate the in-vivo situation. However, a characteristic of asthma is the significant loss of columnar epithelial cells, leading to loss of epithelial integrity and density [4]; epithelial damage also correlates with the severity of the disease [33]. In this respect, sparsely seeded bronchial epithelial cell cultures can be considered an extreme, but relevant, model of asthmatic epithelium. Under such conditions, as shown herein, RV-associated cytotoxicity increases considerably, with almost linear density-dependence, suggesting that virus-induced exacerbations may have increased sequelae in more severe patients [34]. The fact that different RV serotypes are not equally capable of killing epithelial cells, ranging from no to extensive cytotoxicity, supports the possibility that this phenomenon may contribute to the variation in asthma exacerbation severity observed in clinical practice [35].
RV infects a small proportion of exposed cells [15]; biopsy data show that in human RV infection epithelial inflammation, potentially resulting from infection, is patchy [27,36]. There has been, however, no direct comparison between normal and asthmatic individuals in regard to RV-induced cytotoxicity in-vivo, a study complicated by the fact that epithelial integrity and viability are considerably affected in asthmatics at baseline. In a recent study, Wark et al. showed increased RV proliferation in primary epithelial cells obtained from asthmatic patients in comparison to normal controls [32]. We have also observed that exposure of BEAS-2B cells to culture supernatants modeling an 'atopic' environment was also able to increase RV proliferation and, at the same time, increase RV-induced cytotoxicity [37]. These observations further suggest that RV-induced cytotoxicity may be relevant in asthma exacerbation pathogenesis. There are several possibilities in respect to the mechanism(s) underlying this phenomenon, which have not, however, been addressed in this study. One possibility might be that rapidly dividing cells, as is the case in sparse cultures, may be more permissive to RV infection. Moreover, differential expression of soluble factors, such as interferons, may regulate either susceptibility to infection or the proliferative potential of RV. These hypotheses, which may well not be mutually exclusive and could all contribute to RV cytotoxicity, are currently under investigation. Independent of the causative mechanism(s), in an already affected epithelium, RV infection may lead to more profound damage. This would eventually lead to activation of repair mechanisms: deposition of extracellular matrix, proliferation and migration of epithelial cells in order to repopulate the damaged area, followed by cell differentiation [1,2]. Hence, we used a previously validated wound model [38,39] to investigate the role of RV infection in the repair process, describing for the first time an RV-mediated delay in epithelial wound healing, associated with reduced proliferation of RV-infected cells. This finding may be of significance, as altered restitution of airway structure is one of the hallmarks of asthmatic inflammation leading to airway remodeling [4]. Damaged asthmatic epithelium has been previously reported to have proliferation defects during the repair process [40]. Dysregulated proliferation in bronchial epithelial cells from asthmatic patients has been associated with increased expression of the cyclin-dependent kinase inhibitor p21waf [41,42]. In severe, corticosteroid-dependent asthma, markers of epithelial cell proliferation are co-expressed with markers of activation, suggesting that, in at least that case, the repair process is associated with a persistent activation state of the epithelial cells [41]. The above findings have led investigators to propose that a repair/activation imbalance may be the central mechanism of airway remodeling in asthma [5]. In this respect, RV-induced cytotoxicity, an event frequently occurring and able to activate epithelial cells into an inflammatory response [16], may be implicated in the development of remodeling. The possibility that a viral infection may reprogram epithelial responses towards a 'remodeled' phenotype has also been proposed, based on a mouse model of paramyxoviral infection [43].
[Figure 4 caption: Damaged epithelium (t = 0) is suboptimally repopulated after RV infection in comparison to control BEAS-2B cells.]
Conclusion
In conclusion, several human RV serotypes are able to become cytotoxic to human bronchial epithelial cells, especially when these are sparsely cultured; RVs are also able to delay epithelial wound healing. Previously unrecognised, RV-induced cytotoxicity may become important in the context of asthma, in which the epithelium is already affected, and consequently contribute to the induction and/or perpetuation of airway remodelling.
2017-06-21T05:44:00.662Z
2005-10-10T00:00:00.000
{ "year": 2005, "sha1": "639db467cc3409ac2f56902f6ba237b32ef2593d", "oa_license": "CCBY", "oa_url": "https://respiratory-research.biomedcentral.com/track/pdf/10.1186/1465-9921-6-114", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "639db467cc3409ac2f56902f6ba237b32ef2593d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119296694
pes2o/s2orc
v3-fos-license
Precision Kaon and Hadron Physics with KLOE, La Rivista del Nuovo Cimento We describe the KLOE detector at DAFNE, and its physics program. The impact of our results on flavor and hadron physics to date, as well as an outlook for further improvement in the near future, are discussed. -Introduction Late in 1989 the Istituto Italiano di Fisica Nucleare, INFN, decided to construct an e + e − collider meant to operate around 1020 MeV, the mass of the φ-meson. The φ-meson decays mostly to kaons, neutral and charged, in pairs. Its production cross section peaks at about 3 µbarns. Even with a modest luminosity, L=100 µb −1 /s, hundreds of kaon pairs are produced per second. This collider, called a φ-factory and christened DAΦNE, is located at the Laboratori Nazionali di Frascati, LNF, INFN's high energy physics laboratory near Rome [1]. Since the stored beam intensities decay rapidly in time, the cycle is repeated several times per hour. A collider is characterized by its luminosity L defined by: event rate=L×cross section, beam lifetime and beam induced radiation background. The latter is large for short lifetimes, which also result in the average luminosity being smaller than the peak value, in a word, lower event yield and large, variable background. DAΦNE is housed in the ADONE building at LNF, see fig. 2. DAΦNE runs mostly at center of mass energy W = m φ ≈ 1019. 45 MeV. The φ meson decays dominantly to charged kaon pairs (49%), neutral kaon pairs (34%), ρπ (15%), and ηγ (1.3%). A φ factory is thus a copious source of tagged and monochromatic kaons, both neutral and charged. In most of the following we will assume that all analysis are carried out in the φ-meson center of mass. While this is in fact what we do, it is appropriate to recall that the electron and positron collide at an angle of π-0.025 radians. The φ-mesons produced in the collisions therefore move in the laboratory system toward the center of the storage rings with a momentum of about 13 MeV corresponding to β φ ∼0.015, γ φ ∼1.0001. K mesons from φ-decay are therefore not monochromatic in the laboratory, fig. 3. All discussion to follow refers to a system of coordinates with the x-axis in the horizontal plane, toward the center of DAΦNE, the y axis vertical, pointing upwards and the z-axis bisecting the angle of the two beam lines. The momentum of neutral kaons varies between 104 and 116 MeV and is a single valued function of the angle between the kaon momentum in the laboratory and the φ-momentum, i.e. the x-axis. Knowledge of the kaon direction to a few degrees allows to return to the φ-meson center of mass, fig. 3. The mean charged kaon momentum is 127 MeV. 1 . 2. KLOE . -A state-of-the-art detector is needed to collect data at DAΦNE, from which physics results can be extracted. In summer 1991, the KLOE Collaboration was initiated proposing a detector and a program. The KLOE detector was not formally approved nor funded for another couple of years, because the authorities had not envisioned fully the magnitude of KLOE. By mid Summer 1998 however, KLOE was designed, constructed, completely tested and, complete with all of its electronics for signal processing, event gathering and transmission, was practicing with Cosmic Rays. Between Christmas and 1999 New Year, KLOE was moved from its own assembly hall onto the DAΦNE's South Interaction Region. The detailed KLOE saga, with its concomitant joys and travails, could be an entertaining tale that should be written up somewhere. 
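As a quick check of the "hundreds of kaon pairs per second" quoted above, the rate follows directly from the stated luminosity, cross section and branching fractions; the arithmetic below is only a back-of-the-envelope consistency check.

```latex
% Event rate = L x sigma, with the numbers quoted above:
\[
  R_\phi \;=\; \mathcal{L}\,\sigma_\phi \;\simeq\; 100\ \mu\mathrm{b}^{-1}\,\mathrm{s}^{-1} \times 3\ \mu\mathrm{b}
  \;=\; 300\ \phi/\mathrm{s},
\]
\[
  R_{K^+K^-} \simeq 0.49\, R_\phi \approx 150\ \mathrm{s}^{-1}, \qquad
  R_{K_S K_L} \simeq 0.34\, R_\phi \approx 100\ \mathrm{s}^{-1},
\]
% i.e. a few hundred kaon pairs per second even at this modest luminosity.
```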
Our purpose in this paper, is instead, chapter 2, to extract and describe those properties of the detector, chapter 3, its performance, and chapter 4, the special kaon "beams" available at DAΦNE, that made the physics measurements possible. In the subsequent seven years KLOE, after beginning with a trickle of beam, by 2006 had produced data of such precision that the kaon, the first fundamental particle to introduce the concept of "flavor" into our current way of thinking, got its definitive 21st century portrait re-mapped with high precision, 60 years after its discovery. These are the subjects covered in the next chapters: chapter 5 details the new measurements of kaon physical parameters and decays; chapter 6, discusses the KLOE determination of V us of the CKM mixing matrix , unitarity and lepton universality tests, as well as search for new physics; chapter 7 is on quantum interferometry , and chapter 8 is on tests of CPT. Concurrently, capitalizing on the profuse production of light mesons resulting from electromagnetic transitions from the φ-mesons KLOE has improved scores of light meson properties often by orders of magnitude and these are discussed in chapter 9. Besides neutral kaon decays, symmetry tests have also been performed using light meson and charged kaon decays, see sections 9 . 1.2 9 . 4.2, and 5 . 3. Finally, utilizing initial state radiation KLOE has measured the dipion cross section, σ ππ , over the whole range from the ππ threshold to the φ mass energy, see chapter 10. Most KLOE data were taken during two running periods. The 2001-2002 run yielded 450 pb −1 of data, corresponding to approximately 140 million tagged K S decays, 230 million tagged K L decays, and 340 million tagged K ± decays. Analysis of this data set is essentially complete and most of the results are described in this report. scale and complexity as the "general purpose detectors" of its time that were operating at the world's foremost laboratories such as LEP at CERN. If we consider that LEP operated at about one hundred times DAΦNE's energy, producing much more energetic particles and with correspondingly greater multiplicities, KLOE's large size would seem unwarranted. We shall see however that while KLOE's complexity was necessitated by its intent of being a definitive high precision experiment, its size, instead, was dictated by the quirk of one of the two neutral kaons, called the K L , of living an extraordinarily long time (for a short lived particles) , about 51 nanoseconds. Kaons from φ-mesons decaying at rest, travel at approximately one fifth of the speed of light. This low speed is a great bonus for many reasons. However the mean path travelled by a K L -meson, λ L = γβcτ is 3.4 m. Thus if we want a detector which can catch 1 − 1/e=63% of all decaying neutral long-lived kaons, it must have a radius of some three and a half meters. We compromise on two meters and catch some 40% of those decays. Less than that in fact, since some volume is inevitably lost around the beam interaction point, IP, and some more is lost at the outer edge, in order to be able to recognize the nature of the decays close to it. The major components of KLOE that enclose the decay volume around the IP are, going outwards radially, a large tracking device to reconstruct the trajectory of charged particles, the Drift Chamber DC, surrounded by a hermetic calorimeter to measure the energy and the entry point of photons, the Barrel EMC and two End-cap EMC. 
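The 3.4 m mean decay path and the roughly 40% acceptance quoted above follow from simple kinematics. A short check, using the kaon momentum and lifetime quoted earlier in the text and the known neutral-kaon mass; the exponential-acceptance formula is the standard one:

```latex
% K_L mean decay path and geometric acceptance (back-of-the-envelope check of the numbers above).
\[
  \beta\gamma = \frac{p_K}{m_K} \simeq \frac{110\ \mathrm{MeV}}{498\ \mathrm{MeV}} \simeq 0.22, \qquad
  \lambda_L = \beta\gamma\, c\,\tau_L \simeq 0.22 \times 15.3\ \mathrm{m} \simeq 3.4\ \mathrm{m},
\]
\[
  \text{fraction decaying within } R = 2\ \mathrm{m}: \quad 1 - e^{-R/\lambda_L} \simeq 1 - e^{-0.59} \simeq 0.44,
\]
% consistent with the "some 40%" quoted above once the fiducial cuts near the interaction point
% and near the outer wall are applied.
```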
The tracking chamber and the calorimeter are immersed in a mostly axial magnetic field of 0.52 T, provided by a superconducting coil enclosed in its iron yoke. 1. Drift Chamber . -In the design of the KLOE tracking chamber one is constrained by the desire to catch all the charged secondary products from a decay and measure their properties with great precision, without jeopardizing the need to achieve the same for the neutral secondary products. Recalling that when a charged particle travels in a medium, a gas, it undergoes repeated collisions with electrons, releasing part of their energy, resulting in a string of ion pairs created marking its passage. Thus when the particle enters a drift chamber, DC (a gas filled chamber traversed by a large number of wires, some kept at +2000 volt, anode wires, some at ground), electrons of the ion pairs created along its trajectory drift to the positive voltage wires and from an avalanche multiplication mechanism a detectable signal appears at the wire's end. KLOE custom designed sophisticated electronics that are mounted at the wire's end which senses the tiny signals and provide us with the drift charge and time measurement. Furthermore, the total integrated charge collected at the wire's end gives information on the energy released from the initial particle. Furthermore, particles of different mass travelling in the same medium and with the same momenta have different energy releases: for the same p they have different β and approximately the specific ionization dE/dx ∝ 1/β 2 . Therefore this information is suitable to perform particle identification (PID). Finally, measuring the drift time locates the distance of the track from the anode wires which is then used to reconstruct the trajectory of the particles. A caveat, the actual path of the particle is altered by multiple scattering, the multiple scattering angle being ∝ 1/ √ X 0 , where X 0 is the radiation length which is roughly proportional to 1/Z 2 , in gr/cm 2 (and one must not forget the density). Finally, while moving towards the anode, the drifting ionization undergoes diffusion, the amount of which is a function of the density of the gas mixture of the DC. After judicious balancing, in order to avoid the K L →K S regenerations (K S generated from the strong interaction of a K L with the traversed medium that might simulate CPviolating decays), a helium based gas mixture was chosen to minimize multiple scattering and density, despite the fact diffusion of drifting ionization is larger in He than certain other gases. The average value of the radiation length in the DC volume is X 0 ∼ 900 m, including the wire's contribution. Again, for the mechanical structure low-Z and low density materials were used to minimize K L regeneration, tracks' multiple coulomb scattering and the absorption of photons before they can reach the calorimeter. To achieve this the chamber is constructed out of a carbon fiber composite. The chamber volume is delimited by an outer cylinder of 2 m radius, an inner cylinder of 25 cm radius and closed by two 9 mm thick annular endplates 9.76 m radius of curvature and a 25 cm radius hole in the center. Due to their large size, the endplates were fabricated in two halves then glued together. About 52,000 holes were drilled on each endplate for the insertion of the feedthroughs holding the DC wires. The total axial load on the end plates, about 3500 kg, would deform the end plates by several mm at the end of stringing. 
The deformation problem was solved by using rings of rectangular cross-section which surround the L-shaped plate rims, made of the same material. The ring is used to apply a tangential pull to the spherical plate via 48 tensioning screws made of titanium alloy pulling on tapped aluminum inserts in the L-shaped flange. The net result of the pull is to approximate a complete sphere beyond the outer edge of the plate which greatly reduces the deformation. Each screw was instrumented with two strain gauges, calibrated in order to compensate for bending effects on the screws, so that the pulling force was always known. At the end of stringing the deviation from the spherical shape of the endplates was measured to be less than 1 mm and the hole positions to be accurate at the level of 60 µm. The KLOE DC is indeed a tour de force achievement thereafter copied in other experiments (albeit never to the same large dimensions). The KLOE DC [2] has ∼52,000 wires, about 12,500 of them are sense wires, the remaining ones shape the electric field in 12,500 almost square cells located on cylindrical layers around the z-axis (defined by the bisector of the two crossing beams). Particles from the φ decays are produced with small momenta and therefore track density is much higher at small radii. This motivated the choice of having the 12 innermost layers made up of 2 × 2 cm 2 cells and the remaining 48 layers with 3 × 3 cm 2 cells. For a particle crossing the entire chamber we get dozens of space points from which we reconstruct a track: the track of a pion from K S → π + π − decay has ∼60 space points in average. An all stereo geometry allows the measurement of the z coordinate. Fig. 6 shows the wire geometry during the DC construction as illuminated by light. In this particular arrangement, shown in fig. 7 left, the wires form an angle with the z-axis, the stereo angle ǫ. Consecutive layers have alternating signs of this angle and its absolute value increases from 60 to 150 mrad with the layer radius while keeping fixed for all the layers the difference between R p and R 0 distances (1.5 cm), defined in fig. 7 left. This design results in a uniform filling of the sensitive volume with almost-square drift cells, with shape slowly changing along z, which increases the isotropy of the tracking efficiency. To extract the space position from the measured drift time of the incident particle, we need to know the space-time relation describing the cell response. The whole DC can be described using 232 space-time relations only, parametrized in terms of the angles β andφ defined in fig. 7 right. The β angle characterizes the geometry of the cell, directly related to the electric field responsible of the avalanche multiplication mechanism. Theφ angle gives the orientation of the particle trajectory in the cell's reference frame, defined in the transverse plane and with origin in the sense wire of the cell. The presence of a signal on a wire and its drift time and distance is called hit. Once we have the hits, we can proceed with the event reconstruction using the wire geometry, space-time relations and the map of the magnetic field. This procedure follows three steps: (i) pattern recognition, (ii) track fit, and (iii) vertex fit. The pattern recognition associates hits close in space to form track candidates and gives a first estimate of the track parameters. 
Then track fit provides the final values of these parameters minimizing a χ 2 function based on the difference between the fit and the expected drift distances, residuals, as evaluated from a realistic knowledge of the cells' response (measured drift times and space-time relation). This is an iterative procedure because the cells' response depend on the track parameters. Finally the vertex fit searches for possible primary and secondary vertices, on the basis of the distance of closest approach between tracks. Besides the drift time and position information, hits in the DC are also associated to a charge measurement directly related to the energy released in the cell from the incident particle. The amount of collected charge depends on the particle's track length in the crossed cell. The energy deposited in a sample of finite thickness fluctuates and exhibits a Landau distribution, with a long tail at high energies. Therefore to improve the energy resolution, the energy deposited is sampled many times for each track and a truncated mean technique is used. To use the charge information for particle identification purposes it is necessary to normalize the charge to the track length and define the specific ionization dE/dx. If the energy is measured in counts, we have on average 15 counts/cm for pions and muons from K ± µ2 and K ± π2 decays, with momentum within 180 and 300 MeV. Instead K ± tracks exhibit a larger number, ∼120 counts/cm. The drift chamber provides tracking in three dimensions with resolution in the bending plane better than 200 µm, resolution on the z measurement of ∼2 mm and of ∼1 mm on the decay vertex position.The particle's momentum is determined from the curvature of its trajectory in the magnetic field with a fractional accuracy better than ∼0.4% for polar angles larger than 45 • . The calibration of the absolute momentum scale has a fractional accuracy of few 10 −4 using several two-and three-body decays (e + e − → e + e − ,e + e − → µ + µ − , K L → πlν, K L → π + π − π 0 , K ± → µ ± ν,K ± → π ± π 0 ) covering a wide momentum range. Deviations from nominal values of invariant mass, missing mass or momentum of the charged decay particle in the rest frame of the decaying particle are used as benchmarks for the calibration procedure. The resolution on the specific ionization measurement obtained using K ± two-and three-body decays and Bhabha scattering events, is ∼ 6% for pions and ∼ 7% for electrons [3], using 60 truncated samples. The DC system's complexity, as well as the demands placed upon it requires the response of each cell to an incoming particle to be known accurately. The cell response is a function of its geometry and of the voltage applied to the wires. Thus the position of the DC, mechanical structure as well as wires, had to be surveyed, in situ, together with the other KLOE components. The Slow Control monitoring system (sect. 2 . 6) fully embedded in the Data Acquisition process (sect. 2 . 5), sets and registers the voltages applied to the wires, sends the information to the read-out electronics and controls its proper functioning [4]. Moreover the gas mixture parameters, namely composition, temperature and pressure, enter directly in the space-time relations used in reconstructing the particle trajectory, to extract the space position from the measured drift time of the incident particle. The Slow Control System also monitors the gas parameters. 
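The truncated-mean technique mentioned above suppresses the Landau tail by discarding the highest charge samples before averaging. The sketch below is only illustrative: the fraction of samples discarded is an assumption (it is not stated in the text), and the sample values are hypothetical counts/cm chosen to be pion-like.

```python
# Illustrative truncated-mean estimator for dE/dx, as described above: per-cell charge samples
# along a track follow a Landau-like distribution with a long high-side tail, so the highest
# samples are discarded before averaging. The 30% truncation fraction is an assumption for
# illustration only; the sample values are hypothetical.

def truncated_mean(samples: list[float], keep_fraction: float = 0.7) -> float:
    """Average of the lowest keep_fraction of the samples (tail-suppressed dE/dx estimate)."""
    if not samples:
        raise ValueError("no dE/dx samples on this track")
    ordered = sorted(samples)
    n_keep = max(1, int(round(keep_fraction * len(ordered))))
    kept = ordered[:n_keep]
    return sum(kept) / len(kept)

if __name__ == "__main__":
    # Hypothetical per-cell samples (counts/cm) for a pion-like track, with a few Landau outliers.
    pion_track = [14, 15, 13, 16, 15, 14, 17, 15, 14, 45, 13, 16, 15, 60, 14]
    print(f"truncated mean dE/dx ~ {truncated_mean(pion_track):.1f} counts/cm")
```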
To ensure the stability in time of the DC performance, the system has to be calibrated periodically by acquiring samples of cosmic-ray muons suitable for the measurement of the ∼200 different space-time relations. The calibration program, incorporated into the KLOE online system, automatically starts at the beginning of each run and selects about 80,000 cosmic-ray events. These events are tracked using the existing space-time relations and the average value of the residuals for hits in the central part of the cells is monitored. If the residuals exceed 40 µm then about 300,000 cosmic-ray events are collected, with about 30 Hz rate, and a new set of calibration constants is obtained. At the beginning of each data-taking period a complete calibration of the space-time relations is needed together with the measurement of the DC time offsets, the latter from some 10 7 cosmicray events. Finally, during data taking the DC performances are monitored using selected samples of Bhabha scattering events. 2. Electromagnetic Calorimeter . -The neutral decay products of the K L 's, its shortlived partner K S (lifetime ∼ 89 picoseconds) and of those of their charged brethren, the K ± 's, which we need to detect, are photons, either directly produced or from the decay of neutral pions into two photons. It is important to recall that photons for energies larger than a few MeV, as in most of our cases, interact with matter by electron positron pair production that in turn radiate photons when traversing matter, a process quite similar to pair-production. Thus, a photon, or an electron for that matter, travelling through dense, high Z matter, repeatedly undergoes radiation and pair production in cascade, until all its energy is expended in a so called shower of e + e − and photons. From detection of these particles and their energy one measures the original photon's (or electron's) energy. This is done with a "calorimeter", which also usually pinpoints the position of the electromagnetic cascade (or shower) as well. The KLOE calorimeter, EMC, has been designed and built to satisfy a set of stringent requirements such as providing a hermetic detection of low energy photons (from 20 to 500 MeV) with high efficiency, a reasonable energy resolution and an excellent time resolution to reconstruct the vertex for the K L neutral decays. The calorimeter response has also to be fast since its signals are used to provide the main trigger of the events. The chosen solution for the EMC is that of a sampling calorimeter, composed of lead passive layers, accelerating the showering process, and of scintillating fiber sensing layers. It is made of cladded 1 mm scintillating fibers sandwiched between 0.5-mmthick lead foils. The foils are imprinted with grooves wide enough to accommodate the fibers and some epoxy, without compressing the fibers. This precaution prevents damage to the fiber-cladding interface. The epoxy around the fibers also provides structural strength and removes light travelling in the cladding. In more detail, the basic structure of 0.5 mm, grooved Pb layers between which are embedded 1 mm diameter scintillating fibers are positioned at the corners of almost equilateral triangles. ∼200 such layers are stacked, glued, and pressed, resulting in a bulk material. The resulting composite has a fibers:lead:glue volume ratio of 48:42:10 which corresponds to a density of 5 g/cm 3 , an equivalent radiation length X 0 of 1.5 cm and an electromagnetic sampling fraction of ∼13%. 
The large ratio of active material to radiator, the frequent signal sampling and the special fiber arrangement result in a factor √ 2 improvement in energy resolution with respect to calorimeter with slabs of equivalent scintillator:lead ratio. The chosen fibers, Kuraray SCSF-81 and Pol.Hi.Tech. 00046, are blue emitting scintillating fibers with a fast emission time whose larger component is described by an exponential distribution with a τ of 2.5 ns. In addition, the special care in design and assembly of the Pb-fiber composite ensures that the light propagates along the fiber in a single mode, resulting in a greatly reduced spread of the light arrival time at the fiber ends. Propagation speed in the fiber is ∼ 17 cm/ns. These fibers produce more than 5×10 3 photons for one MeV of deposited energy by ionizing particles. 3% of the light reaches the fiber ends. Fibers transmit 50% of the light over 2 m (λ ∼ 4m). This material is shaped into modules 23 cm thick (∼15X 0 ). 24 modules of trapezoidal cross section are arranged in azimuth to form the calorimeter barrel, aligned with the beams and surrounding the DC. An additional 32 modules, square or rectangular in cross section, are wrapped around each of the pole pieces of the magnet yoke to form the endcaps, which hermetically close the calorimeter up to ∼98% of 4π, see fig. 8. The unobstructed solid-angle coverage of the calorimeter as viewed from the origin is ∼94% [5]. The fibers run parallel to the axis of the detector in the barrel, vertically in the endcaps, and are read out at both ends viewed by lucite light guides of area of 4.4×4.4 cm 2 . The other end of the guides are coupled to the photocathodes of fine mesh photomultipliers of 1.5 inches diameter with quantum efficiency of ∼20%. The fiber length of 4 m corresponds to an average photoelectron yield of 35 p.e./layer for a minimum ionizing particle crossing the calorimeter at the fiber center. The photomultiplier tubes at each fiber's end transform the light into electric pulses. Their amplitudes, A i , is proportional to the amount of energy deposited while the recorded times, t i , are related to the time of flight of the particle. For each cell the position of the readout elements and the difference of the arrival times at the two fiber ends determine the shower position with a O(1) cm accuracy. The full stack of 23 cm (15 X 0 ) fully contains a shower of 500 MeV. 4880 photomultiplier tubes view the 24+32+32 modules into which the calorimeter is subdivided for a grand total of 15,000 km used optical fibers. As a first step, the reconstruction program makes the average of time and energy of the recorded t i and A i for the two sides of each cell and compute the hit position. Corrections for attenuation length, energy scale, time offsets and light propagation speed are taken into account at this stage. A clustering procedure then groups together nearby clumps of energy deposition and calculates the average quantities over all the participating cells. The calibration constants to transform t i and A i from raw quantities to time in nanoseconds and energy in MeV are evaluated with dedicated on-line and off-line algorithms. The energy calibration starts by a first equalization in cell response to minimum ionizing particles (m.i.p) at calorimeter center and by determining the attenuation length of each single cell with a dedicated cosmic ray trigger. This is done before the start of each long data taking period. 
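The quantities determined in this cell-by-cell calibration (equalization constants and attenuation length) enter the cell-level reconstruction from the two-ended readout. The sketch below shows one way to combine the two ends: the longitudinal coordinate from the time difference, the cell time from the time average, and the energy from the attenuation-corrected amplitude average. The propagation speed, fiber length and attenuation length are the values quoted above; the function and the simple averaging scheme are illustrative assumptions, not the KLOE reconstruction code.

import math

V_FIBER_CM_PER_NS = 17.0     # light propagation speed in the fiber
ATT_LEN_CM        = 400.0    # attenuation length, lambda ~ 4 m
HALF_LEN_CM       = 200.0    # half of the ~4 m barrel fiber length

def cell_hit(t_a_ns, t_b_ns, amp_a_mev, amp_b_mev):
    # end A is taken at z = -L/2, end B at z = +L/2
    # longitudinal coordinate from the arrival-time difference at the two ends
    z_cm = 0.5 * V_FIBER_CM_PER_NS * (t_a_ns - t_b_ns)
    # cell time: average of the two ends minus the constant propagation term
    t_ns = 0.5 * (t_a_ns + t_b_ns) - HALF_LEN_CM / V_FIBER_CM_PER_NS
    # correct each amplitude for the attenuation over its own path, then average
    e_a = amp_a_mev * math.exp((z_cm + HALF_LEN_CM) / ATT_LEN_CM)
    e_b = amp_b_mev * math.exp((HALF_LEN_CM - z_cm) / ATT_LEN_CM)
    return z_cm, t_ns, 0.5 * (e_a + e_b)

# Toy usage: a deposit ~17 cm from the fiber center, seen at both ends
print(cell_hit(t_a_ns=14.0, t_b_ns=12.0, amp_a_mev=40.0, amp_b_mev=48.0))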
The determination of the absolute energy scale in MeV relies instead on a monochromatic source of 510 MeV photons: the e+e− → γγ sample. This last calibration is routinely carried out every 200-400 nb−1 of collected luminosity. For the timing, the relative time offsets of each channel, T0i, related to cable lengths and electronic delays, and the light velocity in the fibers are evaluated every few days with high-momentum cosmic rays selected with DC information. An iterative procedure uses the extrapolation of the tracks into the calorimeter to minimize the residuals between the expected and measured times in each cell. A precision of a few tens of picoseconds is obtained for these offsets. To run at the design luminosity, DAΦNE operated with a bunch-crossing period equal to the machine radio frequency (RF) period, TRF = 2.715 ns. Due to the spread of the particles' arrival times, the trigger is not able to identify the bunch crossing related to each event, which has to be determined offline. The start signal for the electronics devoted to time measurement (TDC) is obtained by phase-locking the trigger level-1 signal (sect. 2.4) to an RF replica, with a clock that has a period of 4 × TRF. The calorimeter times are then related to the time of flight of the particle from the IP to the calorimeter, Ttof, by Tcl = Ttof + δC − Nbc TRF, where δC is a single number accounting for the overall electronic offset and cable delay, and Nbc is the number of bunch crossings needed to generate the TDC start. The values of δC and TRF are determined for each data-taking run with e+e− → γγ events by looking at the distribution of Tcl − Rcl/c, where Rcl/c is the expected time of flight: well-separated peaks correspond to different values of Nbc. We arbitrarily define δC as the position of the peak with the largest statistics, and determine TRF from the peaks' distances. Both quantities are evaluated with a precision better than 4 ps for 200 nb−1 of integrated luminosity. This measurement of TRF allows us to set the absolute calorimeter time scale to better than 0.1%. During offline processing, to allow the cluster times to be related to the particle time of flight, we subtract δC and determine, on an event-by-event basis, the "global" event start time T0, i.e. the quantity Nbc TRF. A starting value for all analyses is evaluated by assuming that the earliest cluster in the event is due to a photon coming from the interaction point. Further corrections are analysis dependent. The high photon yield and the frequent sampling enable cluster energies to be measured with a resolution of σE/E = 5.7%/√E(GeV), as determined with the help of the DC using radiative Bhabha events. The absolute time resolution σt = 57 ps/√E(GeV) is dominated by photoelectron statistics, which is well parametrized by the energy scaling law. A constant term of 140 ps has to be added in quadrature, as determined from e+e− → γγ, radiative φ decays and φ → π+π−π0 data control samples. This constant term is shared between a channel-by-channel uncorrelated term and a term common to all channels. The uncorrelated term is mostly due to the calorimeter calibration, while the common term is related to the uncertainty on the event T0, arising from the DAΦNE bunch length and from the jitter in the trigger phase-locking to the machine RF.
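The assignment of the "global" event start time described above can be sketched as follows. The earliest calorimeter cluster is assumed to be a prompt photon from the IP, and the number of bunch crossings Nbc is chosen so that its corrected time matches the expected time of flight Rcl/c. The function names and the value of δC used in the toy example are illustrative; TRF is the DAΦNE RF period quoted in the text.

T_RF_NS = 2.715
C_CM_PER_NS = 29.9792458

def event_t0(clusters, delta_c_ns):
    """clusters: list of (raw_TDC_time_ns, distance_from_IP_cm).  Returns N_bc * T_RF."""
    t_raw, r_cl = min(clusters)                      # earliest cluster
    n_bc = round((t_raw - delta_c_ns - r_cl / C_CM_PER_NS) / T_RF_NS)
    return n_bc * T_RF_NS

def time_of_flight(clusters, delta_c_ns):
    t0 = event_t0(clusters, delta_c_ns)
    return [t - delta_c_ns - t0 for t, _ in clusters]

# Toy usage: two photon clusters ~1.7 m from the IP, TDC started three crossings late
clusters = [(33.75, 168.0), (33.93, 172.0)]
print(time_of_flight(clusters, delta_c_ns=20.0))     # both close to r_cl/c ~ 5.6-5.8 ns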
By measuring the average and the difference of T cl − R cl /c for the two photons in φ → π + π − π 0 events, we estimate a similar contribution of ∼ 100 ps for the two terms. Cluster positions are measured with resolutions of 1.3 cm in the coordinate transverse to the fibers, and, by timing, of 1.2 cm/ E (GeV) in the longitudinal coordinate. These performances enable the 2γ vertex in K L → π + π − π 0 decays to be localized with σ ≈ 2 cm along the K L line of flight, as reconstructed from the tagging K S decay. Incidentally, the thin lead layers used and the high photon yield allows high reconstruction efficiency for low energy photons. 3. Beam Pipe and Quadrupole Calorimeter . -At DAΦNE the mean K S , K L and K ± decay path lengths are λ S = 0.59 cm, λ L = 3.4 m and λ ± = 95 cm. To observe rare K S decays and K L K S interference with no background from K S → K L regeneration, a decay volume about the interaction point with r > 15λ S must remain in vacuum. The beam pipe, see fig. 9, surrounds the interaction point, with a sphere of 20 cm inner diameter, with walls 0.5 mm thick made of a Be-Al sintered compound. This sphere provides a vacuum path ∼ 10× the K 0 amplitude decay length, effectively avoiding all K S → K L regeneration. Permanent quadrupoles for beam focusing are inside the apparatus at a distance of 46 cm from IP, and surrounding the beam pipe. The quadrupoles of this low-β insertion are equipped with two lead/scintillating-tile calorimeters, QCAL, of ∼ 5X 0 thickness, with the purpose of detecting photons that would be otherwise absorbed on the quadrupoles [6]. Specifically, the QCAL main task is to identify and reject photons from K L → 3π 0 decays when selecting the CP violating K L → 2π 0 events. Each calorimeter consists of a sampling structure of lead and scintillator tiles arranged in 16 azimuthal sectors. The readout is performed by plastic optical fibers, Kuraray Y11, which shift light from blue to green (wavelength shifter, WLS), coupled to mesh photomultipliers. The special arrangement of WLS fibers allows also the measurement of the longitudinal z coordinate by time differences. Although the tiles are assembled in a way which maximizes efficiency for K L photons, a high efficiency is in fact also obtained for photons coming from the IP. This allows us to extend the EMC coverage of the solid angle down to cos θ=0.99 for prompt decays. The time resolution is related to the light yield and emission time/photoelectron of the Kuraray Y11 fibers. For a m.i.p. crossing the whole stack of 16 tiles, we had obtained during tests a light yield of ∼ 50 photoelectrons for entry points close to the PM's side. Considering that a m.i.p. deposits an energy equivalent to that of a 75 MeV photon, this corresponds to ∼ 0.5 p.e./MeV and in turn implies a time resolution for electromagnetic showers of 205 ps/ E(GeV). However during running, the presence of the B-field reduces the effective light yield by a factor 1.4, thus worsening the timing to 240 ps/ E(GeV), which however is high enough to yield an efficiency above 90% for photon energies greater than 20 MeV. 4. Trigger . -Event rates at DAΦNE are high; at the maximum luminosity of 10 32 cm −2 s −1 up to 300φ s −1 and 30, 000 Bhabha s −1 are produced within the KLOE acceptance window. The trigger design [7] has been optimized to retain all φ decays. 
Moreover, all Bhabha and γγ events produced at large polar angles are accepted for detector monitoring and calibration, as well as a downscaled fraction of cosmic-ray particles, which cross the detector at a rate of ∼ 3000 Hz. Finally, the trigger provides efficient rejection on the two main sources of background: small angle Bhabha events, and particle lost from the DAΦNE beams, resulting in very high photon and electron fluxes in the interaction region. The trigger electronic logic system rapidly recognizes topologies and energy releases of interest, processing the signals coming both from EMC and DC. Since φ decay events have a relatively high multiplicity, they can be efficiently selected by the EMC trigger by requiring two isolated energy deposits (trigger sectors) in the calorimeter above a threshold of 50 MeV in the barrel and 150 MeV in the endcaps. Events with only two fired sectors in the same endcap are rejected, because this topology is dominated by machine background. Moreover, events with charged particles in the final state give a larger number of hit wires in the DC than do background events. The DC trigger employs this information, requiring the presence of ∼15 hits in the DC within a time window of 250 ns from beam crossing. An event satisfying at least one of the two above conditions generates a first level trigger, T1, which is produced with minimal delay and is synchronized with the DAΦNE master clock. The T1 signal initiates conversion in the front-end electronics modules, which are subsequently read out following a fixed time interval of ∼ 2µs, which is ultimately driven by the typical drift distances travelled by electrons in the DC cells. In case of DC trigger, a validation of the first level decision is required by asking for ∼120 hits within a 1.2-µs time window. The KLOE trigger also implements logic to flag cosmic-ray events, which are recognized by the presence of two energy deposits above 30 MeV in the outermost calorimeter layers. For most data taking, such events were rejected after partial reconstruction by an online software filter. 2 . 5. Data acquisition and processing. -At a luminosity of ∼100 µb −1 /s, the trigger rate is ∼2000 Hz. Of these ∼500 Hz are from φ decays or Bhabha scattering. All digitized signals from the calorimeter, drift chamber, calibration and monitoring systems are fed via fiber optics from the electronics, sitting on platforms mounted on the KLOE iron yoke, to the computers in the next building. The Data Acquisition architecture has been designed to sustain a throughput of 50 Mbytes/s all along this path. In the near-by control room one can see real time displays of reconstructed charged tracks traversing the DC and photons depositing energy clusters in the EMC. The display of two events is shown in fig. 10. The events are reconstructed, meaning that charged tracks and energy deposits that occurred at the same instant in time on the various subcomponents of KLOE are associated together, properly labelled and packed into a unique file. To optimize the event reconstruction we use continuously updated calibration constants for the time and energy scales. Furthermore, background hits are acquired using a random trigger, which are recorded in order to be used in the simulation of KLOE events (MC). The MC simulation program reproduces accurately the detector geometry, the response of the apparatus and the trigger requirements , but of course cannot foretell the moment to moment variation of the background. 
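Since these background conditions cannot be predicted, the hits recorded with the random trigger are overlaid on the simulated events so that the simulation follows the actual background level of the corresponding data-taking period. A minimal sketch of such an overlay is shown below; the data structures are entirely illustrative and do not reflect the KLOE MC machinery.

import random

def overlay_background(mc_event, background_library, rng=random):
    """mc_event: dict with 'dc_hits' and 'emc_hits' lists from the simulation.
    background_library: list of recorded random-trigger events (same structure).
    Returns a new event with the hits of one sampled background trigger added."""
    bkg = rng.choice(background_library)
    return {
        "dc_hits":  mc_event["dc_hits"]  + bkg["dc_hits"],
        "emc_hits": mc_event["emc_hits"] + bkg["emc_hits"],
    }

# Toy usage
mc  = {"dc_hits": [("wire_120", 340.0)], "emc_hits": [("cell_7", 55.0, 12.3)]}
lib = [{"dc_hits": [("wire_881", 95.0)], "emc_hits": []},
       {"dc_hits": [], "emc_hits": [("cell_42", 8.0, 3.1)]}]
print(overlay_background(mc, lib))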
All raw data, background, reconstructed and MC events are recorded on tape. KLOE has a huge tape library of ∼800 TB capacity [4]. At reconstruction time, events are classified into broad categories, or "streams". Data summary tapes are produced with a separate procedure. Any of the KLOE analyses, even after event reconstruction is quite complex. It includes making data summary tapes and extensive modelling of the detector response. This requires generation of large number of Monte Carlo (MC) events for each process under study. "Offline" processes run in a farm of computers of ∼ 150 kSPECint2K CPU power, all residing in the dedicated computer center. Both data and MC summaries (∼80 TB) are cached on disk for analysis [8]. 2 . 6. Slow control system. -KLOE has to be operated 24 hours a day, 7 days a week. Therefore it has to be kept under continuous control, both to guarantee efficient data taking, and for safety reasons. Parameters such as high and low voltage settings, the temperature of the electronics components, or the status of the drift chamber gas system are set and monitored by a dedicated system fully embedded in the Data Acquisition process: the Slow Control. This system also records some machine parameters, and provides to the DAΦNE team information about KLOE including our on line measurements of the luminosity and of the background levels. -Reconstruction Performances KLOE provides a continuous monitoring of the machine working point, providing feedback to DAΦNE continuously as well. The most important parameters are the beam energies and crossing angle, which are obtained from the analysis of Bhabha scattering events with e ± polar angles above 45 degrees. The average value of the center-of-mass energy is evaluated online during data taking with a precision of ∼ 50 keV for each 200 nb −1 of integrated luminosity. This determination is further refined with offline analysis to achieve a precision of ∼ 20 keV, as discussed later. The position of the e + e − primary vertex, with coordinates X P V , Y P V , and Z P V , is reconstructed run-by-run from the same sample of Bhabha events. X P V and Y P V are determined with typical accuracy of about 10 µm, and have widths L(X) and L(Y ) which are about 1 mm and few tens of microns, respectively. Z P V is also reconstructed online with 100 − 200 µm accuracy, but it has a natural width of 12 − 14 mm, determined by the bunch length itself. 1. Overview of EMC and DC performances. -We note that while KLOE is a general purpose apparatus with good performances of the two main detectors, the DC and the EMC, its real strength manifests when joining the reconstruction capability of both detectors. To give an overall view at a glance, we first summarize in table I the parametrized resolution for neutral particles (mainly photons) and for charged tracks (mainly electrons/pions) for prompt decays and K L ,K ± . Resolution on invariant masses are also reported in the same table as well as the reconstruction capability for neutral and charged vertices. In KLOE, the single particle reconstruction efficiency can be determined from data control samples, as well as from Monte Carlo simulation. As an example, the photon reconstruction efficiency as a function of the photon energy can be estimated from φ → π + π − π 0 events, where a redundant determination of the decay photon kinematics is possible using the charged pion momenta as measured from the DC. 
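Schematically, such a tag-and-probe estimate predicts the photon kinematics from the charged pions and then asks whether a matching cluster was actually reconstructed. The sketch below illustrates the counting; the data structures and the matching windows are assumptions made for the example, not the KLOE selection criteria.

import math

def sub4(p, q):
    return tuple(a - b for a, b in zip(p, q))

def predict_probe_photon(p_phi, p_pip, p_pim, p_tag_photon):
    """Four-vectors are (px, py, pz, E) in MeV.  Returns the predicted probe photon."""
    p_pi0 = sub4(sub4(p_phi, p_pip), p_pim)          # pi0 from kinematic closure
    return sub4(p_pi0, p_tag_photon)                 # second photon by closure on the pi0

def has_matching_cluster(probe, clusters, de_mev=40.0, max_angle_rad=0.1):
    px, py, pz, e = probe
    p_mod = math.sqrt(px*px + py*py + pz*pz)
    for cx, cy, cz, ce in clusters:                  # unit direction and energy of each cluster
        cosang = (px*cx + py*cy + pz*cz) / p_mod
        if abs(ce - e) < de_mev and math.acos(max(-1.0, min(1.0, cosang))) < max_angle_rad:
            return True
    return False

# The efficiency in a bin of predicted photon energy is then simply
#   eff(E) = (number of probes with a matching cluster) / (number of probes).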
For charged particles produced at the interaction point, the track efficiency can be measured from K S → π + π − events selected with the requirement of having at least one track fulfilling the twobody decay kinematics. A high reconstruction efficiency is observed for both detectors on data and Monte Carlo. For each specific analysis, the event selection efficiency is evaluated from Monte Carlo. To take into account data-MC difference in the cluster/track efficiencies, the calculation is usually performed by weighting each particle in the final state with the ratio of data/MC reconstruction efficiencies. For particle identification, PID, we start extrapolating tracks on the calorimeter surface and associating tracks to clusters. The identification of photons is based on clusters not associated to any track and on the difference between expected and reconstructed time of flight, ToF, for straight trajectories. The PID of charged tracks, which at the considered energy are electrons, pions and muons, is based on a combined usage of EMC and DC, by looking at ToF, E/p and cluster shape. At low momenta, P < 200 MeV, the ToF is usually the winning handle while at higher momenta variables based on cluster shape are used. Often these information are combined together in a likelihood identification variable [9] or using a neural network [10]. To identify charged kaons we can use instead dE/dx, given the excellent separation provided by the specific ionization measurement between kaons and the other charged particles produced in the apparatus. 2. Absolute energy scale. -An improved determination of the center-of-mass energy, W , is obtained for each run by fitting the e + e − invariant-mass distribution for Bhabha events to a Monte Carlo generated function, which includes radiative effects. Initial state radiation (ISR), where one or both initial colliding particles radiate a photon before interacting, affects the e + e − center-of-mass energy and therefore the final state invariant mass. The ISR is mostly collinear to the beam, and in general is not detected. MC Bhabha events were generated using the Babayaga [11] event generator, which accounts for both final and initial state radiation. The absolute energy scale is calibrated by measuring the visible cross section for the φ → K S K L process. The cross section peak is fitted to a theoretical function [12], which depends on the φ parameters, takes into account the effect of ISR, and includes the interference with the ρ(770) and the ω(782) mesons. The φ mass, total width, and peak cross section are the only free parameters of the fit, the ρ(770) and the ω(782) parameters being fixed. The results of the fit to the data are shown in fig. 11. The φ mass value obtained from the fit is m φ = 1019.329 ± 0.011 MeV, which has to be compared with m φ = 1019.483 ± 0.011 ± 0.025 MeV, measured by CMD-2 at VEPP-2M [13]. For CMD-2 the absolute energy scale is accurately determined from beam induced resonant depolarization, so that we use the ratio m CMD φ /m KLOE φ = 1.00015 to correct our determination of the center of mass energy. This corresponds to a shift in the value of W of ∼150 keV. 3. Luminosity. -The tagging capability of the calorimeter trigger allowed us to perform fast luminosity measurements, providing a robust estimate of the DAΦNE luminosity with few per cent accuracy. This was achieved by counting Bhabha events in the acceptance of the calorimeter barrel. 
For this purpose, we selected events with two trigger sectors fired on the barrel, imposing a higher discrimination threshold than the one used in the standard acquisition chain. The tight angular selection, which reduces the available statistics by a factor of ∼ 6, was motivated by the need to keep under control the overwhelming machine background, which is concentrated at small angle. Additional cuts on the position of the fired sectors and on the time difference between the two sig- nals were added to improve rejection on the residual background, from cosmic-ray events. When operating at L=100 µb −1 /s, the luminosity measurement was updated every 15 s, with a statistical error of ∼ 3%. Fig. 12 top shows the monthly integrated luminosity by KLOE since January 2001, while in the bottom the year integrated luminosity is shown. These data have been collected almost exclusively at the φ peak. In early 2006, KLOE also collected ∼200 pb −1 of data at √ s = 1000 MeV, at which σ(e + e − → φ → π + π − π 0 ) drops to 5% of its peak value. Finally, it is important to note that already in year 2000 KLOE collected 20 pb −1 , producing quite a few publications which improved the knowledge in both hadronic and kaon physics [14,15,16,17,18]. A more accurate measurement of the integrated luminosity is performed offline [19], using Bhabha events in the polar angle range 55 • < θ < 125 • , the so-called VLAB events. The effective cross section for these events, ∼ 430 nb, is large enough to reduce the statistical error at a negligible level. The luminosity is obtained by counting the number of VLAB candidates, N VLAB , and normalizing it to the effective Bhabha cross section, σ MC VLAB , obtained from MC simulation, after subtraction of the background, δ bkg : The precision of the measurement depends on the correct inclusion of higher-order terms in computing the Bhabha cross section. For this purpose we use Babayaga, which include the QED radiative corrections in the framework of the parton-shower method. The quoted precision is 0.5%, although an updated version of this generator, Babayaga@NLO [20], has been released recently. The new predicted cross section decreased by 0.7% and the theoretical systematic uncertainty improved from 0.5% to 0.1%. The VLAB events are selected with requirements on variables that are well repro- duced by the KLOE MC. The acceptance cut on the electron and positron polar angle, 55 • < θ +,− < 125 • is based on the calorimeter clusters, while the momentum selection, p +,− > 400 MeV, is based on the DC determination. The background from µ + µ − (γ), π + π − (γ), and π + π − π 0 events is well below 1% and is subtracted. All selection efficiencies (trigger, cluster and track reconstruction) are greater than 99%, as obtained from MC and confirmed with data control samples. Finally, corrections are applied on a run-by-run basis to follow the small variations in the center-of-mass energy and in the detector calibration. In the end, we quote a 0.3% uncertainty in the determination of the acceptance due to experimental effects. -Kaon beams The neutral kaon pair from φ → K 0 K 0 is in a pure J P C = 1 −− state. Therefore the initial two-kaon state can be written, in the φ-rest frame, as where the identity holds even without assuming CP T invariance. Detection of a K S thus signals the presence of, "tags", a K L and vice versa. 
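For reference, the initial two-kaon state alluded to above is conventionally written as follows (a standard result reproduced here for convenience; normalization conventions for the K_S K_L basis vary slightly in the literature):

\[
|i\rangle \;=\; \frac{1}{\sqrt{2}}\Big[\,|K^0(\vec p)\rangle\,|\bar K^0(-\vec p)\rangle \;-\; |\bar K^0(\vec p)\rangle\,|K^0(-\vec p)\rangle\,\Big]
\;\simeq\; \frac{1}{\sqrt{2}}\Big[\,|K_S(\vec p)\rangle\,|K_L(-\vec p)\rangle \;-\; |K_L(\vec p)\rangle\,|K_S(-\vec p)\rangle\,\Big],
\]

where the second form holds up to an overall normalization factor close to unity.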
Thus at DAΦNE we have pure K S and K L beams of precisely known momenta (event by event) and flux, which can be used to measure absolute K S and K L branching ratios. In particular DAΦNE produces the only true pure K S beam and the only K L beam of known momentum. A K S beam permits studies of suppressed K S decays without overwhelming background from the K L component. A K L beam allows lifetime measurements. Similar arguments hold for K + and K − as well, although it is not hard to produce pure, monochromatic charged kaon beams. In the following sections we briefly discuss the criteria adopted to tag the K S , K L , and charged kaon beams. In KLOE, we call a K L interaction in the EMC a "K-crash", and we use it as a tag for the K S . Note that the above error on β(K L ) corresponds to an error on the kaon energy of ∼0.25 MeV or ∼1 MeV for its momentum, with just one event. Adding the information about the position of the energy release, the direction of the K L line of flight is determined with ∼ 1 • angular accuracy. Using the previous value of K L momentum, and the run-average value of the φ momentum as evaluated from Bhabha events, we determine the K S momentum from p S = p φ − p L , with the same accuracy as for K L . 2. K L beam. -All of the produced K S mesons decay within few centimeters from the interaction point. Therefore, in principle, all of them can be detected in the apparatus. On the contrary, K L 's can travel several meters before decaying, thus escaping detection. From the above, it follows that a pure sample of nearly monochromatic K L 's can be selected by identification of K S decays. The best performances in terms of efficiency and momentum resolution are obtained by selecting K S → π + π − decays. These are identified as two-track vertices close to the e + e − interaction point, with invariant mass in a 5 MeV acceptance window around the nominal kaon mass. The K S → π + π − decay provides also a good measurement of K L momentum, p L = p φ − p S , where p φ is again the φ momentum as determined from Bhabha scattering events. The resolution is ∼0.8 MeV on the momentum, while the K L flight direction is determined within ∼ 2 • . More interestingly, events with both a K S → π + π − decay and a K-crash identified allow us to measure the DAΦNE beam energy spread. This is extracted by comparing for each event the two independent determinations of center-of-mass energy that are evaluated using the K S and K L momenta, respectively. Using this technique we find a beam energy spread of ∼220 keV, in agreement with the machine model expectation. For many KLOE measurements, it is of fundamental importance to accurately determine the exact amount of lost/retained decays, or, in other words, determining the K L decay fiducial volume. For decays involving two charged particles this is obtained relatively easily by the observation of the vertex of the two related tracks in the drift chamber, measured with an accuracy of ∼ 3 mm. The task is more difficult for totally neutral decay channels, but can still be achieved using the excellent timing accuracy of the EMC. As an example, we can locate the K L →π 0 π 0 decay point, the "neutral vertex", with accuracies of O(2) cm, as illustrated in fig. 14, left. 
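A minimal sketch of how such a neutral vertex can be obtained from a single photon cluster is given below; the geometry and the time-of-flight relation it solves are spelled out in the next paragraph. The bisection solver, the search range and the names used here are illustrative assumptions, not the KLOE algorithm.

import math

C_CM_PER_NS = 29.9792458

def neutral_vertex_distance(t_gamma_ns, x_ip, u_k, beta_k, x_cluster,
                            l_max_cm=400.0, tol_ns=1e-3):
    """Solve t_gamma = L/(beta_K c) + |x_cl - x_D(L)|/c for L, with
    x_D(L) = x_ip + L*u_k on the (tagged, hence known) K_L line of flight."""
    def residual(l_cm):
        x_d = [x_ip[i] + l_cm * u_k[i] for i in range(3)]
        return l_cm / (beta_k * C_CM_PER_NS) + math.dist(x_d, x_cluster) / C_CM_PER_NS - t_gamma_ns
    lo, hi = 0.0, l_max_cm
    for _ in range(100):                   # residual is monotonic in L, so bisection works
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
        if abs(residual(mid)) < tol_ns:
            break
    return 0.5 * (lo + hi)

# Toy usage: K_L along +x with beta ~ 0.22, photon cluster on the barrel
print(neutral_vertex_distance(t_gamma_ns=20.0, x_ip=(0.0, 0.0, 0.0),
                              u_k=(1.0, 0.0, 0.0), beta_k=0.22,
                              x_cluster=(150.0, 190.0, 0.0)))   # ~88 cm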
Each photon from the K L decay point (D) defines a time-of-flight triangle: the first side is the segment from the interaction point to the K L decay vertex (ID); the second is the segment from the K L decay vertex to the centroid of the calorimeter cluster (DA); and the third is the segment from the interaction point to the cluster centroid (IA). The K L direction is initially known, because the K L decay is tagged. The photon vertex position is specified by the distance ID, which is determined from the relation t γ = ID/(β K c) + DA/c, where t γ is the cluster time and β K is the K L velocity; since D lies on the known K L line of flight, DA is itself a function of ID and the relation can be solved for ID. The position of the interaction point is obtained by backward extrapolation along the K S flight path. The K L decay vertex position is evaluated for each neutral cluster separately, and the energy-weighted average of the values for each cluster is then taken as its final determination. The accuracy in the location of the photon vertex has been studied using K L → π + π − π 0 decays, in which the decay position can be independently determined using clusters and tracks, with much greater precision in the latter case. The dependence of the position resolution on the decay distance (L K ) is illustrated in fig. 14, right.

4 . 3. K ± beam. -The φ-meson decays 50% of the time into K + K − pairs; these are quasi anti-collinear in the laboratory, due to the small crossing angle of the e + e − beams, with momentum ranging between 120 and 133 MeV/c and decay length λ ± ∼ 95 cm. While moving towards the DC volume, the average K ± momentum decreases to ∼100 MeV/c due to the energy loss in the beam pipe and DC inner wall materials. As for neutral kaons, the identification of a K ∓ decay tags a K ± beam with known flux, thus giving the possibility to measure absolute kaon branching ratios. Charged kaons are tagged using their two-body decays, K ± → µ ± ν and K ± → π ± π 0 , which together account for ∼85% of all decays. These decays are observed as "kinks" in a track originating at the IP. The charged kaon track must satisfy 70 < p K ± < 130 MeV/c. The momentum of the decay particle in the kaon rest frame, evaluated using the pion mass, p * , is 205 and 236 MeV for K ± decays to π ± π 0 and µ ± ν, respectively. Fig. 15 shows the p * distribution obtained assuming the decay particle is a pion. Cutting on the spectrum of the decay particle, we select about 1.5 × 10 6 K + K − events/pb −1 of integrated luminosity.

Fig. 15. -Momentum spectrum in the kaon rest frame of the negative charged decay particle, assuming the particle has the pion mass, for data (dots) and MC (lines). The distribution is normalized to unity. The two peaks correspond to pions and muons from K − → π − π 0 (205 MeV/c) and K − → µ − ν µ (236 MeV/c). The muon peak is broadened by the use of the incorrect mass.

5. -Kaon physical parameters and decays

Since the kaon's discovery about sixty years ago, knowledge of its properties, from masses and lifetimes to how often each species decays into which particles (the so-called branching ratios, BR), has been accumulated piecemeal over literally hundreds of individual experiments, performed at various laboratories all over the world. Each experiment has its own peculiarities, contingent upon the particle beam available and its equipment's acceptance range, which can result in measurements that are highly accurate from a statistical point of view (namely, based on thousands of data events) but carry hidden corrections peculiar to that experiment, perhaps unknown even to its authors.
The particle data group [21] in charge of compiling the results, often adopt a "democratic" way of weighing the various results according to their nominal errors and literally applying a scale factor to these errors when faced with two inconsistent measurements. This procedure skirts the problems of judging the validity of results obtained over time and sometimes ends up with a hodge-podge of inconsistent results forced into a mold of dubious scientific validity. The mission of KLOE is to measure most, if not all, of the properties of the kaon system to high accuracy, with a single detector simultaneously. One problem that consistently plagues the interpretation of older measurements is the lack of clarity about accounting for radiative contributions. All of our measurements of kaon decays with charged particles in the final state are fully inclusive of radiation. Radiation is automatically accounted for in the acceptance correction. All our MC generators incorporate radiation as described in [22]. Because of the availability of tagged and pure kaon beams of known momenta, KLOE is the only experiment that can at once measure the complete set of experimental inputs, branching ratios, lifetimes and form factor parameters for both charged kaons and long lived neutral kaons. In addition KLOE is the only experiment that can measure K S branching ratios at the sub-percent level. 1. K 0 Mass. -Kaon masses are about 500 MeV, very close to one half the φmeson mass, known to an accuracy of 0.019/1019.460 ∼2 × 10 −5 by the use of the g-2 depolarizing resonances, [21]. The events φ → K S K L offer a unique possibility to obtain a precise value of the neutral kaon mass. Indeed we observe that if the φ-meson is at rest the kaon mass can be extracted from the kaon momentum using the relation: Since p K ≃ 110 MeV, measuring it at 1% level, well within the KLOE capability, results in a measurement of the K 0 mass better than 0.1%. 50,000 events are enough to reach a statistical accuracy of about 1 keV. φ mesons are produced with a momentum along the x axis, p φ = 12.5 MeV at DAΦNE. From the measured momenta of the two pions from K S → π + π − , we measure the K S momentum. The K L momentum is given by p KL = p φ − p KS , where p φ is the average φ momentum measured with Bhabha events collected in the same runs. The center of mass energy of the K S K L pair (W KK ) is related to the kaon mass m K , according to: On the other hand the collision center of mass energy W is computed from Bhabha events, as described in sect. 3. Corrections due to ISR have to be taken into account when relating W to W KK . The correction function f K (W ) has been evaluated using a full detector simulation where the radiation from both beams has been implemented, and W KK is reconstructed as in the data. The expression of the radiator function has been taken from [23], including O(α 2 ) corrections. The correction |1 − f K (W )| is very small below the resonance, corresponding to a shift in W KK of 40 keV. For W above the φ mass, W KK increases up to 100 keV. The neutral kaon mass is then obtained solving the equation: The single event mass resolution is about 430 keV. Contributions to the mass resolution are: about 370 keV from experimental resolution, about 220 keV from beam energy spread, as measured by KLOE in agreement with machine theory, and about 100 keV from ISR. The systematic error due to the momentum miscalibration has been evaluated by changing the momentum scale in computing the pion momenta. 
A momentum miscalibration, δp/p , translates to a miscalibration on the mass δm K /m K = 0.06δp/p, in agreement with the above qualitative calculation. The momentum scale is obtained by using several processes covering a wide momentum range, from 50 to 500 MeV (K L → π + π − π 0 , K L → πℓν, φ → π + π − π 0 ). We obtain a fractional accuracy below 2 × 10 −4 , in agreement with the estimate obtained using Bhabha's [8], which results in a systematic error δm K of 6 keV. The systematic error coming from theoretical uncertainty on the radiator function has been evaluated considering the contribution from higher order terms in α. The correction function f K (W ) has been evaluated by excluding the constant term in the O(α 2 ). The corresponding change in f K (W ) is 1.3 × 10 −5 , corresponding to a variation on m K of 7 keV. Further checks have been made by using the function given in [24]: no significant differences were observed. Additional systematics come from the dependence of the measured mass from the value of W . They have been evaluated by comparing the mass values obtained with data collected at W < 1020 MeV and W > 1021 MeV, where the value of f K (W ) is more than a factor two larger. The difference between the two mass values is m K (W < 1020) − m K (W > 1021) = 9 ± 10 keV, consistent with zero. Other sources of systematics are due to the uncertainties on the W calibration, i.e., the statistic and systematic error on m CMD−2 φ and on m φ obtained from our fit, discussed in sect. 3 . 2. The total contribution from these sources amounts to a mass uncertainty of 15 keV. Systematic uncertainties are treated as uncorrelated. The result is [25]: -Precision measurements of the lifetime of neutral kaons, especially in presence of many multibody decay modes, is particularly difficult since it is in general not possible to prepare monochromatic beams of neutral particles nor to stop them. KLOE enjoys the availability of such monochromatic beams. While the K S lifetime is very well known, (0.8958 ± 0.0005) × 10 −10 s [21], this is not the case for the K L lifetime, τ L . A total of approximately 13 million tagged K L decays are used for the measurement of the four major K L BRs, as discussed in sect. 5 . 5.1. Since the geometrical efficiency for the detection of K L decays depends on τ L , so do the values of the four BRs: where BR (0) is the value of the branching ratio evaluated for τ (0) L , an assumed value for τ L . The four relations defined by eq. 7, together with the condition that the sum of all K L BRs must equal unity, allow determination of the K L lifetime and the four BR values independent of τ L . This approach, followed in [26], gives τ L = 50.72 ± 0.11 stat ± 0.13 syst−stat ± 0.33 syst ns. In addition, the lifetime can be measured by observing the time dependence of the decay frequency to a single mode. Because of tagging, the lifetime can also be obtained by simply counting the total number of decays in a given time interval from a known K L sample. This second, independent input comes from our separate analysis of the proper decay-time distribution for K L → 3π 0 events, for which the reconstruction efficiency is high and uniform over a fiducial volume of ∼0.4 λ L . The position of the K L decay position is determined from the photon arrival times, with the technique described in sect. 4 . 2. The K L proper decay-time, t * , is obtained from the decay length, l K , divided by the Lorentz factor βγc = p KL /m K . 
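As a quick numerical illustration of the proper-time determination just described, t* = l_K/(βγc) with βγ = p_KL/m_K; the decay length and momentum below are illustrative, typical values for a φ-factory K L.

M_K0_MEV    = 497.6
C_CM_PER_NS = 29.9792458

def proper_time_ns(decay_length_cm, p_kl_mev):
    beta_gamma = p_kl_mev / M_K0_MEV
    return decay_length_cm / (beta_gamma * C_CM_PER_NS)

print(proper_time_ns(decay_length_cm=100.0, p_kl_mev=110.0))   # ~15 ns, inside the 6-24.8 ns fit window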
The K L momentum is evaluated event-by-event using the information from the K S → π + π − tagging decay. About 8.5 million decays are observed within the proper-time interval from 6.0 to 24.8 ns. We fit the distribution in this interval ( fig. 16), and obtain τ L = 50.92 ± 0.17 stat ± 0.25 syst ns [27]. The systematic error of 0.49% is presently dominated by the uncertainty on the dependence of the tagging efficiency on l K , and by background subtraction. This latter measurement is included together with the results for the four main K L BRs and the K L lifetime in a fit to determine the K L BRs and lifetime. We also use our measurements of BR(K L → π + π − )/BR(K L µ3) and BR(K L → γγ)/BR(K L → 3π 0 ), reported in sections 5 . 5.4 and 5 . 5.5, requiring that the seven largest K L BRs add to unity. The only non-KLOE input to the fit is the 2006 PDG ETAFIT result BR(K L → π 0 π 0 )/BR(K L → π + π − ) = 0.4391 ± 0.0013, based on relative amplitude measurements for K → ππ. The results of the fit are presented in table II; the fit gives χ 2 /ndf = 0.19/1 (CL = 66%). The BRs for the K Le3 and K Lµ3 decays are determined to within 0.4% and 0.5%, respectively. 3. K ± lifetime. -The measurements of the K ± lifetime listed in the PDG compilation, [21], exhibit poor consistency. The PDG fit has a confidence level of 1.5 × 10 −3 and the error on the recommended value is enlarged by a scale factor of 2.1. Parameter Value Correlation coefficients KLOE has measured the decay time for charged kaons in two ways both using events tagged by K ± µ2 decays [28]. The first method was to obtain the proper time from the kaon path length in the DC. The second method was based on the precise measurement of the kaon decay time from the arrival times of the photons from kaon decays with a π 0 in the final state. These two methods reach comparable accuracy and allow us to crosscheck systematics. The method relying on the measurement of the charged kaon decay length requires the reconstruction of the kaon decay vertex using DC information only. The signal is given by a K ± , moving outwards from the IP in the DC with momentum 70 < p K ± < 130 MeV/c. The kaon decay vertex (V) has to be in the DC volume defined by 40 < x 2 V + y 2 V < 150 cm, |z V | < 150 cm. Once the decay vertex has been identified, the kaon track is extrapolated backwards to the IP into 2 mm steps, taking into account the ionization energy loss dE/dx to evaluate its velocity βc. Then the kaon proper time The reconstruction and selection efficiency of the decay vertex has been evaluated directly from data. The control sample is selected using calorimetric information only: neutral vertices are identified looking for two clusters in time fired by the photons coming from the π 0 → γγ decay. The proper time distribution is fit between 16 and 30 ns correcting for the measured efficiency. Resolution effects are taken into account in the convolution with the exponential decay function used to fit the t * distribution. The result we have obtained, which is the weighted mean between the K + and the K − lifetimes, is τ ± = (12.364 ± 0.031 stat ± 0.031 syst ) ns. The fit window of this method covers 1.1 τ ± . The second method relies on the measurement of the kaon decay time and uses charged kaon decays with a π 0 in the final state, K ± → Xπ 0 . In these decays the kaon time of flight is obtained from the time of the EMC clusters of the photons from the π 0 decay. We require the backward extrapolation of the tagging kaon track to the IP. 
Then, exploiting the kinematic closure of the φ → K + K − decay, knowing the momenta of the φ and of the K + (K − ) we can build the expected helicoidal trajectory of the K − (K + ) on the signal side (virtual helix). Stepping along this helix we look for the π 0 → γγ decay. For each photon it is then possible to measure the kaon proper time t * using t * = (t γ − r γ /c − T 0 ) √(1 − β K ²), with t γ the time of the cluster associated to the photon, r γ the distance between V 0 , the decay point along the virtual helix, and the position in the EMC of the photon cluster, T 0 the production time of the φ and β K the velocity of the signal kaon. The efficiency has been evaluated directly from data. The control sample has been selected using DC information only, selecting the kaon decay vertex in the volume defined above. The proper time is fit between 13 and 42 ns, about 2.3 τ ± , correcting for the efficiency and using the convolution of an exponential decay function and the resolution function. The weighted mean between the K + and the K − lifetimes gives the result τ ± = (12.337 ± 0.030 stat ± 0.020 syst ) ns. Taking into account a statistical correlation of ρ = 30.7%, a weighted mean between the two charges and the two methods is then computed; both fits are shown in fig. 17. The comparison of the K + and K − lifetimes is a test of CPT invariance, which guarantees the equality of the decay lifetimes of particle and antiparticle. The averages of the two methods are τ + = (12.325 ± 0.038) ns and τ − = (12.374 ± 0.040) ns. From these measurements we obtain τ − /τ + = 1.004 ± 0.004. The result agrees with CPT invariance at the 4 per mil level.

5 . 4. K S decays. -As already mentioned, central to K S studies is the use of the K-crash tag (sect. 4 . 1), thanks to which it has been possible to measure K S branching ratios whose values span six orders of magnitude.

5 . 4.1. K S →π + π − (γ), π 0 π 0 . The K S decays overwhelmingly (99.9%) into two pions: π 0 π 0 and π + π − . The ratio of the charged decay mode to the neutral one, R π S ≡ Γ(K S → π + π − (γ))/Γ(K S → π 0 π 0 ), is a fundamental parameter of the K S meson. First of all, it provides the BRs for K S → π 0 π 0 and K S → π + π − with only small corrections. The latter BR is a convenient normalization for the BRs of all other K S decays to charged particles. From R π S one can also derive phenomenological parameters of the kaon system, such as the differences in magnitude and phase of the isospin I = 0, 2 ππ scattering amplitudes. Finally, this ratio, together with the corresponding one for the K L , determines the amount of direct CP violation in K → ππ transitions. Prior to KLOE, the fractional error on R π S was 1.2%, the result of an average of various measurements, each of ∼5% accuracy. This averaging was somewhat questionable, since the various experiments did not clearly describe their procedure for handling radiative events. During the last six years we performed two precise measurements of R π S with increasing systematic accuracy, and with proper treatment of radiative corrections. The final combined value has a precision of three parts per mil [29]. Given the tag, K S → π + π − (γ) events are selected by requiring the presence of two tracks of opposite charge with their point of closest approach to the origin inside a small fiducial volume around the IP, and with momentum in the range 120 < p < 300 MeV. K S → π 0 π 0 events are identified by the prompt photon clusters from the π 0 decays.
A calorimeter cluster is considered as a prompt photon if its velocity, evaluated by ToF, is compatible with β = 1 within the expected resolution. Moreover, it must not be associated to any track. To accept a K S → π 0 π 0 event, three or more prompt photons are required, with a minimum energy of 20 MeV, to reduce contamination from machine background. To reach a high precision, all of the possible systematic effects have been studied in great detail, making use of several independent data control samples. Systematic errors can arise from imperfections in the detector simulation, limitations in the methods used to evaluate the detector efficiencies, and uncertainties in the cross sections and branching ratios used to estimate the fraction of background events. Moreover, a small difference in the tagging efficiency between charged and neutral decays, due to the different time response of the two categories of events, has been taken into account. Actually, the e + e − interaction time, the so called T 0 , is obtained from the fastest particle reaching the calorimeter, assuming a velocity β = 1 and a straight flight path starting from the interaction point (sect. 2 . 2). This assumption is correct for photons coming from K S →π 0 π 0 events, while most of the pions from their charged counterpart arrive ∼3 ns later: therefore their T 0 is delayed by one RF period, ∼2.7 ns. As a consequence of this, the velocity of the K L crash clusters is overestimated by ∼10% for K S →π + π − events. Another subtle effect that has to be taken into consideration, is the different energy response of the EmC to pions and photons. This, in turn, causes a small difference in the trigger efficiencies for the two categories of events (∼98.5% for π + π − , ∼100% for π 0 π 0 ), which has to be properly taken into account. We obtain R π = 2.2549 ± 0.0054. Using this result, and the measurement of Γ(K S → πeν)/Γ(K S → π + π − (γ)) discussed in 5.4.2, we extract the dominant K S branching ratios (see table III). To this end, we exploit unitarity: the sum of the BR's for the ππ and πlν modes has been assumed to be equal to one, the remaining decays accounting for less than 10 −4 . To extract the value of the phase shift difference δ 0 − δ 2 between I = 0 and I = 2 amplitudes we use the method described in reference [30]. In the isospin limit, we find: Effects of isospin breaking increase this value by ∼ 10 • , although the correction is determined with a large error (see Ref. [30]). 5 . 4.2. K S → πeν(γ). The K S decays semileptonically less than one per cent of the time. To pick out such decays where the event contains an unseen neutrino is nontrivial. Yet, KLOE has isolated a very pure sample of ∼13,000 semileptonic K S decays and accurately measured the BRs for K S → π + e −ν (γ) and K S → π − e + ν(γ) [31]. The basic steps in the analysis are: tag K S decays by the K L crash, apply a cut on the ππ invariant mass (this removes 95% of the K S → π + π − decays and reduces the background-to-signal ratio to about 80:1), impose several geometrical cuts to further improve the purity of the sample and, in particular, remove contamination by events with early π → µν decays. Finally, stringent requirements are imposed on the particle's ToF, which very effectively separates electrons from pions and muons and allows the charge of the final state to be assigned. Fig. 
18 shows the signal peak and the residual background in the distribution of ∆ Ep = E miss − |p miss | for the π − e + ν channel, where E miss and p miss are respectively the missing energy and momentum at the vertex, evaluated in the signal hypothesis. For signal events, the missing particle is a neutrino and ∆ Ep = 0. The numbers of πeν decays for each charge state are normalized to the number of observed π + π − events, resulting in the ratios in the first column of table III. Using the result for R π of the previous section (also in table III), the absolute BRs for K S → ππ and K S → πeν reported in the second column of the table are obtained. These ratios give also the first measurement of the semileptonic charge asymmetry for the K S : The comparison of A S with the asymmetry A L for K L decays allows test of the CP and CP T symmetries to be performed. Assuming CP T invariance, A S = A L ∼ 2 Re ǫ ≃ 3 × 10 −3 , where ǫ gives the CP impurity of the K S , K L mass eigenstates due to CP violation in ∆S=2 transitions (for an exact definition of ǫ see sect. 8 . 1). To evaluate A S and A L without making any assumptions on CP T symmetry, we consider the standard decomposition of the semileptonic amplitudes [32]: where x + (x − ) describes the violation of the ∆S = ∆Q rule in CP T conserving (violating) decay amplitudes, and y parametrizes CP T violation for ∆S = ∆Q transitions. The difference between the charge asymmetries, signals CP T violation either in the mass matrix (δ term, see sect. 8 . 1) or in the decay amplitudes with ∆S = ∆Q (Rex − term). The sum of the asymmetries, is related to CP violation in the mass matrix (ǫ term) and to CP T violation in the ∆S = ∆Q decay amplitude (y term). Using A L = (3.34 ± 0.07) × 10 −3 [21] and our A S measurement (eq. 10), we obtain from eq. 12 Current knowledge of these two parameters is dominated by results from CPLEAR [33]: the error on Re δ is 3 × 10 −4 and that on Re x − is 10 −2 . Using Re δ = (3.0 ± 3.3 stat ± 0.6 syst ) × 10 −4 from CPLEAR, we obtain: thus improving on the error of Re x − by a factor of four. 5 . 4.3. K S →γγ. A precise measurement of the K S → γγ decay rate is an important test of Chiral Perturbation Theory (ChPT) predictions. The decay amplitude of K S → γγ has been evaluated at leading order of ChPT [35], O(p 4 ), providing a precise estimate of BR(K S → γγ) = 2.1 × 10 −6 , with 3% uncertainty. This estimate is ∼ 30% lower with respect to the latest determination from NA48 [36], thus suggesting relevant contributions from higher order corrections. We measured the K S → γγ rate using 1.9 fb −1 of integrated luminosity. A MC background sample of comparable statistics and a large MC signal sample (equivalent to ∼ 50 fb −1 of collisions) have been used in the analysis. After K-crash tag, we select ∼ 700 million events, out of which ∼ 1900 are K S → γγ decays. The signal is selected by requiring exactly two prompt photons, with an efficiency of ∼ 83%. After photon counting, the background composition is dominated by K S → 2π 0 , with two photons undetected by the EMC. This background is strongly reduced, with negligible signal loss, by vetoing the events with photons reaching the QCAL in a 5 ns coincidence window with respect to the event-T 0 . At the end of this selection, we are left with 157 × 10 3 events and a signal over background ratio S/B=1/80. 
Further background reduction is obtained by performing a kinematic fit to the event in the signal hypothesis, and using as constraints the total 4-momentum conservation, the kaon mass and the photon ToF. We retain events with χ 2 < 20, reaching S/B∼ 1/3 with an efficiency on signal of ∼ 63%. After this cut we count the signal events by fitting the 2-D distribution of the photon invariant mass, M γγ , and of the photon opening angle in the K S center of mass, θ * γγ , and using MC signal and background shapes. We count N (γγ) = 711 ± 35 signal events out of 2740 events, with χ 2 /dof = 854/826, corresponding to a probability P(χ 2 ) ∼ 24%. In fig. 19, the observed distributions of cos θ * γγ and M γγ are compared with the simulated signal and background shapes as weighted by the fit procedure. To get the BR(K S → γγ), signal events are normalized to the number of K S → 2π 0 decays observed in the same data sample. These are selected, after K-crash tag, by requiring three to five prompt photons. An extensive study on systematics effects, which affect both the efficiency correction and the signal counting, has been carried out. As an example, the MC efficiency of the prompt photon selection has been corrected for possible differences in the photon detection efficiency between data and MC, and a systematic error has been evaluated which accounts for residual imperfections. Moreover, a control sample of K L → γγ events decaying close to the IP has been selected, which has been used to check systematics from the kinematic fit selection and from data-MC differences in the EMC energy scale. Systematic errors on signal counting have been evaluated by: (i) performing a set of fits to the M γγ − θ * γγ distribution in regions with an enhanced signal content, to evaluate the error induced by the uncertainty in the background shape, (ii) repeating the fit with different bin-size and applying an energy scale correction a factor of two greater with respect to that measured with the K L → γγ control sample. Finally, we obtain [37] BR which differs by 3σ from the previous best determination. Our result is also consistent with O(p 4 ) ChPT prediction. The parameter η 000 , defined as the ratio of K S to K L decay amplitudes, can be written as where ǫ quantifies the K S CP impurity and ǫ ′ 000 is due to a direct CP -violating term. Since we expect ǫ ′ 000 << ǫ [38], it follows that η 000 ∼ ǫ. In the Standard Model (SM), therefore, BR(K S → 3π 0 ) ∼ 1.9 × 10 −9 to an accuracy of a few %, making the direct observation of this decay quite a challenge. We performed a search for the K S → 3π 0 decay using 450 pb −1 of integrated luminosity. A MC background sample ∼ 2.5 times larger than data and a high statistics MC signal sample have been used in this analysis as well. After K-crash tag, we select the signal by requiring six prompt photon clusters, and no tracks from the IP, which is useful to reject background from K S → π + π − with an early K L → 3π 0 decay. At this stage of the analysis, the residual background is dominated by K S → π 0 π 0 events with two spurious clusters from shower fragments (splitting) or accidental coincidence with clusters produced by machine activity. Further background rejection is obtained by performing a kinematic fit to the event in the signal hypothesis, and cutting at After the previous rejection cuts, the signal search is performed by defining two χ 2like discriminating variables, ζ 3 − ζ 2 , optimized for K S → 3π 0 and K S → 2π 0 selection, respectively. 
ζ 2 is defined by selecting the four out of six photons which provide the best agreement in terms of kinematic variables with the K S → 2π 0 hypothesis. ζ 3 is built from the quantities ∆m i = m i − m π 0 , i.e. the differences between the invariant mass of the i-th photon pair, chosen among the six clusters of the event, and the nominal π 0 mass. ζ 3 is close to zero for a K S → 3π 0 event, and is expected to be large for a six-photon background event. A signal box is defined in the ζ 3 − ζ 2 plane, while side-bands are used to check the MC prediction for the background. At the end of the analysis chain we observe two candidates in the signal box, with an expected background of 3.1 ± 0.8 ± 0.4. The signal selection efficiency, after tagging, is ∼ 24%. To obtain an upper limit for BR(K S → 3π 0 ), we normalize the number of signal candidates to the K S → π 0 π 0 decays observed in the same data sample. For this purpose, we select events with three to five prompt photon clusters. Finally, we obtain [39] an upper limit on BR(K S → 3π 0 ) which improves on the best previous limit [40] by a factor of ∼ 6. This limit on the BR can be directly translated into a limit on |η 000 |, which can be visualized in the complex plane as a circle of radius 0.018 centered at zero in the Re(η 000 ), Im(η 000 ) plane; this improves on the result of [40] by a factor of 2.5. 5 . 4.5. K S →e + e − . This decay is a flavor-changing neutral current process, suppressed in the SM, with an amplitude dominated by the two-photon intermediate state. Using ChPT to O(p 4 ), one obtains the prediction BR(K S →e + e − ) ∼2×10 −14 [41]. A value significantly higher could indicate new physics. Prior to KLOE the best experimental limit on this decay was set by CPLEAR, BR(K S →e + e − ) ≤ 1.4 × 10 −7 at 90% CL [42]. We performed a search for the K S → e + e − decay using 1.9 fb −1 of integrated luminosity. A MC background sample of comparable statistics and a large MC signal sample have also been used in the analysis. After K-crash tag, we search for the signal by requiring two tracks of opposite charge originating near the IP. The two tracks are required to have an invariant mass M ee , evaluated in the electron hypothesis, in a ∼ 20 MeV window around the nominal K S mass.( 1 ) This cut is particularly effective on K S → π + π − events, which peak at M ee ∼ 409 MeV. After this cut, the background is dominated by K S → π + π − with at least one pion wrongly reconstructed, and by φ → π + π − π 0 events. The K S → π + π − background is strongly reduced by cutting on the track momentum in the K S rest frame, which is expected to be ∼ 206 MeV for K S → π + π − decays. To reject φ → π + π − π 0 events we use instead | p miss | = | p φ − p KS − p KL |, where the K S and K L momenta are evaluated from the charged tracks and from the K-crash tag, respectively. The value of | p miss | peaks at zero for the signal, within the experimental resolution of a few MeV. It spreads towards higher values for φ → π + π − π 0 events. The residual background, both from K S → π + π − and φ → π + π − π 0 decays, is rejected by identifying the two electrons by ToF and by using the properties of the associated calorimetric clusters. The reliability of the MC background simulation is checked after each step of the selection on the invariant mass sidebands. At the end of the analysis chain, we find no events in the signal box; the background estimate is also compatible with zero.
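As an illustration of the photon-pairing logic behind the ζ 3 variable introduced above for the K S → 3π 0 search, the sketch below groups six photon clusters into three π 0 candidates and returns the χ 2 -like agreement of the best pairing with the π 0 mass hypothesis. The mass resolution and the exact definition used in the KLOE analysis are not reproduced; the σ value and the normalisation here are assumptions of this sketch.

```python
import math

M_PI0 = 134.977   # MeV, neutral pion mass
SIGMA = 10.0      # MeV, assumed pair-mass resolution (placeholder, not the KLOE value)

def pair_mass(c1, c2):
    """Invariant mass of two photon clusters (E, x, y, z), photons assumed to come from the origin."""
    def four_momentum(c):
        e, x, y, z = c
        r = math.sqrt(x * x + y * y + z * z)
        return (e, e * x / r, e * y / r, e * z / r)
    e1, *p1 = four_momentum(c1)
    e2, *p2 = four_momentum(c2)
    m2 = (e1 + e2) ** 2 - sum((a + b) ** 2 for a, b in zip(p1, p2))
    return math.sqrt(max(m2, 0.0))

def partitions_into_pairs(idx):
    """All ways to split an even-length index list into unordered pairs (15 ways for 6 photons)."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for i, _ in enumerate(rest):
        for tail in partitions_into_pairs(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def zeta3_like(clusters):
    """Best (smallest) chi2-like agreement of a 3-pair assignment with the pi0 mass hypothesis."""
    best = None
    for pairing in partitions_into_pairs(list(range(len(clusters)))):
        chi2 = sum(((pair_mass(clusters[i], clusters[j]) - M_PI0) / SIGMA) ** 2
                   for i, j in pairing)
        best = chi2 if best is None else min(best, chi2)
    return best
```

For a genuine six-photon K S → 3π 0 event the returned value is small, while six-photon backgrounds (e.g. 2π 0 plus spurious clusters) typically give much larger values, which is the behaviour described in the text.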
The upper limit on the number of signal events is therefore N ee = 2.3 at 90% CL. The signal selection efficiency, after tagging, is ∼ 47%. Such performance, i.e. an exceptional background rejection (> 10 8 ) with an acceptable signal efficiency, has been achieved largely thanks to the very good momentum resolution of our DC. To obtain an upper limit for BR(K S → e + e − ), we normalize N ee to the K S → π + π − decays observed in the same data sample. For this purpose, we used the same selection criteria as for the measurement of BR(K S → π + π − ), described in sect. 5 . 4.1. Finally, we obtain as a preliminary result [43] BR(K S → e + e − ) ≤ 9.3 × 10 −9 at 90% CL, which represents a factor of ∼ 15 improvement with respect to the best previous limit. 5 . 5. K L Decays. -A measurement of the absolute K L BRs is a unique possibility offered by the φ-factory, where a pure sample of nearly monochromatic K L 's can be selected by identification of a simultaneous K S → π + π − decay. The absolute BRs can be determined by counting the fraction of K L 's that decay into each channel, and correcting for acceptances, reconstruction efficiencies, and background. More specifically, each K L BR is obtained from the ratio of N f , the number of decays to f identified in the FV after background subtraction, to N tag , the number of tagged K L 's, corrected for ǫ FV , the fraction of decays occurring in the FV, for ǫ rec , the reconstruction efficiency for channel f , and for the tag bias, i.e. the ratio of the K S → π + π − reconstruction efficiencies evaluated for K L → f events and independently of the K L fate. We use a FV well within the DC, which gives an efficiency ǫ FV ∼ 26%. Losses of K L 's from interactions in the beam pipe and chamber walls are taken into account in the evaluation of ǫ FV , as well as its dependence on the K L lifetime (see sect. 5 . 2). 5 . 5.1. K L semileptonic and 3 pion decays. The four major decay modes, π ± e ∓ ν (K e3 ), π ± µ ∓ ν (K µ3 ), π + π − π 0 , and 3π 0 , account for more than 99.5% of all decays, and are measured simultaneously from a sample of 13 million tagged K L decays. For the first three modes, two tracks are observed in the DC, whereas for the 3π 0 mode, only photons appear in the final state. The analysis of two-track and all-neutral-particle events is therefore different. Two-track events are assigned to the three channels of interest by use of a single variable: the smaller absolute value of the two possible values of ∆ µπ = |p miss | − E miss , where p miss and E miss are the missing momentum and energy in the K L decay, respectively, evaluated assuming the decay particles are a pion and a muon. Fig. 20, left, shows an example of a ∆ µπ distribution. We obtain the numbers of K e3 , K µ3 , and π + π − π 0 decays by fitting the ∆ µπ distribution with the corresponding MC-predicted shapes. The signal extraction procedure is tested using PID variables from the calorimeter. This is illustrated in fig. 20, right, where the ∆ eπ spectrum is shown for events with identified electrons, together with the results of a fit with MC shapes. The K e3(γ) radiative tail is clearly evident. The inclusion of radiative processes in the simulation is necessary to obtain a good fit, as well as to properly estimate the fully inclusive radiative rates. The reconstruction efficiency is ∼ 54% for K e3 , ∼ 52% for K µ3 , and ∼ 38% for π + π − π 0 , as evaluated from MC. These efficiencies are then corrected to account for MC imperfections in reproducing the tracking efficiency.
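The tag-and-count procedure for the K L BRs described above can be written compactly. The sketch below assumes that the tag bias enters as a simple multiplicative ratio dividing the raw yield; this placement is an assumption of the sketch, made only to illustrate the role of the quantities defined in the text, and the exact expression used in the analysis is the one of [26].

```python
def kl_branching_ratio(n_f, n_tag, eff_fv, eff_rec, tag_bias):
    """
    Tag-and-count estimate of BR(KL -> f), using the quantities defined in the text:
      n_f      : background-subtracted number of KL -> f decays found in the fiducial volume (FV)
      n_tag    : number of KL mesons tagged by a reconstructed KS -> pi+ pi- decay
      eff_fv   : fraction of KL decays occurring inside the FV
      eff_rec  : reconstruction efficiency for channel f
      tag_bias : ratio of the KS -> pi+ pi- tagging efficiency for KL -> f events to that
                 for a generic KL (sketch assumption: it enters as a simple divisor)
    """
    return n_f / (n_tag * eff_fv * eff_rec * tag_bias)

# Illustrative numbers only (not KLOE results): 1e6 tagged KL, 26% FV acceptance,
# 54% reconstruction efficiency, tag bias close to unity.
print(kl_branching_ratio(n_f=56_000, n_tag=1_000_000, eff_fv=0.26,
                         eff_rec=0.54, tag_bias=1.0))
```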
The correction factors range between 0.99 and 1.03, depending on the decay channel and on the data-taking period. The decay K L → 3π 0 is easier to identify. Detection of ≥ 3 photons originating at the same point along the K L flight path is accomplished with the technique described in sect. 4 . The tag bias is mostly due to a dependence of the calorimeter trigger efficiency on the K L final state. Additional effects are introduced by the possible overlap between K S and K L charged products, which undermines the K S reconstruction performance for K L decaying close to the IP. Both effects are reduced by additional requirements on the tag selection, which give an overall tagging efficiency of ∼ 9%. Our measured BR values for the four modes [26], corresponding to a fixed reference value of the K L lifetime, do not include the contribution from the uncertainty on the K L lifetime itself, which enters into the calculation of the FV efficiency. This dependence has been used in sect. 5 . 2 to perform a combined fit to BRs and lifetime, including also the τ L measurement from the proper decay time distribution of K L → 3π 0 decays. The results of this fit have been given in table II. 5 . 5.2. K ℓ3 form factors. The semileptonic K ℓ3 decay provides the best means for the measurement of |V us |, because only the vector part of the weak current contributes to the matrix element ⟨π |J α | K⟩. In general, this matrix element can be decomposed as f + (t) (P + p) α + f − (t) (P − p) α , where P and p are the kaon and pion four-momenta, respectively, and t = (P − p) 2 . The form factors (FF) f + and f − appear because pions and kaons are not point-like particles, and also reflect both SU(2) and SU(3) breaking. Introducing the scalar FF f 0 (t), the matrix element above can be rewritten in terms of f + (t) and f 0 (t). The f + and f 0 FFs must have the same value at t = 0. We have therefore factored out a term f + (0). The reduced FFs f̃ + (t) and f̃ 0 (t) are both unity at t = 0. For vector transitions, the Ademollo-Gatto theorem [44] ensures that SU(3) breaking appears only to second order in m s − m u,d . In fact, f + (0) differs from unity by only ∼4%. The behavior of the reduced FFs f̃ + (t) and f̃ 0 (t) as a function of t can be measured from the decay spectra. At the level of accuracy reached by the present experiments, the parameterization used to extract the FF slopes is a relevant issue. If the FFs are expanded in powers of t up to t 2 , i.e. f̃ +,0 (t) = 1 + λ ′ +,0 t/m 2 π + (λ ′′ +,0 /2)(t/m 2 π ) 2 , four parameters (λ ′ + , λ ′′ + , λ ′ 0 , and λ ′′ 0 ) need to be determined from the decay spectrum in order to be able to compute the phase space integral. However, this parametrization of the FFs is problematic, because the values for the λs obtained from fits to the experimental decay spectrum are strongly correlated, as discussed in [45]. In particular, the correlation between λ ′ 0 and λ ′′ 0 is −99.96%; that between λ ′ + and λ ′′ + is −97.6%. It is therefore impossible to obtain meaningful results using this parameterization. Form factors can also be described by a pole form, f̃ +,0 (t) = M 2 V,S /(M 2 V,S − t), which expands to 1 + t/M 2 V,S + (t/M 2 V,S ) 2 , neglecting powers of t greater than 2. It is not clear however what vector (V) and scalar (S) states should be used. Recent K e3 measurements [46,47,48] show that the vector FF is dominated by the closest vector (q q̄) state with one strange and one light quark (or Kπ resonance, in an older language). The pole-fit results are also consistent with predictions from a dispersive approach [49,50]. We will therefore use a parametrization for the vector FF based on a dispersion relation twice subtracted at t = 0 [49], in which the t dependence is driven by a function H(t) obtained using K − π scattering data. A polynomial approximation to eq. 26 can be used, with coefficients p 2 and p 3 as given in table V.
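The quadratic expansion and the pole form of the reduced vector form factor discussed above can be compared numerically. The snippet below implements both shapes as quoted in the text; the slope values used here are placeholders chosen to be of the order of those obtained from the K e3 fits discussed below, not the fitted KLOE numbers with their uncertainties.

```python
M_PI = 139.57   # MeV, charged pion mass
M_K = 493.68    # MeV, charged kaon mass

def f_quadratic(t, lam1, lam2):
    """Quadratic expansion: 1 + lam' t/m_pi^2 + (lam''/2) (t/m_pi^2)^2."""
    x = t / M_PI**2
    return 1.0 + lam1 * x + 0.5 * lam2 * x**2

def f_pole(t, m_v):
    """Pole form M_V^2 / (M_V^2 - t), which expands to 1 + t/M_V^2 + (t/M_V^2)^2 + ..."""
    return m_v**2 / (m_v**2 - t)

# Compare the two shapes over the physical K_e3 range 0 < t < (m_K - m_pi)^2,
# with placeholder slopes of the expected order of magnitude.
t_max = (M_K - M_PI)**2
for i in range(6):
    t = t_max * i / 5
    print(f"t = {t:9.0f} MeV^2   quadratic = {f_quadratic(t, 25e-3, 1.5e-3):.4f}"
          f"   pole = {f_pole(t, 870.0):.4f}")
```

With slopes of this size the two parameterizations agree at the per-mil level over the physical region, which is why the corresponding phase space integrals quoted in the text differ by only ∼0.1%.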
The pion spectrum in K µ3 decay has also been measured recently [46,51,52]. As discussed in [45], there is no sensitivity to λ ′′ 0 . All authors have fitted their data using a linear scalar FF:f Because of the strong correlation between λ ′ 0 and λ ′′ 0 , use of the linear rather than the quadratic parameterization gives a value for λ 0 that is greater than λ ′ 0 by an amount equal to about 3.5 times the value of λ ′′ 0 . To clarify this situation, it is necessary to obtain a form forf 0 (t) with at least t and t 2 terms but with only one parameter. The Callan-Treiman relation [53] fixes the value of scalar FF at t = ∆ Kπ (the socalled Callan-Treiman point) to the ratio of the pseudoscalar decay constants f K /f π . This relation is slightly modified by SU(2)-breaking corrections [54]: where ∆ CT is of the order of 10 −3 . A recent parametrization for the scalar FF [49] allows the constraint given by the Callan-Treiman relation to be exploited. It is a twicesubtracted representation of the FF at t = ∆ Kπ and t = 0: such that C =f 0 (∆ Kπ ) andf 0 (0) = 1. G(t) is derived from Kπ scattering data. As suggested in [49], a good approximation to eq. 30 is with p 2 and p 3 as given in table V. The Taylor expansion gives log C = λ 0 ∆ Kπ /m 2 π + (0.0398 ± 0.0041). Eq. 31 is quite similar to the result in [55]. To measure K e3 form-factor slopes, we start from the same sample of K L decays to charged particles used to measure the main K L BRs. We impose additional, loose kinematic cuts and make use of ToF information from the calorimeter clusters associated to the daughter tracks to obtain better particle identification. The result is a highpurity sample of 2 million K L → πeν decays. Within this sample, the identification of the electron and pion tracks is certain, so that the momentum transfer t can be safely evaluated from the momenta of the K L and the daughter tracks. We obtain the vector form-factor slopes from binned log-likelihood fits to the t distribution. Using the quadratic parameterization of eq. 24, we obtain [48] λ ′ + = (25.5 ± 1.5 stat ± 1.0 syst ) × 10 −3 and λ ′′ + = (1.4 ± 0.7 stat ± 0.4 syst ) × 10 −3 , where the total errors are correlated with ρ = −0.95. Using the pole parameterization of eq. 25, we obtain M V = 870±6 stat ±7 syst MeV. Evaluation of the phase space integral for K L → πeν decays gives 0.15470±0.00042 using the values of λ ′ + and λ ′′ + from the first fit and 0.15486 ± 0.00033 using the value of M V from the second; these results differ by ∼0.1%, while both fits give χ 2 probabilities of ≤92%. The results we obtain using quadratic and pole fits are manifestly consistent. The measurement of the vector and scalar form-factor slopes using K L → πµν decays is more complicated. First, there are two form factors to consider, and since all information about the structure of these form factors is contained in the distribution of pion energy (or equivalently, t), the correlations between form-factor slope parameters are very large. In particular, it is not possible to measure λ ′′ 0 for any conceivable level of experimental statistics [45]. Second, at KLOE energies, clean and efficient π/µ separation is much more difficult to obtain than good π/e separation. However, the form-factor slopes may be obtained from fits to the distribution of the neutrino energy E ν , rather than to the distribution in t. E ν is simply the missing momentum in the K L → πµν decay evaluated in the K L rest frame, and requires no π/µ assignment to calculate. 
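A minimal sketch of how the neutrino energy can be obtained without a π/µ mass assignment, as stated above: the missing 3-momentum is computed in the laboratory from the kaon momentum (known from the tag) and the two track momenta, the missing particle is treated as massless, and the resulting 4-vector is boosted to the K L rest frame. This is only one way to realize the statement in the text; the exact KLOE procedure may differ in its details.

```python
import numpy as np

def neutrino_energy(p_kl_lab, m_kl, p1_lab, p2_lab):
    """
    Neutrino energy in the KL rest frame for KL -> pi mu nu, without deciding
    which track is the pion and which is the muon.
      p_kl_lab       : KL lab 3-momentum (MeV), known from the KS tag
      m_kl           : KL mass (MeV)
      p1_lab, p2_lab : lab 3-momenta of the two charged daughters (MeV)
    """
    p_kl = np.asarray(p_kl_lab, dtype=float)
    p_miss = p_kl - np.asarray(p1_lab, dtype=float) - np.asarray(p2_lab, dtype=float)
    e_miss = np.linalg.norm(p_miss)                    # massless-neutrino hypothesis: E = |p|
    e_kl = np.hypot(np.linalg.norm(p_kl), m_kl)        # KL energy from |p| and mass
    beta = p_kl / e_kl                                 # KL velocity in the lab
    gamma = e_kl / m_kl
    # Lorentz boost of the (massless) missing 4-momentum to the KL rest frame.
    return gamma * (e_miss - np.dot(beta, p_miss))

# Toy usage with made-up lab momenta (MeV):
print(neutrino_energy(p_kl_lab=[0.0, 0.0, 110.0], m_kl=497.6,
                      p1_lab=[60.0, 10.0, 80.0], p2_lab=[-40.0, -30.0, 50.0]))
```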
A price is paid in statistical sensitivity: the E ν distribution is related to the t distribution via an integration over the pion energy, and so the statistical errors on the form-factor slope parameters will be 2-3 times larger when the E ν distribution, rather than the t distribution, is fit (assuming that the fit parameters are λ ′ + , λ ′′ + , and λ 0 ). About 1.8 million decays were accepted. We first fit the data using equations 24 and 28 for the vector and scalar FFs respectively. The result of this fit is shown in fig. 22. We obtain [56]: with χ 2 /dof = 19/29, and correlation coefficients as given in the matrix. Improved accuracy is obtained by combining the above results with those from our K Le3 analysis. We then find: with χ 2 /dof = 2.3/2 and the correlations given in the matrix on the right. Finally, the same combination of K e3 and K µ3 results has been performed to take advantage of the recent parameterizations of the FFs based on dispersive representations (eqs. 26 and 30). We perform a fit to the values obtained for λ ′ + , λ ′′ + , and λ 0 that makes use of the total error matrix as described above, and the constraints implied by equations 27 and 31. Thus, the vector and scalar FFs are each described by a single parameter. Dropping the " ′ " notations, we find: with χ 2 /dof = 2.6/3 and a total correlation coefficient of −0.26. The uncertainties arising from the choice of parameterization for the vector and scalar FFs are given explicitly. The values of the phase space integrals for K ℓ3 decays are listed in table VI, for both determinations of the FF slopes (equations 33 and 34). We note that the use of the dispersive parameterization changes the value of the phase space integrals by at most ∼ 0.09% with respect to what obtained using quadratic and linear parametrizations for vector and scalar FFs, respectively. This reflects into a systematic uncertainty from FF modelling of less than one per mil on the determination of f + (0) V us , which is evaluated using the phase space factors from the FF dispersive parameterization. Finally, from the Callan-Treiman relation, eq. 29, we compute f + (0) = 0.967 ± 0.025 using f K /f π = 1.189 ± 0.007 from a recent lattice calculation [57], and ∆ CT = (−3.5 ± 8.0) × 10 −3 . Our value for f + (0), although with a rather large error, is in agreement with the present best lattice determination, f + (0) = 0.9644 ± 0.0049 [58]. 5 . 5.3. Radiative K Le3 decay. Two different processes contribute to photon emission in kaon decays: inner bremsstrahlung (IB) and direct emission (DE). DE is radiation from intermediate hadronic states, and is sensitive to hadron structure. The relevant kinematic variables for the study of radiation in K l3 decays are E * γ , the energy of the radiated photon, and θ * γ , its angle with respect to the lepton momentum in the kaon rest frame. The IB amplitude diverges for E * γ → 0. For K e3 , for which m e ≈ 0, the IB spectrum in θ * γ is peaked near zero as well. The IB and DE amplitudes interfere. The contribution to the width from IB-DE interference is 1% or less of the purely IB contribution; the purely DE contribution is negligible. To disentangle the IB and DE components, we measure the double differential spectrum d 2 Γ/dE * γ dθ * γ and compare the result with expectations from Monte Carlo generators. In the ChPT treatment of [59], the photon spectrum is approximated by The DE contributions are summarized in the function f (E * γ ), which represents the deviation from a pure IB spectrum. 
All information on the strength of the DE terms is contained in the parameter X . Theoretical predictions on this parameter suffer by large uncertainties, due to the poor knowledge of ChPT low energy constants. In contrast, a fit to the E * γ -θ * γ spectrum allows us to measure for the first time a value for X . From the fit, we extract also The value of this ratio has been computed at O(p 6 ) in ChPT, leading to the prediction R = (0.963 ± 0.006 X ± 0.010) × 10 −2 . The experimental distribution in (E * γ , θ * γ ) has been fit using the sum of four independently normalized MC distributions: the distributions for K e3γ events from IB satisfying (not satisfying) the kinematic cuts E * γ > 30 MeV and θ * γ > 20 • , the distribution corresponding to the function f (E * γ ), and the physical background from K L → π + π − and K L → πµν events. Using a sample of 9 000 K e3γ events, and 3.5 million K e3(γ) for normalization, we find [60]: The dependence of R on X , predicted by ChPT, can be used to further constrain the possible values of R and X from our measurement. The constraint is applied via a fit, which gives R = (0.944 ± 0.014) × 10 −2 and X = −2.8 ± 1.8, with χ 2 /ndf = 0.64/1 (P = 42%). The resulting value of R may be compared with the value quoted in [59], calculated for X = −1.2 ± 0.4. 5 . 5.4. K L → π + π − . CP violation was discovered in 1964 through the observation of the decay K L →π + π − [61]. The value of BR(K L → π + π − ) is known today with high accuracy from the results of many experiments [21]. In the SM, CP violation is naturally accommodated by a phase in the quark mixing matrix [62,63]. BR(K L →π + π − ), together with the well known values of BR(K S →π + π − ), τ S , and τ L , determines the modulus of the amplitude ratio which is parameterized as η +− = ǫ + ǫ ′ . Here ǫ quantifies the K L CP impurity and ǫ ′ is due to a direct CP -violating term. Since we know that ǫ ′ ∼ 10 −3 ǫ [21], it follows that a measurement of |η +− | can be directly compared with the SM prediction for ǫ. We obtain a precise determination of BR(K L → π + π − ) by measuring the ratio BR(K L → π + π − )/BR(K L → π ± µ ∓ ν). This approach is particularly convenient, since the values of the tagging efficiencies for the K L →π + π − and K L → πµν decays are very similar, and the related systematic uncertainties do cancel in the ratio. For this analysis, we used the same data sample as for K L BRs determination (sect. 5 . 5.1), and our value of BR(K L → π ± µ ∓ ν) has been used to extract BR(K L → π + π − )( 2 ). Given a K S → π + π − tagging decay, we select events with two tracks with opposite charge belonging to the same vertex along the K L line of flight. A fiducial volume in the DC is chosen to count K L decays. The best variable to select K L → π + π − decays is E 2 miss + | p miss | 2 , where the missing 4-momentum is evaluated using the tag information for K L , and DC for K L decay particles, which are assumed to be pions. We count π + π − events by fitting the E 2 miss + | p miss | 2 distribution, and using MC for the signal and the background shapes. The result of this fit is shown for a fraction of the whole data sample in fig. 23; the signal region is expanded in the same figure, right. The systematic uncertainty in the result is dominated by distortions introduced by MC in the simulation of signal and background shapes, which are particularly sensitive to momentum resolution. 
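The modulus of the amplitude ratio introduced above follows from the partial widths, |η +− | 2 = Γ(K L → π + π − )/Γ(K S → π + π − ) = [BR(K L → π + π − )/τ L ] / [BR(K S → π + π − )/τ S ]. The short sketch below evaluates this relation with placeholder inputs of the right order of magnitude; the small direct-emission subtraction and the exact KLOE/PDG input values discussed in the text are not reproduced here.

```python
import math

def eta_plus_minus(br_l, tau_l, br_s, tau_s):
    """|eta+-| from partial widths: Gamma(K -> pi+pi-) = BR / tau for each kaon species."""
    gamma_l = br_l / tau_l
    gamma_s = br_s / tau_s
    return math.sqrt(gamma_l / gamma_s)

# Placeholder inputs (order of magnitude only, no DE subtraction):
BR_L_PIPI = 1.96e-3      # BR(KL -> pi+ pi-)
TAU_L = 51.2e-9          # s
BR_S_PIPI = 0.692        # BR(KS -> pi+ pi-)
TAU_S = 0.8958e-10       # s
print(f"|eta+-| ~ {eta_plus_minus(BR_L_PIPI, TAU_L, BR_S_PIPI, TAU_S):.3e}")
```

With inputs of this size the result is of order 2.2 × 10 −3 , i.e. the scale of the value quoted in the text.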
We obtain [64]: This branching ratio measurement is fully inclusive of final-state radiation, and includes both the inner bremsstrahlung and the CP-conserving direct emission (DE) components. Using our measurements of BR(K S → π + π − ) and τ L , the value of τ S from the PDG [21], and subtracting the contribution of the CP -conserving DE process [65] from the inclusive measurement of BR(K L → π + π − ), we obtain |η +− | = (2.219 ± 0.013) × 10 −3 . Finally, using the world average of Re(ǫ ′ /ǫ) = (1.67 ± 0.26) × 10 −3 and assuming arg ǫ ′ = arg ǫ, we obtain |ǫ| = (2.216 ± 0.013) × 10 −3 . The value of |ǫ| can also be predicted from the measurement of the CP -conserving observables ∆M (B d ), ∆M (B s ), V ub , and V cb . This is particularly interesting to test the mechanism of the CP violation in the SM. For this purpose, we use the prediction |ǫ| = (2.875 ± 0.455) × 10 −3 obtained in [66]. No significant deviation from the SM prediction is observed. Notice however that, due to the large uncertainties on the computation of the hadronic matrix element corresponding to the K-K mixing, the theoretical error is much larger than the experimental one. 5 . 5.5. K L → γγ. The K L → γγ decay provides interesting tests of ChPT. This is because in the SU (3) limit it takes contribution from O(p 6 ) terms only [67]. Moreover, a precise measurement of K L → γγ is needed to compute the absorptive part of the K L → µ + µ − decay rate. This could be used to constrain the dispersive part of the same decay, which in turn is related to the parameter V td of the CKM matrix. From ∼ 300 pb −1 of integrated luminosity, we measure the ratio Γ(K L → γγ)/Γ(K L → 3π 0 ). Using K S → π + π − as a tagging decay, we select the signal by requiring two energetic photons belonging to a single vertex along the K L line of flight. A good rejection power over the K L → 3π 0 background is achieved by profiting of the two-body decay kinematics. After the selection, we count the signal by fitting the two-photon invariant mass, M γγ , with a combination of MC distribution for signal and background ( fig. 24). From the previous fit, we count 22 185 ± 170 signal events, with efficiency ∼ 81%. Finally, we obtain [68] Γ(K L → γγ) Γ(K L → 3π 0 ) = (2.79 ± 0.02 stat ± 0.02 syst ) × 10 −3 . . 6. K ± decays. -The charged kaons, K ± , decay mostly into µ ± ν (K µ2 ) and π ± π 0 (K π2 ), about 60% and 20% of all decays respectively, and the semileptonic modes account for about 8%. In the following the measurement of the main branching ratios of K ± done at KLOE will be reported. These are absolute branching ratios determined tagging with two-body decays, easily identified as peaks in the p * distribution (sect. 4 . 3). A residual dependency of the tagging criteria on the decay mode of the tagged kaon is still present and it is accounted for in the final branching ratio evaluation. The few per mil accuracy on BR(K µ2 ) and BR(K π2 ) has been obtained tagging with K − and using K + for signal search, neglecting corrections to the BRs from nuclear interactions (NI) of the kaon (σ N I (K + ) ∼ σ N I (K − )/10 2 ). All the measurements are inclusive of final-state radiation. The results on BR(K ± ℓ3 ) and K ± lifetime τ ± (eq. 8) have been used to evaluate V us (sect. 6 . 2.1) and, together with the results from neutral kaons, to test lepton universality (sect. 6 . 2.2). From the BR(K + µ2 ) and τ ± values we have measured V us /V ud (sect. 6 . 3) and combining this result with the values of V us and V ud we have tested CKM unitarity (sect. 
6 . 5). Leptonic two-body decays K + µ2 and K + e2 allow us to put bounds on New Physics expected in Super-Symmetric extensions of the SM (secs. 5 . 6.5 and 6 . 4). 5 . 6.1. K + → µ + ν(γ). The KLOE measurement of this absolute BR [69] is based on the use of K − → µ −ν decays for event tagging. This choice minimizes possible interference in track reconstruction and cluster association of the tagging K − and the tagged K + decays. The large number of K µ2 decays allows for a statistical precision of ∼0.1%, while setting aside a generous sample for systematic studies. For all the tagged events, the search for a positive kaon moving outwards from the IP in the DC with momentum 70 < p K ± < 130 Mev/c is performed. Then kaon decay vertices in the DC volume, 40 < R V < 150 cm, are selected. The number of signal counts is extracted from the distribution of p * , the momentum of the charged decay particle evaluated in the kaon reference frame and in the pion mass hypothesis, between p * min = 225 MeV/c and p * Max = 400 MeV/c. This distribution is shown in fig. 25 left. The spectrum shows the contamination, for a total amount of 2%, from K + π2 and K + → π 0 l + ν events. All the background sources have one neutral pion in the final state and therefore can be identified by the observation of two photon's clusters of equal time and compatible with the π 0 mass. This selection provides the sample to evaluate on data the p* distribution of the background. Also the distribution for the signal events can be obtained from data control sample selected using EMC information only. This distribution is used together with the shape of the background sources to fit the overall spectrum and to perform background subtraction, fig. 25, center. Fig. 25 right shows the spectrum after background subtraction. The branching ratio is then obtained dividing the signal counts by the number of tagged events and correcting for the reconstruction and selection efficiency. The efficiency has been determined directly on data using a control sample of K µ2 events selected exploiting their signature in the EMC. The control sample is constituted by events with K − µ2(γ) providing the tag and K + µ2(γ) selected using EMC information only. This criterium is largely independent from the selection procedure based on DC information that have been used for obtaining the signal sample. In a sample of four million tagged events, KLOE finds ∼865,000 signal events giving: This measurement is fully inclusive of final-state radiation and has a 0.27% uncertainty. 5 . 6.2. K + → π + π 0 (γ). The measurement of the absolute BR of this decay, inclusive of radiation, is performed tagging with both K − µ2 and K − π2 decays, providing a pure K + beam for signal search. The selection of K + π2 (γ) decays uses DC information only, with K + and its decay vertex selected as done in the K + → µ + ν(γ) analysis. Loose cuts on p * and on the momentum difference between the kaon and the charged secondary track are applied to reject K → 3π decays and K ± split tracks. The signal count is then extracted from the fit of the p * distribution in the window starting from p * cut =180 MeV/c (see fig. 26). This spectrum exhibits two peaks, the first at about 236 MeV/c from K + µ2 decays and the second at about 205 MeV/c from K + π2 decays. Lower p * values are due to three body decays. The momenta of the charged secondaries produced in the kaon decay have been evaluated in the kaon rest frame using the pion mass hypothesis. 
Therefore the K π2 peak appears to be symmetric while the K µ2 one is asymmetric due to the incorrect mass hypothesis used. The fit to the p * distribution is done using the following three contributions: K µ2 , K π2 and three-body decays. For the K µ2 and K π2 components we use shapes obtained from data control samples selected using EMC information only. For the contribution from three-body decays we use the MC distribution. Fig. 26 left shows the result of the fit of the p * distribution performed on the K − µ2 -tagged data sample. Fig. 26 right shows the three contributions: K π2 , K µ2 and three-body decays. Using a total number of 12,113,686 K − µ2 -tagged events we obtain 818,347 ± 1,912 signal counts. From the sample of 9,352,915 K − π2 -tagged events we get 621,612 ± 1,678 signal counts. The reconstruction and selection efficiency has been evaluated on data from a control sample selected using EMC information only, to avoid correlation with the DC-driven sample selection. Once a tagging K − µ2 decay has been identified, the control sample is selected from K + decays with a π 0 in the final state, identified via the reconstruction of π 0 → γγ decays. Corrections to the efficiency, accounting for possible distortions induced by the control sample selection, have been evaluated using MC simulation. The final efficiency evaluation is strongly related to the charged kaon lifetime τ ± via the geometrical acceptance. The BR depends on τ ± as: BR/BR (0) = 1 − 0.0395 ns −1 (τ ± − τ (0) ± ), with the reference value τ (0) ± = (12.385 ± 0.024) ns [21] used in the final BR evaluation. The weighted average, accounting for correlations, of the absolute branching ratios obtained using events tagged by K − µ2 and K − π2 decays has a 4.6 per mil accuracy [70]. The value obtained is shifted by −1.3% (∼2σ) with respect to the PDG06 fit value [21] and improves the fractional accuracy by 20%. 5 . 6.3. K ± semileptonic decays. The values of BR(K ± → π 0 e ± ν(γ)) and BR(K ± → π 0 µ ± ν(γ)) are each determined from four independent measurements: K + and K − decays tagged by K → µν and K → ππ 0 . In the analyzed data set about 60 million tag decays were identified. The signal selection requires a decay vertex in the DC volume. After removal of K π2 decays from the signal sample exploiting kinematics, the π 0 is reconstructed from the two γs, which provide the kaon decay time t decay π 0 . This sample is composed mainly of semileptonic decays, residual two-body decays and K ± → π ± π + π − decays. To reject K π2 events with an early π ± → µ ± ν decay, we cut on the lepton momentum evaluated in the center of mass of the π ± using the muon mass hypothesis. Then, to isolate K e3 and K µ3 decays, the lepton is identified using a time-of-flight technique. Requiring the charged decay track to point to an energy deposit in the EMC, a second determination of the kaon decay time, t decay lept , can be obtained from the lepton time of flight. Measuring the time of the cluster associated to the charged decay particle, t lept , its momentum p lept and its track length L lept , we can evaluate m 2 lept by imposing t decay π 0 = t decay lept . The m 2 lept distribution is shown in fig. 27: the K e3 component shows up as a narrow peak around zero, while the K µ3 component is the peak around the m 2 µ value. The signal count is extracted from a constrained likelihood fit to this distribution, using a linear combination of K e3 and K µ3 shapes and of background sources, all taken from MC simulation. The fit result, superimposed on the data distribution, is also shown in fig. 27.
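The lepton-mass-squared variable described above follows from requiring that the lepton time of flight matches the kaon decay time determined from the π 0 : with measured track length L lept , momentum p lept and cluster time t lept , one has t lept − t decay = (L lept /c)(E lept /p lept ), so that m 2 lept = p 2 lept [ (c (t lept − t decay )/L lept ) 2 − 1 ]. A minimal numerical sketch, with toy inputs rather than KLOE event quantities:

```python
C_LIGHT = 29.9792458  # cm / ns

def m2_lept(p_lept, track_length, t_cluster, t_decay):
    """
    Squared lepton mass from time of flight (p in MeV/c, length in cm, times in ns),
    imposing that the lepton left the decay vertex at the pi0-determined decay time.
    """
    tof = t_cluster - t_decay                    # lepton time of flight
    e_over_p = C_LIGHT * tof / track_length      # E/p = c * tof / L for a relativistic track
    return p_lept**2 * (e_over_p**2 - 1.0)

# Toy numbers: a ~200 MeV/c track over ~1.7 m of path with a muon-like time of flight
# returns a value close to m_mu^2 ~ 1.1e4 MeV^2.
print(m2_lept(p_lept=200.0, track_length=170.0, t_cluster=11.415, t_decay=5.0))
```

In this variable electrons cluster around zero and muons around m 2 µ , which is the separation exploited by the likelihood fit described in the text.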
The reconstruction and selection efficiency has been measured using MC simulation and corrected for relevant data/MC differences. Control samples have been selected to measure the tracking and calorimeter clustering efficiencies as a function of a suitable set of variables, in order to perform data/MC comparisons. The BRs depend on the value used for τ ± through the geometrical acceptance (eq. 43). Using the current world average value of the K ± lifetime τ (0) ± and averaging over the available samples, we get [71]: BR (0) (K ± e3 ) = (0.04965 ± 0.00038 stat ± 0.00037 syst ) and BR (0) (K ± µ3 ) = (0.03233 ± 0.00029 stat ± 0.00026 syst ). These results, completely inclusive of final-state radiation, have a fractional accuracy of 1.1% for the K e3 and of 1.2% for the K µ3 decays. The total errors have a correlation coefficient of 0.627 and do not include any contribution from the uncertainty on τ ± . Inserting in eq. 43 the KLOE result for τ ± from eq. 8, we obtain the values of the K ± semileptonic BRs reported in table VII, which have been used for our evaluation of V us . 5 . 6.4. K ± → π ± π 0 π 0 (τ ′ ). This measurement is based on a sample of ∼ 3.3 × 10 7 tagged K ± [72], using K ± µ2 and K ± π2 decays. The signal selection requires a decay vertex in the DC volume and at least four energy deposits in the EMC, on-time with respect to the decay vertex, and p * < 135 MeV. Finally, we require the sum of the energies of the most energetic photons to exceed a minimum value. The residual background includes radiative decays like K ± → π ± π 0 γ, in which a spurious cluster in the EMC is paired with the cluster of the radiated photon, mimicking the second π 0 . The total background fraction has been estimated from MC to be less than 1%. The overall efficiency is given by the product of the efficiency to reconstruct the kaon track, the track of the charged secondary and the decay vertex, times the efficiency of having at least 4 clusters fulfilling the energy and timing requirements. Each component has been measured separately for each tag and charge. Data control samples from the EMC selection have been used for the evaluation of efficiencies involving DC variables, and vice versa. Corrections evaluated using MC simulation account for possible contamination of the control samples. We get 52,253 and 30,798 τ ′ decays from 1.9925×10 7 events tagged by K ± µ2 and 1.2753×10 7 events tagged by K ± π2 decays, respectively. The final branching ratio is: BR(K ± → π ± π 0 π 0 ) = (1.763 ± 0.013 stat ± 0.022 syst )% (45) 5 . 6.5. R K =Γ(K ± → e ± ν e )/Γ(K ± → µ ± ν µ ). For this measurement we decided to perform a "direct search" for K ± → e ± ν e and K ± → µ ± ν µ decays without tagging, to keep the statistical uncertainty on the number of K ± → e ± ν e counts below 1%. The presence of a one-prong decay vertex in the DC volume with a secondary charged decay track with relatively high momentum (180÷270 MeV) is required. To distinguish K ± → e ± ν e and K ± → µ ± ν µ decays from the background we use the lepton mass squared M 2 lep , obtained from the momenta of the kaon and the charged decay particle assuming zero neutrino mass. This provides a clean identification of the K ± → µ ± ν µ sample, while to improve background rejection in selecting K ± → e ± ν e decays we need to use the information from the EMC. A particle identification based on the asymmetry of the energy deposits between the EMC planes, on the spread E R.M.S. of these energy deposits and on the position of the plane with the maximum energy release is used. The number of signal events is extracted from a likelihood fit to the two-dimensional E R.M.S.
vs M 2 lep distribution, using MC shapes for signal and background. The shape for K → eν(γ) has been evaluated considering both the Inner Bremsstrahlung (IB) and Direct Emission (DE) contributions. For the signal count, only events with photon energy in the kaon rest frame E * γ < 20 MeV have been considered. With this choice the contribution from DE is negligible and we measure the IB contribution, the only one entering the SM prediction of R K . The number of signal events obtained from the fit is N Ke2 = 8090±156, and the projection of the fit result on the M 2 lep axis is compared to data in fig. 28 left. The number of K ± → µ ± ν µ events in the same data set is extracted from a similar fit to the M 2 lep distribution, where no PID cuts are applied. The fraction of background events under the muon peak is estimated from MC to be less than one per mil. The resulting number of K ± → µ ± ν µ events is 499 251 584 ± 35 403. Using the numbers of observed K ± → e ± ν e and K ± → µ ± ν µ events, we get the preliminary result for R K reported in [73]. This value is compatible within errors with the very precise SM prediction R K = (2.472± 0.001) × 10 −5 . Recently it has been pointed out that, in a SUSY framework, sizeable violations of lepton universality can be expected in K l2 decays [74] from the couplings to the charged Higgs boson H + . With this Yukawa coupling, the dominant non-SM contribution modifies R K by a correction that grows with tan β and with the lepton-flavor-violating coupling ∆ 13 , and is suppressed by the charged Higgs mass. With ∆ 13 of the order of 10 −4 − 10 −3 , as expected from neutrino mixing, and moderately large values of tan β and m H ± , SUSY contributions may therefore enhance R K up to a few percent. Fig. 28 right shows the strong constraints on tan β and m H ± given by the KLOE R K result. For a moderate value of ∆ 13 ≈ 5 × 10 −4 , the region tan β > 50 is excluded for charged Higgs masses up to 1000 GeV/c 2 at 95% CL. 6 . 1. Introduction. -While much emphasis is placed on the search for New Physics, we still lack precise information on the validity of certain aspects of the SM itself. One of the postulates of the SM is the form of the Lagrangian for the charged-current weak interactions. Two important properties of this Lagrangian are evident: there is only one coupling constant for leptons and quarks, and quarks are mixed by the Cabibbo-Kobayashi-Maskawa matrix [62,63], V CKM , which must be unitary. The 4-fermion Fermi coupling constant G F is related to the gauge coupling g by G F = g 2 /(4 √ 2 m 2 W ). Precise measurements of leptonic and semileptonic kaon decay rates can provide information about lepton universality. Combined with results from nuclear β decay and pion decays, such measurements also provide information about the unitarity of the mixing matrix. Ultimately they tell us whether quarks and leptons do indeed carry the same weak charge. The universality of electron and muon interactions can be tested by measuring the ratio Γ(K → πµν)/Γ(K → πeν). The partial rates Γ(K → πeν) and Γ(K → πµν) provide measurements of g 4 |V us | 2 , which, combined with g 4 |V ud | 2 from nuclear β decay and the muon decay rate, test the unitarity condition |V ud | 2 +|V us | 2 +|V ub | 2 = 1. We recall that in 1983 it was already known that |V ub | 2 < 4×10 −5 [76] and today |V ub | 2 ∼1.5×10 −5 [21]. We will therefore ignore |V ub | 2 in the following. The ratio Γ(K → µν)/Γ(π → µν) provides an independent measurement of |V us | 2 / |V ud | 2 . To perform these tests at a meaningful level of accuracy, radiative effects must be carefully corrected for.
Strong-interaction effects also introduce form factors, which must be calculated from first principles, or measured whenever possible. Usually, corrections for SU(2)-and SU(3)-breaking must also be applied. Recently, advances in lattice calculations have begun to catch up with experimental progress. 2. Via semileptonic kaon decays . -Assuming lepton universality the muon decay rate provides the value for the Fermi constant, G F =1.16637(1) × 10 −5 GeV −2 , [21]. The semileptonic decay rates, fully inclusive of radiation, are given by In the above expression, the index K denotes K 0 → π ± and K ± → π 0 transitions, for which C 2 K = 1 and 1/2, respectively. m K is the appropriate kaon mass, and S EW is the universal short-distance electroweak correction [77] ( 4 ). Conventionally, f + (0) ≡ f K 0 π − + (0), and the mode dependence is encoded in the δ terms: the longdistance electromagnetic (EM) corrections, which depend on the meson charges and lepton masses, and the SU(2)-breaking corrections, which depend on the kaon charge [79]. I K,ℓ is the integral of the Dalitz plot density over the physical region and includes |f +, 0 (t)| 2 (sect. 5 . 5.2). I K,ℓ does not account for radiative effects, which are included in δ EM Kℓ . The experimental inputs to eq. 49 are the semileptonic decay rates (tables II, III, VII) and the phase space integrals (table VI), which have been discussed in previous sections. SU (2)−breaking and EM corrections which are used to extract f + (0) V us are summarized in table IX. The SU (2)−breaking correction is evaluated with ChPT to O(p 4 ), as described in [79]. The long distance EM corrections to the full inclusive decay rate are evaluated with ChPT to O(e 2 p 2 ) [79], and using low-energy constants from [80]. The quoted results have been evaluated recently [81], and include for the first time the K µ3 channels for both neutral and charged kaons. . 2.1. f + (0) V us and V us . Using all of the experimental and theoretical inputs discussed above, the values of f + (0) V us have been obtained for K Le3 , K Lµ3 , K Se3 , K ± e3 , and K ± µ3 decay modes, as shown in table X and in fig. 29. It is worth noting that the only external experimental input to this analysis is the K S lifetime. All other experimental inputs are KLOE results. The five different determinations have been averaged, taking into account all known correlations. We find [82] f + (0) V us = 0.2157 ± 0.0006, (50) with χ 2 /ndf = 7.0/4 ( 13%). To evaluate the reliability of the SU (2)−breaking correction, a comparison is made between separate averages of f + (0) V us for the neutral and the charged channels, which are 0.2159(6) and 0.2145 (13). With correlations taken into account, these values agree within 1.1σ. Alternatively, an experimental estimate of ( 4 ) Often in literature the numerical factor 192 is replaced by 768 for which the phase space is normalized to 1 for all final particles masses vanishing [78]. The phase space integrals listed in sect. 5 . 5.2 are evaluated accordingly to the chosen convention. (table IX). 3. Via K → µν decay. -High-precision lattice quantum chromodynamics (QCD) results have recently become available and are rapidly improving. 
The availability of precise values for the pion-and kaon-decay constants f π and f K allows use of a relation between Γ(K µ2 )/Γ(π µ2 ) and |V us | 2 / |V ud | 2 , with the advantage that lattice-scale uncertainties and radiative corrections largely cancel out in the ratio [85]: where the uncertainty in the numerical factor is dominantly from structure-dependent corrections and may be improved. Thus, it could very well be that the abundant decays of pions and kaons to µν ultimately give the most accurate determination of the ratio of |V us | to |V ud |. This ratio can be combined with direct measurements of |V ud | to obtain |V us |. From our measurements of BR(K µ2 ) (eq. 41) and τ ± (eq. 8), and using Γ(π µ2 ) from [21], we evaluate: Using the recent lattice determination of f K /f π from the HPQCD/UKQCD collaboration, f K /f π = 1.189(7) [57], we finally obtain V us /V ud = 0.2326 ± 0.0015. 6 . 4. Bounds on New Physics from K → µν decay. -A particularly interesting observable is the ratio of the V us values obtained from helicity suppressed and helicity allowed modes: R ℓ23 = |V us (K ℓ2 )/V us (K ℓ3 ) × V ud (0 + → 0 + )/V ud (π µ2 )|. This ratio, which is equal to 1 in the SM, would be affected by the presence of scalar density or extra righthanded currents. A scalar current due to a charged Higgs H + exchange is expected to lower the value of R ℓ23 , which becomes [86]: with tan β the ratio of the two Higgs vacuum expectation values in the MSSM. In addition, in this scenario both 0 + → 0 + nuclear beta decays and K ℓ3 are not affected, and the unitarity constraint for this modes can be applied. To evaluate R ℓ23 , we fit our experimental data on K µ2 and K ℓ3 decays, using as external inputs the most recent lattice determinations of f + (0) [58] and f K /f π [57], the value of V ud from [87], and V 2 ud + V us (K l3 ) 2 = 1 as a constraint. We obtain R ℓ23 = 1.008 ± 0.008, (56) which is one σ above the SM prediction. This measurement can be used to set bounds on the charged Higgs mass and tan β. Fig. 30 shows the region excluded at 95% CL in the m H + − tan β plane. The measurement of BR(B → τ ν) [88] can also be used to set bounds on the m H + − tan β plane, which are shown in fig. 30. While the B → τ ν can exclude quite an extensive region of this plane, there is an uncovered region corresponding to the change of sign of the correction. This region is fully covered by our result. 6 . 5. Test of CKM . -To test the unitarity of the quark mixing matrix, we combine all the information from our measurements on K µ2 , K e3 , K µ3 , together with superallowed 0 + → 0 + nuclear β decays (fig. 31). The best estimate of |V us | 2 and |V ud | 2 can be obtained from a fit to our results V us = 0.2237 (13) and V us /V ud = 0.2326 (15), together with V ud = 0.97418(26) [87]. The fit gives |V us | 2 = 0.0506(4) and |V ud | 2 = 0.9490(5), with χ 2 /ndf = 2.34/1 (13%) and a correlation coefficient of 3%. The values obtained confirm the unitarity of the CKM quark mixing matrix as applied to the first row. We find |V us | 2 + |V ud | 2 − 1 = −0.0004 ± 0.0007 (∼ 0.6σ) (57) In a more conventional form, the results of the fit are: |V us | = 0.2249 ± 0.0010 |V ud | = 0.97417 ± 0.00026. (58) Imposing unitarity as a constraint, |V us | 2 + |V ud | 2 = 1, does not improve the accuracy. One should also keep in mind that while lattice results for f + (0) and f K /f π appear to be converging and are quoted with small errors there is still a rather large spread between different calculations. 
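The first-row unitarity test of eq. 57 follows directly from the fitted values in eq. 58. The short check below propagates the quoted uncertainties, neglecting the small (3%) correlation between |V us | and |V ud | mentioned in the text; it reproduces the quoted deviation and its error.

```python
import math

v_us, dv_us = 0.2249, 0.0010     # |Vus| from the fit (eq. 58)
v_ud, dv_ud = 0.97417, 0.00026   # |Vud| from the fit (eq. 58)

delta = v_us**2 + v_ud**2 - 1.0
# Linear error propagation, ignoring the ~3% correlation quoted in the text.
sigma = math.hypot(2 * v_us * dv_us, 2 * v_ud * dv_ud)
print(f"|Vus|^2 + |Vud|^2 - 1 = {delta:+.4f} +- {sigma:.4f}")
# Output is -0.0004 +- 0.0007, i.e. about 0.6 sigma from exact unitarity (eq. 57).
```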
If we were to use instead f + (0) = 0.961 ± 0.008 as computed in [83] and still preferred by many authors, we find |V us | = 0.2258 ± 0.0012 which is less precise but satisfies more closely unitarity. . 0.1. Quantum coherence. As stated in sect. 4 . 1, the neutral kaon pair from φ → K 0 K 0 are produced in a pure quantum state with J P C =1 −− . We evolve the initial state in time, project to any two possible final states f 1 and f 2 , take the modulus squared, and integrate over all t 1 and t 2 for fixed ∆t = t 1 −t 2 to obtain (for ∆t > 0, with Γ ≡ Γ L +Γ S ): The last term is due to interference between the decays to states f 1 and f 2 and ζ=0 in quantum mechanics. Fits to the ∆t distribution provide measurements of the magnitudes and phases of the parameters η i = f i |K L / f i |K S , as well as of the K L -K S mass difference ∆m and the decay widths Γ L and Γ S . Such fits also allow tests of fundamental properties of quantum mechanics. For example, the persistence of quantum-mechanical coherence can be tested by choosing f 1 = f 2 . In this case, because of the antisymmetry of the initial state and the symmetry of the final state, there should be no events with ∆t = 0. Using ∼ 300 pb −1 of integrated luminosity, we published an analysis of the ∆t distribution for K S K L → π + π − π + π − events that establishes the feasibility of such tests. The ∆t distribution is fit with a function of the form of eq. 59, including the experimental resolution and the peak from K L → K S regeneration in the beam pipe. The results are shown in fig. 32. Observation of ζ = 0 would imply loss of quantum coherence. The value of ζ depends on the basis, K 0 -K 0 or K S -K L used in the analysis. Using the K 0 -K 0 basis we find [89] ζ = (0.10 ± 0.21 stat ± 0.04 syst ) × 10 −5 (60) which means no violation of quantum mechanics. The statistical error has been already reduced by a factor of two analyzing ∼ 1 fb −1 more of integrated luminosity. 7 . 0.2. Quantum interference and quantum gravity. In the context of a hypothetical quantum gravity theory, CP T violation effects might occur in correlated neutral kaon states [90,91], where the resulting loss of particle-antiparticle identity could induce a breakdown of the correlation of state imposed by Bose statistics. As a result, the initial state can be parametrized as: where ω is a complex parameter describing a completely novel CP T violation phenomenon, not included in previous analyses. Its order of magnitude could be at most |ω| ∼ (m 2 Inserting the modified initial state in the expression of the K S K L → π + π − π + π − decay intensity, and fitting the ∆t distribution as in the case of the previous analysis, we obtained the first measurement of the complex parameter ω [89]: This can be translated into an upper limit on the absolute value, ω < 2.1 × 10 −3 at 95% CL. Also in this case the statistical error has been reduced by a factor of two analyzing ∼ 1 fb −1 more of integrated luminosity, thus reaching the interesting Planck's scale region. The three discrete symmetries of quantum mechanics, charge conjugation (C), parity (P ) and time reversal (T ) are known to be violated in nature, both singly and in pairs. Only CP T appears to be an exact symmetry of nature. Exact CP T invariance holds in quantum field theory which assumes Lorentz invariance (flat space), locality and unitarity [92]. These assumptions could be violated at very large energy scales, where quantum gravity cannot be ignored [93]. 
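For two identical final states (f 1 = f 2 = π + π − ), the phases in the interference term cancel and the ∆t distribution of eq. 59 reduces, up to a normalization, to the form sketched below; the decoherence parameter ζ scales the interference term exactly as described in the text (ζ = 0 in standard quantum mechanics). The functional form is the standard one for this configuration and the numerical inputs are placeholders, not the values used in the KLOE fit, which also includes resolution effects and regeneration.

```python
import numpy as np

# Placeholder inputs in units of 1/tau_S, with the right orders of magnitude:
GAMMA_S = 1.0
GAMMA_L = GAMMA_S / 570.0       # tau_L / tau_S ~ 570
DELTA_M = 0.47 * GAMMA_S        # Delta m ~ 0.47 Gamma_S

def intensity(dt, zeta):
    """phi -> KS KL -> (pi+pi-)(pi+pi-) decay intensity vs |t1 - t2| (un-normalized).
    zeta multiplies (1 - interference); zeta = 0 is standard quantum mechanics."""
    return (np.exp(-GAMMA_L * dt) + np.exp(-GAMMA_S * dt)
            - 2.0 * (1.0 - zeta) * np.exp(-0.5 * (GAMMA_S + GAMMA_L) * dt)
            * np.cos(DELTA_M * dt))

dt = np.linspace(0.0, 10.0, 6)
print("QM (zeta = 0):        ", np.round(intensity(dt, 0.0), 4))
print("decoherent (zeta = 1):", np.round(intensity(dt, 1.0), 4))
# With zeta = 0 the intensity vanishes at dt = 0, reflecting the antisymmetry of the
# initial state; any zeta > 0 produces events at dt = 0.
```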
Testing the validity of CP T invariance probes the most fundamental assumptions of our present understanding of particles and their interactions. The neutral kaon system offers unique possibilities for this study. Within the Wigner-Weisskopf approximation, the time evolution of the neutral kaon system is described by where M and Γ are 2 × 2 time-independent Hermitian matrices and Ψ(t) is a twocomponent state vector in the K 0 -K 0 space. Denoting by m ij and Γ ij the elements of M and Γ in the K 0 -K 0 basis, CP T invariance implies and The eigenstates of eq. 63 can be written as such that δ = 0 in the limit of exact CP T invariance. Unitarity allows us to express the four entries of Γ in terms of appropriate combination of kaon decay amplitudes A i : where the sum runs over all the accessible final states. Using this decomposition in eq. 65 leads to the Bell-Steinberger relation [94]: a link between Re(ǫ), Im(δ) and the physical kaon decay amplitudes. In particular, without any expansion in the CP T -conserving parameters and neglecting only O(ǫ) corrections to the coefficient of the CP T -violating parameter δ, we find where A S (f ) and A L (f ) are the physical K S and K L decay amplitudes, and φ SW = arctan 2(m L − m S )/(Γ S − Γ L ) . The advantage of the neutral kaon system is that only the ππ(γ), πππ and πℓν decay modes give significant contributions to the right-hand side of eq. 67. The solution to the unitarity relation in eq. 67 is: where the α parameters are related to the product of K S and K L decay amplitudes to ππ(γ), πππ and πℓν final states, κ = τ S /τ L , b = BR(K L → πℓν), and Recently, we published improved results for the two parameters Re(ǫ) and Im(δ) using eq. 67 and our measurements of neutral kaon decays [95]. Our analysis benefits in particular from three measurements: i) the branching ratio for K L decays to π + π − [64] (sect. 5 . 5.4), which is relevant for the determination of Re(ǫ); ii) the new upper limit on BR(K S →π 0 π 0 π 0 ) [39] (sect. 5 . 4.4), which is necessary to improve the accuracy on Im(δ); and iii) the measurement of the K S semileptonic charge asymmetry A S [31] (sect. 5 . 4.2), which allows, for the first time, the complete determination of the direct contribution from semileptonic channels, without assuming unitarity. The experimental inputs to the determination of the α parameters of eq. 68 are all of the KLOE neutral kaon measurements (tables II and III), the values of τ S , m L − m S , φ +− , φ 00 from [21], and the π + π − γ and π + π − π 0 amplitudes from [96] and [97], respectively. Using all of these results we obtain the values reported in table XI; fig. 33 and fig. 34 show the 68% and 95% CL contours in the complex plane Re(α i ) − Im(α i ). Inserting all of the information in eq. 68 we finally obtain: The allowed region in the (Re(ǫ), Im(δ)) plane at 68% CL and 95% CL is shown in fig. 35, left. Our results, eq. 70, improve by a factor of two the previous best determinations. The limits on Im(δ) and Re(ǫ) can be used to constrain the mass and width difference between K 0 andK 0 via The allowed region in the ∆M = (m K 0 − mK0), ∆Γ = (Γ K 0 − ΓK0) plane is shown in fig. 35, right. Since the total decay widths are dominated by long-distance dynamics, in models where CP T invariance is a pure short-distance phenomenon, it is useful to consider the limit Γ K 0 = ΓK0. In this limit, neglecting CP T −violating effects in the decay amplitudes, we obtain the following bound on the neutral kaon mass difference: 2. 
CP T and Lorentz symmetry breaking. -CP T invariance holds for any realistic Lorentz-invariant quantum field theory. However a very general theoretical possibility for CP T violation is based on spontaneous breaking of Lorentz symmetry, as developed by Kostelecky [98], which appears to be compatible with the basic tenets of quantum field theory and retains the property of gauge invariance and renormalizability (Standard Model Extensions, SME). For neutral kaons, CP T violation would manifest to lowest order only in the parameter δ, with dependence on the kaon 4-momentum: where γ K and β K are the kaon boost factor and velocity in the observer frame, and ∆a µ are four CP T -and Lorentz-violating coefficients for the two valence quarks in the kaon. Following [98], the time dependence arising from the rotation of the earth can be explicitly displayed in eq. 73 by choosing a three-dimensional basis (X,Ŷ ,Ẑ) in a non-rotating frame, with theẐ axis along the earth's rotation axis, and a basis (x,ŷ,ẑ) for the rotating (laboratory) frame. The laboratory frame precesses around the earth's rotation axisẐ at the sidereal frequency Ω. Defining χ the angle betweenẐ and the positron beam direction, the CP T violating parameter δ may then be expressed as: +β K (∆a Y sin χ cos θ sin Ωt + ∆a X sin χ cos θ cos Ωt)} , The ∆a 0 parameter can be measured through the difference between K S and K L charge asymmetries, A S − A L . Each asymmetry is integrated over the polar angle θ, thus averaging to zero any possible contribution from the terms proportional to cos θ in eq. 74, giving the relation: From our measured value of A S [31] (sect. 5 . 4.2) and a preliminary evaluation of A L , we get A S − A L = (−2 ± 10) × 10 −3 , which gives as a preliminary evaluation of the ∆a 0 parameter [10]: With the analysis of the full data sample, an accuracy σ(∆a 0 ) ∼ 7 × 10 −18 GeV could be reached. The parameters ∆a X,Y,Z can be measured using φ → K S K L → π + π − , π + π − events, by fitting the decay intensity I (π + π − (cos θ > 0), π + π − (cos θ < 0); ∆t), where the two identical final states are distinguished by their forward or backward emission. A preliminary analysis, based on ∼ 1 fb −1 , yields [10]: ∆a X = (−6.3 ± 6.0) × 10 −18 GeV ∆a Y = (2.8 ± 5.9) × 10 −18 GeV ∆a Z = (2.4 ± 9.7) × 10 −18 GeV , which represents the first determination of ∆a Z to date. When running DAΦNE at the φ peak, the dominant decay channels of the φ meson are KK pairs. However, as shown in fig. 36, the φ meson decays with large rates also to other interesting hadronic final states. In this chapter, we mainly discuss the so called φ radiative decays i.e. electromagnetic transitions (φ → meson+ γ) to other mesons, which result in a great profusion of lighter scalar and pseudoscalar mesons. The transition rates are strongly dependent on the wave function of the final-state meson. They also depend on its flavor content, because the φ is a nearly pure ss state and because there is no photon-gluon coupling. These radiative φ decays are unique probes of meson properties and structure. This hadron source, coupled with KLOE 's great versatility, allows to perform many hadronic experiments, with great precision. Moreover, we can study with high statistics the properties of vector mesons. Due to space constraint, we will limit ourselves to mentioning only a handful of these measurements, just enough to give a taste of hadron physics with KLOE. 1. η meson. -The BR for the φ → ηγ decay is 1.3%. 
In 2.5 fb −1 of KLOE data, there are ∼ 100 million η mesons which, for most of the final states, are clearly identified by their recoil against a photon of E = 363 MeV. The η's decay predominantly into two photons or three pions. The large production rate and the clean η signature made possible to search for rare or forbidden η decays even with reduced samples. A good example is provided by the decay η → π + π − which, being the η an isoscalar, violates both P and CP . With the first 350 pb −1 , KLOE had searched for evidence of this decay in the M ππ distribution of e + e − → π + π − γ events where the photon is emitted at large polar angles (θ > 45 • ). No peak is observed in the distribution of M ππ in the vicinity of m η . The corresponding limit is BR(η → π + π − ) ≤ 1.3 × 10 −5 at 90% CL [99], which improves the previous limit by a factor of 25. Similarly, the decay η → 3γ violates C. KLOE has conducted a search for this decay using 410 pb −1 and has set the limit BR(η → 3γ) ≤ 1.6 × 10 −5 at 90% CL, the most stringent result obtained to date [100]. In the following subsections, we will instead show precise measurement on the η meson such as the mass and the description of the dynamics for the η → 3π decays. We conclude by showing preliminary BR measurements for very rare but observed decays which are still in progress. 1.1. η mass. KLOE has performed the most precise determination to date of the η mass, on the basis of 17 millions of φ → ηγ, η → γγ decays. The φ → ηγ events are selected by requiring at least three energy deposits in the barrel with a polar angle θ γ : 50 • < θ γ < 130 • , not associated to a charged track. A kinematic fit imposing energy-momentum conservation and time of flight of photons equal to the velocity of light is done for all 3 γ's combination of N detected photons. The combination with the lowest χ 2 is chosen as a candidate event if χ 2 < 35. The inputs of the fit are the energy, the position and the time of the calorimeter clusters, the mean position of the e + e − interaction point, x φ , and the total four-momentum, p φ , of the colliding e + e − pair. The mean values of x φ and p φ are determined run by run using e + e − → e + e − events (almost 90000 events for each run, allowing a very precise determination of the relevant parameters). The events surviving the cuts are shown in the Dalitz plot, fig. 37-left, where three bands are clearly visible. The band at low m 2 γγ is given by the φ → π 0 γ, π 0 → γγ, while the other two bands are φ → ηγ, η → γγ events. Using the cuts indicated by the black line shown in the Dalitz plot we select pure samples of η, π 0 → γγ events. The resulting m γγ spectrum for the η ( fig. 37-right) is well fitted with a single gaussian of σ ∼ 2.1 MeV/c 2 , thus showing that the kinematic fit improves of a factor ∼ 20 the mass resolution obtained by calorimetric reconstruction only. To determine the systematic error associated with the mass measurement, we have evaluated the uncertainties on all the quantities used in the kinematic fit and their effect on the fitted value. A sample of e + e − → π + π − γ events has been used to check the mean position of the interaction point and the alignment of the calorimeter with respect to the Drift Chamber. The absolute energy scale of the calorimeter and the linearity of the energy response was checked using both the e + e − → e + e − γ and the π + π − γ events. A linearity of better than 2% is found and the absolute scale results set to better than 1%. 
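The η and π 0 signals discussed above show up in the two-photon invariant mass, m γγ = sqrt( 2 E 1 E 2 (1 − cos θ 12 ) ) for photons assumed to originate at the interaction point. The sketch below computes this quantity from calorimeter cluster energies and positions; the kinematic fit that improves the mass resolution by a factor ∼ 20 is not reproduced here.

```python
import math

def m_gamma_gamma(cluster1, cluster2, vertex=(0.0, 0.0, 0.0)):
    """
    Two-photon invariant mass from cluster energies (MeV) and positions (cm),
    assuming both photons originate at 'vertex' (the e+e- interaction point).
    Each cluster is given as (E, x, y, z).
    """
    def direction(c):
        dx = [c[1 + i] - vertex[i] for i in range(3)]
        norm = math.sqrt(sum(d * d for d in dx))
        return [d / norm for d in dx]
    e1, e2 = cluster1[0], cluster2[0]
    cos12 = sum(a * b for a, b in zip(direction(cluster1), direction(cluster2)))
    return math.sqrt(max(2.0 * e1 * e2 * (1.0 - cos12), 0.0))

# Two back-to-back 274 MeV photons give m_gg close to the eta mass (~548 MeV):
print(m_gamma_gamma((274.0, 200.0, 0.0, 0.0), (274.0, -200.0, 0.0, 0.0)))
```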
Since the kinematic fit overconstrains the photon energy with the cluster positions, the systematic uncertainties related to the energy reconstruction are small and result in a η mass error of 4 keV for the scale and 4 keV for the linearity. For the same reason, a larger effect can be expected if the calorimeter does not have angular uniformity in φ or θ. Misalignment of single modules in the calorimeter has been checked resulting in a spread of about 10-15 keV on the η mass. While the chosen cut on χ 2 has negligible effect, the systematics due to the particular choice of the event selection cut on the Dalitz plot, shown in fig. 37-left, was determined to contribute 12 keV to the error. The measured value of the mass is, however, very sensitive to the center of mass energy of the ηγ system. Due to initial state radiation emission (ISR), the available center of mass energy is a bit lower than the one of the e + e − beams, W , measured using e + e − → e + e − events. A variation of 100 keV of the measured mass value is obtained from the MC simulation which includes ISR emission. Due to the large value of this correction, the η mass has been determined as a function of W both in data and with the MC simulation+ISR. The resulting data-MC spread of 8 keV is assumed as systematic error. All the above mentioned studies on systematics have been also done for the π 0 mass using the φ → π 0 γ events and for the ratio of the two masses r = m η /m π 0 . The mass scale is set by the knowledge of the absolute scale of W determined using the CMD2 m φ value as shown in the K mass measurement section. We obtain [25]: m π 0 = (134.906 ± 0.012 stat ± 0.048 syst ) MeV (78) m η = (547.874 ± 0.007 stat ± 0.031 syst ) MeV (79) the π 0 mass value is in agreement with the world average [21] within 1.4 σ. As a check of this result, we use the measured ratio r = m η m π 0 = 4.0610 ± 0.0004 stat ± 0.0014 syst (80) and the world average value of the π 0 mass, m π 0 = (134.9766 ± 0.0006) MeV to derive m η = (548.14±0.05 stat ±0.19 syst ) MeV, consistent with the results quoted above although affected by a larger systematic error. This is due to the soft energy spectrum of the photons from π 0 which results in a worse position resolution. While being the most accurate result today, our measurement of the η mass is in good agreement with the recent determinations based on η decays [101,102]. Averaging those mass values and our result we obtain m η = 547.851 ± 0.025 MeV, a value different by ∼ 10 σ from the average of the measurements done studying the production of the η meson at threshold in nuclear reactions [103]. 1.2. Dynamics of the decays η → 3π. The decay η → 3π violates isospin invariance. Electromagnetic contributions to the process are very small and the decay is induced dominantly by the strong interaction via the u, d mass difference. The η → 3π decay is therefore an ideal laboratory for testing Chiral Perturbation Theory, ChPT. We have studied the dynamics of the decay η → π + π − π 0 using about 1.4 million of such events. From a fit to the Dalitz plot density distribution we made precise determinations of the parameters that characterize the decay amplitude. In addition, we have also obtained the most stringent tests on C-violation from left-right, quadrant and sextant integrated asymmetries. [104] A three body decay( 5 ) is fully described by two variables. 
We can choose two of the pion energies (E_+, E_−, E_0) in the η rest frame, or two of the three combinations of the two-pion masses squared (m²_{+−}, m²_{−0}, m²_{0+}), also called (s, t, u). Note that E_+ is linear in m²_{−0}, and so on, cyclically. The Dalitz variables X, Y are linear combinations of the pion kinetic energies, X = √3 (T_+ − T_−)/Q and Y = 3T_0/Q − 1, where the T_i are the pion kinetic energies in the η rest frame and Q = T_+ + T_− + T_0 is the reaction "Q-value". The decay amplitude is given in [105] as A(s, t, u) = (1/∆²) (m_K²/m_π²) (m_π² − m_K²) M(s, t, u)/(3√3 F_π²), (82) where ∆² ≡ (m²_s − m²)/(m²_d − m²_u) and m = (m_u + m_d)/2 is the average u, d quark mass. F_π = 92.4 MeV is the pion decay constant and M(s, t, u) is the amplitude we would like to know. From eq. 82 it follows that the decay rate for η → π+π−π0 is proportional to ∆^−4. The transition η → 3π would therefore be very sensitive to ∆ if the amplitude M were known. At lowest order in ChPT, M(s, t, u) = (3s − 4m_π²)/(m_η² − m_π²), (83) and the decay width at leading order is Γ_lo(η → π+π−π0) = 66 eV [105], to be compared with the measured width of 295 ± 16 eV [21]. A one-loop calculation within conventional ChPT improves the prediction considerably, giving Γ_nlo(η → π+π−π0) ≃ 167 ± 50 eV, but is still far from the experimental value. Higher-order corrections help but do not yet bring agreement with measurements of both the total rate and the Dalitz plot slopes. Good agreement is found by combining ChPT with a non-perturbative coupled-channels approach using the Bethe-Salpeter equation [106]. Therefore a precision study of the η → 3π Dalitz plot, DP, is highly desirable. The amplitude squared is expanded around X = Y = 0 in powers of X and Y, |A(X, Y)|² = 1 + aY + bY² + cX + dX² + eXY + fY³ + ... (84) (Footnote 5: both the η and the π are spinless, therefore there is no preferred direction.) The parameters (a, b, c, d, e, f, ...) can be obtained from a fit to the observed DP density, see fig. 38, and should be computed by the theory. Any odd power of X in A(X, Y) implies violation of charge conjugation. The event selection consists of a pre-selection requiring one charged vertex inside a small region around the interaction point, together with three photons and a total photon energy below 800 MeV. A constrained kinematic fit imposing four-momentum conservation and the correct time of arrival for the photons is then applied, and only events with a χ² probability greater than 1% are kept for further analysis. We finally require that the recoil photon energy lie between 320 and 400 MeV, and we add two further constraints on the sum of the charged-pion energies and on the invariant mass of the two softest photons. All selection criteria and efficiencies were checked with data and Monte Carlo. The overall selection efficiency, taking into account all the data-MC corrections, is found to be ǫ = (33.4 ± 0.2)%. The expected background, obtained from MC simulation, is as low as 0.3%. After background subtraction, the observed number of events in the Dalitz plot is N_obs = 1.337 ± 0.001 million events. The results from fitting the Dalitz-plot density with the expansion of eq. 84, including the statistical uncertainties from the fit and the estimate of systematics, are: a = −1.090 ± 0.005(stat) +0.008 −0.019 (syst); b = 0.124 ± 0.006(stat) ± 0.010(syst); d = 0.057 ± 0.006(stat) +0.007 −0.016 (syst); f = 0.14 ± 0.01(stat) ± 0.02(syst). Our fit raises several points unexpected from previous, low-statistics analyses.
In particular: (i) the fitted value of the quadratic slope in Y, b, is about one half of the simple current-algebra prediction (b = a²/4), indicating that higher-order corrections are probably necessary; (ii) the quadratic term in X, d, is unambiguously found to be different from zero; and (iii) the same applies to the large cubic term in Y. We have also fitted the Dalitz plot with a different parametrization of the event density which takes into account final-state π-π rescattering. Since strong interactions are expected to mix the two isospin I = 1 final states of the η → 3π decay, it is possible to introduce a unique rescattering matrix R which mixes the corresponding I = 1 decay amplitudes [107]. Thus, we have made a fit using this alternative parametrization of the Dalitz plot density distribution, from which it is possible to extract the Dalitz plot slope α parametrizing the 3π0 decay amplitude. The result α = −0.038 ± 0.002 is in reasonable agreement with the direct measurement by KLOE of α = −0.027 ± 0.004 +0.004 −0.006 [108]. While the polynomial fit of the Dalitz plot density gives valuable information on the matrix element, specific integrated asymmetries, as defined in fig. 39, are very sensitive probes of possible contributions to C-violation in amplitudes with fixed ∆I. In particular, the left-right asymmetry tests C-violation with no specific ∆I constraint; the quadrant asymmetry tests C-violation in ∆I = 2; and the sextant asymmetry tests C-violation in ∆I = 1. In the following we present results on asymmetries which use four times the statistics entering the Particle Data Group's fits and are the most stringent tests to date. We first obtained from MC the efficiency for each region of the Dalitz plot, as the number of events reconstructed in a particular region divided by the number of events generated in the same region, thus taking resolution effects into account. Since no asymmetries were introduced in the Monte Carlo, we checked that the asymmetries estimated with our technique in Monte Carlo are compatible with zero. We then evaluated the asymmetries on data by counting events in the regions, subtracting the background and correcting for efficiency. The measured asymmetries are as follows: A_LR = (0.09 ± 0.10(stat.) +0.09 −0.14 (syst.)) × 10^−2; A_Q = (−0.05 ± 0.10(stat.) +0.03 −0.05 (syst.)) × 10^−2; A_S = (0.08 ± 0.10(stat.) +0.08 −0.13 (syst.)) × 10^−2. 1.3. η → π0γγ. The decay η → π0γγ is particularly interesting in chiral perturbation theory. There is no O(p²) contribution, and the O(p⁴) contribution is small. At KLOE, the decay can be reconstructed with full kinematic closure and without complications from certain backgrounds present in fixed-target experiments, such as π−p → π0π0n. Using 450 pb^−1 of data, we obtain the preliminary result BR(η → π0γγ) = (8.4 ± 2.7 ± 1.4) × 10^−5, which is in agreement with several O(p⁶) chiral perturbation theory calculations discussed at a recent η workshop [109]. The final measurement using the complete data set is forthcoming in 2009. 1.4. η → π+π−e+e−. This reaction is interesting because it can test a particular formulation [110] of CP violation beyond the SM, through the measurement of the angular asymmetry between the pion and electron decay planes. The analysis is based on the reconstruction of the invariant mass of the four charged particles recoiling against a monochromatic photon in φ → ηγ events.
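The photon in such two-body φ → Xγ transitions is monochromatic, with E_γ = (m_φ² − m_X²)/(2 m_φ); a minimal numerical check, using standard mass values that are assumed here rather than quoted in this section, reproduces the 363 MeV recoil-photon energy mentioned earlier for the η.

```python
# Energy of the monochromatic recoil photon in phi -> X gamma (two-body decay).
# m_phi and m_eta are standard values (assumed, not quoted in this section).
M_PHI = 1019.46   # MeV
M_ETA = 547.86    # MeV

def recoil_photon_energy(m_x, m_phi=M_PHI):
    """E_gamma = (m_phi^2 - m_x^2) / (2 m_phi)."""
    return (m_phi**2 - m_x**2) / (2.0 * m_phi)

print(f"E_gamma(eta) = {recoil_photon_energy(M_ETA):.1f} MeV")   # ~363 MeV
```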
The main background is represented by φ → π + π − π 0 events, with either the π 0 undergoing a Dalitz decay or one of the two photons from the π 0 decay converting in the beam pipe. The same happens for φ → ηγ, with η → π + π − π 0 with the π 0 Dalitz decay, and for φ → ηγ with η → π + π − γ events with photon conversion. These backgrounds are well reduced by kinematic cuts. Signal and background counts are extracted from the fit of the invariant mass of the four tracks events, using MC spectra for the signal and the various backgrounds, fig. 40. Using a data sample of ∼622 pb −1 , about 700 signal events have been observed while previous experiments collected less than 20 events. A preliminary estimate of the BR is (24 ± 2 stat ± 4 syst ) × 10 −5 . Since approximately 3 times more statistics is available, the CP violation test aforementioned will be accessible [111]. To select φ → η ′ γ events, we require, in addition to seven prompt photons, a vertex in a cylindrical region around the interaction point formed by two oppositely charged tracks. Then we perform a kinematic fit requiring energy-momentum conservation and times and path lengths to be consistent with the speed of light for photon candidates. From about 1.4 billion φ collected, the final number of η' events from the two selected decay chains is, after background subtraction, N η ′ γ = 3405 ± 61. The selection efficiency for detecting the events, evaluated from MC simulations and checked extensively with data sub samples, is ǫ η ′ = (23.45 ± 0.16)%. 2.2. Pseudoscalar mixing angle. The value of R φ can be related to the pseudo-scalar mixing angle. The η − η ′ system can be parametrized in terms of just an angle only in the quark-flavor basis: |η > = cos ϕ P |qq > + sin ϕ P |ss > |η ′ > = − sin ϕ P |qq > + cos ϕ P |ss > where |qq >= (1/ √ 2)|uū + dd >. Using the approach of ref. [113,114], where the SU(3) breaking is taken into account via constituent quark mass ratio m s /m, R φ can be parametrized as where ϕ V = 3.4 • is the mixing angle for vector mesons; p η(η ′ ) is the η(η ′ ) momentum in the φ center of mass; the two parameters C N S and C S take into account the OZIrule effect on the vector and pseudoscalar wave-function overlap, they are 0.91±0.05 and 0.89±0.07 respectively, for m s /m=1.24±0.07 [114]. Then we obtain: The theoretical uncertainty on the mixing angle has been evaluated from the maximum variation induced from the spread of the ratio m s /m, C N S and C S values. In the traditional approach to the mixing, the η − η ′ meson are parametrized in the octet-singlet basis; in this basis the value of the mixing angle becomes: θ P = ϕ P − arctan √ 2 = (−13.3 ± 0.3 stat ± 0.7 sys ± 0.6 th ) • . 2.3. η ′ gluonium content. The QCD involves quanta, the gluons, which are expected to form bound states, and mix with neutral mesons. While the η meson is well understood as an SU (3)−flavor octet with a small quarkonium singlet admixture, the η ′ meson is a good candidate to have a sizeable gluonium content. If we allow for a η ′ gluonium content, we have the following parametrization: where the Z η ′ parameter takes into account a possible mixing with gluonium. The normalization implies where φ G is the mixing angle for the gluonium contribution. A possible gluonium content of the η ′ meson corresponds to a non zero value for Z 2 η ′ , that implies and eq. 
85 becomes: Combining our R φ result with other experimental constraints, we estimate the gluonium fractional content of η ′ meson as Z 2 = 0.14 ± 0.04 and the mixing angle ϕ P = (39.7 ± 0.7) • [115]. 3. Scalar mesons, f 0 (980) and a 0 (980). -The composition of the scalar mesons with masses below 1 GeV, σ(600), f 0 (980) and a 0 (980), is not yet well understood. KLOE is well suited for this task, since the radiative φ decays into two pseudoscalar mesons are dominated by a scalar meson exchange, φ → Sγ, with S = σ, f 0 , a 0 , and we have now collected literally millions of such decays. A detailed proposal on how to elucidate the nature of these mesons with KLOE was one of the first papers JLF had written, after her arrival to LNF in 1991 [116]. Since then, theoretical models explaining how to describe these scalars and how to calculate their decay rate have proliferated greatly. For this review, we only pick two models which are illustrated below. The investigation on these mesons is experimentally carried out by measuring branching ratios and studying the mass-spectra or the event density in the Dalitz plot. Branching Ratios: The branching ratio for the decays φ → f 0 γ → ππγ and φ → a 0 γ → ηπ 0 γ are suppressed unless the f 0 and a 0 have significant ss content. The BRs for these decays are estimated to be of ∼ 10 −4 if the f 0 and a 0 are qqqq states, in which case they contain an ss pair, or of ∼ 10 −5 if the f 0 and a 0 are conventional qq states. If the f 0 and a 0 are KK molecules, the BR estimates are sensitive to assumptions about the spatial extension of these states. Invariant Mass distributions: Fits to the ππ (ηπ 0 ) invariant-mass distributions can also shed light on the nature of the f 0 (a 0 ) since they provide an estimate of the couplings of these mesons to the φ , g φSγ , and/or to the final-state particles. The rate expression for φ → Sγ E1-transitions contains a factor E 3 γ , where E γ is the energy of the radiated photon, as required by considerations of phase space and gauge invariance. As a result, the invariant-mass distributions are cut off above m φ and develop a long tail toward lower mass values. The fit results therefore strongly depend on the scalar meson masses and widths and on the assumed specific model for the decay mechanism. Because of the proximity of the f 0 and a 0 masses to the KK threshold, and because these mesons are known to couple strongly to KK, the kaon-loop model, fig. 41 left, is often used [117]. In this model, the decay proceeds through a virtual K + K − pair emitting the photon and subsequently annihilating into a scalar. This loop function damps the E 3 behavior. The transition amplitude depends on the coupling of the scalar meson both to the pseudoscalar meson in the final state and to the K + K − pair in the loop function. The propagator include finite width corrections which are relevant close to the KK threshold. A second model, called no-structure [118], sketched in fig. 41 right, describes the process as a point-like interaction where the dynamics of the scalar production is absorbed in the g φSγ coupling. The amplitude is described by a Breit-Wigner propagator, with a mass dependent width, which accounts for analytical continuation under ππ and KK thresholds, and a complex polynomial describing a continuum background. The amplitudes contributing to φ → π 0 π 0 γ include φ → Sγ, with S = f 0 or σ, and φ → ρ 0 π 0 with ρ 0 → π 0 γ. 
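As an illustration of the E³_γ behaviour discussed above, the sketch below evaluates the radiated-photon energy and the corresponding E³_γ weight as a function of the scalar mass; the φ mass value is the standard one, assumed here, and the weight is only the kinematic factor, not the full decay amplitude.

```python
# E1 transition phi -> S gamma carries a factor E_gamma^3, with
# E_gamma = (m_phi^2 - m^2) / (2 m_phi) for a scalar of mass m: this cuts the
# invariant-mass spectrum off at m_phi and produces a long low-mass tail.
M_PHI = 1019.46  # MeV (standard value, assumed here)

def e_gamma(m):
    return (M_PHI**2 - m**2) / (2.0 * M_PHI)

for m in (500.0, 700.0, 900.0, 980.0, 1010.0):
    w = e_gamma(m) ** 3
    print(f"m = {m:6.1f} MeV   E_gamma = {e_gamma(m):6.1f} MeV   E_gamma^3 = {w:.3e}")
```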
The final state was selected requiring five prompt γ in the event and pairing the photons to get the best value for the pion masses. The energy resolution was improved by performing a kinematic fit to the event. In this final state, more than 50% of the events were due to the not-resonant e + e − → ωπ 0 process, with ω → π 0 γ. This background was largely removed by directly cutting around m ω on the reconstructed πγ invariant mass. After this rejection, the M ππ distribution was fit with a function including in the amplitude the terms describing the process φ → Sγ, φ → ρπ, and their interference. We obtained a BR(φ → π 0 π 0 γ) of (1.09 ± 0.03 ± 0.05) × 10 −4 . The analyses of the φ → π 0 π 0 γ and φ → ηπ 0 γ decays based on a later data set, benefit of an increase in statistics by a factor of ∼30 such that we could improve background removal and analysis cuts. For the π 0 π 0 γ final state, the analysis is done without cutting on the π 0 γ invariant mass, which could bias the residual M ππ spectra, and plotting all events with same topology in the Dalitz plots shown in fig. 42 left. Two clear bands appear related to the not-resonant process which constitutes the larger background to the scalar term for M ππ < 700 MeV. Above this threshold, the f 0 (980) contribution is almost background free. We estimate with theory the event density on the Dalitz-plot by adding to the amplitudes φ → Sγ and φ → ρπ 0 the complete VDM description of the not-resonant process. The interferences between all diagrams are also taken into account [119]. We then fit the Dalitz plot density, by folding theory with reconstruction efficiencies and with a smearing matrix describing the probability for an event to migrate from a bin to another one. In particular, we fit data with (1), an improved k-loop model [120], and (2) with the no-structure model [118]. In the improved k-loop model,the f 0 and σ description are strongly coupled with the energy dependence of the ππ scattering phase. Ten different parametrizations are introduced. We find that, without adding the σ in the model, it is impossible to adequately describe our data-set. In fig. 43 the function describing M ππ , as extracted by our best fit result for the k-loop model, is shown. The contribution of each process is also represented. A large coupling of the f 0 (980) to kaons is obtained with large theory errors related to the parametrization choice (see table XII). Discussing with the authors of ref. [120], some typos in the parametrization were found. After correcting for it, a new preliminary set of values is extracted which much reduced theory errors, referred as π o π o γ update in table XII. All of the above indicates a notconventional structure for the f 0 meson. For the no-structure model, the situation is really different. The f 0 coupling to kaons get reduced and the introduction of the σ does not substantially modify the results. However, the physical interpretation gets somehow hidden from the introduction of the continuum background in the parametrization. Parameter π + π − γ π 0 π 0 γ π 0 π 0 γ updates By integrating the scalar term contribution, we determine a value for the BR which slightly depends on the theory model used in the fit. We quote a BR to φ → π 0 π 0 γ [119] of: For the ηπ 0 γ final state, same analysis strategy of the previous work was followed with the advantage of using a more realistic detector response and simulation of the machine background in the official KLOE Monte Carlo. 
The physics background was estimated after event preselection by fitting MC shapes to the data. At the end of the analysis we have a signal efficiency of ∼40% (20%) and a residual background of 55% (15%) for the sample with η → γγ (η → π+π−π0). The absence of a major source of interfering background allows us to obtain the BR directly from event counting. We get a preliminary value for BR(φ → ηπ0γ) of (6.98 ± 0.10 ± 0.23_syst) × 10^−5 and (7.05 ± 0.10 ± 0.21) × 10^−5 for the 5γ and π+π−5γ samples, respectively, which corresponds to a decrease of the central value with respect to our previous measurement of ∼12%, with an improvement in precision of better than a factor of 3. In fig. 44 the ηπ0 invariant-mass distribution, M_ηπ0, is shown for both final states. A simultaneous fit to the spectra is also performed to extract the relevant a_0 coupling. Also for this meson, a large coupling to kaons is found. 3.2. φ → π+π−γ. KLOE has also published a study of the decay φ → f_0γ → π+π−γ [121]. The f_0's charged mode was extremely difficult to identify because only a small fraction of the e+e− → π+π−γ events involve the f_0; the principal contributions are from events in which the photon comes from ISR or FSR. The analysis is performed on the M_ππ distribution for events with a large photon polar angle, where the ISR contribution is much reduced. The M_ππ distribution was fitted with a function composed of analytic expressions describing the ISR, FSR and ρπ contributions and two terms that describe the decay φ → Sγ → ππγ and its interference with FSR, which could be constructive or destructive. Fig. 45 shows the M_ππ distribution with the result of the kaon-loop fit superimposed. The overall appearance of the distribution is dominated by the radiative return to the ρ; the f_0 appears as the peak-like structure in the region 900-1000 MeV. Both the kaon-loop and no-structure fits strongly prefer destructive interference between the Sγ and FSR amplitudes. The kaon-loop fit gives coupling values in reasonable agreement with those obtained from the fit to the KLOE φ → π0π0γ data discussed before. Integrating the appropriate terms of the fit functions gives values for BR(φ → f_0γ) in the neighborhood of 2 × 10^−4. Another important test in this final state is the pion forward-backward asymmetry. The π+π− pair has a different charge-conjugation eigenvalue depending on whether it is produced from FSR and f_0(980) (C = +1) or from ISR (C = −1). An interference term between two amplitudes of opposite charge conjugation gives rise to C-odd terms that change sign under the interchange of the two pions and results in a forward-backward charge asymmetry A_c. (Fig. 45: distribution of the M_π+π− spectrum for φ → π+π−γ events.) 3.3. φ → KKγ. A fundamental consistency check of the study of φ radiative decays, as well as another important tool for the investigation of the s-quark content of the f_0(980) and a_0(980), is the measurement of final states involving kaons. The search for φ → KKγ, which is expected to proceed mainly through the [f_0(980) + a_0(980)]γ intermediate state, adds relevant information on the scalar-meson structure. Theoretical predictions for the BR spread over several orders of magnitude, although the latest evaluations essentially concentrate in the region of 10^−8 (see [124] and references therein). Experimentally, this decay has never been observed.
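The branching-ratio limit quoted in the next paragraph follows from standard Poisson counting with essentially background-free conditions; a rough sketch is given below, with placeholder numbers for the kaon yield, channel fraction and efficiency that are assumptions and not the values used in the KLOE analysis.

```python
from math import exp, factorial

def poisson_upper_limit(n_obs, cl=0.90):
    """Classical upper limit on a Poisson mean with negligible background:
    the smallest mu such that P(k <= n_obs | mu) <= 1 - cl."""
    mu, step = 0.0, 0.001
    while True:
        p = sum(exp(-mu) * mu**k / factorial(k) for k in range(n_obs + 1))
        if p <= 1.0 - cl:
            return mu
        mu += step

n_up = poisson_upper_limit(1)   # ~3.9 events at 90% CL for one observed event
# Hypothetical conversion to a branching-ratio limit (all numbers are placeholders):
n_phi, channel_fraction, efficiency = 4.0e9, 0.24, 0.3
print(n_up, n_up / (n_phi * channel_fraction * efficiency))
```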
Using 1.4 fb^−1 collected at the φ resonance and an equivalent MC statistics to model the background, the KLOE experiment searched for this decay in the φ → K_S K_S γ → π+π−π+π−γ decay channel [124]. The significant signal reduction (24% of the total rate) is compensated by a very clean event topology. The MC signal has been generated according to phase space and radiative-decay dynamics. The selection cuts applied to reject the main background component (φ → K_S K_L with an ISR or FSR photon) have been optimized using MC only: no events survive the analysis cuts, while one event is found in the data. The resulting upper limit on the branching ratio, BR(φ → K_S K_S γ) < 1.8 × 10^−8 at 90% CL, rules out most of the theoretical predictions. 4.1. φ leptonic widths. The φ leptonic widths provide information on the φ structure and on its production cross section in e+e− annihilations. They are necessary for decay branching-ratio measurements and for estimates of the hadronic contribution to vacuum polarization. There is no direct measurement of the leptonic width. The cross sections for the processes e+e− → e+e−, µ+µ− can be written as σ_ee = σ_ee,φ + σ_int and σ_µµ = σ_µµ,φ + σ_int, with the interference term σ_int depending on W, the energy in the collision center of mass (CM), on θ_min and θ_max, which define the acceptance in the polar angle θ (see later), and on Γ_ℓℓ, where Γ_ℓℓ = Γ_ee for e+e− → e+e− and Γ_ℓℓ = Γ_ee Γ_µµ/ξ for e+e− → µ+µ−; the ξ term takes into account the phase-space correction. KLOE used data collected at CM energies W of 1017.2 and 1022.2 MeV, i.e. m_φ ± Γ_φ/2, and at the φ peak, W = 1019.7 MeV. For e+e− → µ+µ− we measure the cross section. Since Bhabha scattering is dominated by the photon-exchange amplitude, the interference term is best studied in the forward-backward asymmetry, A_FB = (σ_F − σ_B)/(σ_F + σ_B), where σ_F and σ_B are the cross sections for events with electrons in the forward and backward hemispheres. 4.2. φ → π+π−π0. The decay of the φ meson to π+π−π0, with a branching ratio (BR) of ∼15.5%, is dominated by the ρπ intermediate states [126] ρ+π−, ρ−π+ and ρ0π0 with equal amplitudes. CPT invariance requires equality of the masses and widths of ρ+ and ρ−, while possible mass or width differences between ρ0 and ρ± are related to isospin-violating electromagnetic effects. Additional contributions to e+e− → π+π−π0 are the so-called "direct term", φ → π+π−π0, and e+e− → ωπ0, ω → π+π−. We use some 2 million e+e− → π+π−π0 events to determine the masses M(ρ^{+,−,0}) and widths Γ(ρ^{+,−,0}) of the three charge states of the ρ meson from a fit to the Dalitz plot density distribution [127]. If we define the variables x = T_+ − T_− and y = T_0, where T_{+,−,0} are the kinetic energies of the three pions in the center-of-mass system (CM), the Dalitz plot density distribution is D(x, y) ∝ |p⃗*_+ × p⃗*_−|² |A_ρπ + A_dir + A_ωπ|², where p⃗*_± are the π± momenta in the CM and A_ρπ, A_dir and A_ωπ are the three amplitudes described above, containing the dependence on the masses and widths of the ρ mesons. π+π−π0 events are selected by asking for two non-collinear tracks with opposite sign of curvature and polar angle θ > 40°, which intersect the interaction region. The acollinearity cut (∆θ < 175°) removes e+e−γ events without incurring an acceptance loss for the signal. The missing mass, m_miss = √[(E_φ − E_+ − E_−)² − |p⃗_φ − p⃗_+ − p⃗_−|²], where E and p⃗ are laboratory energies and momenta, is required to be within 20 MeV of the π0 mass.
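For illustration, the missing-mass variable just defined can be computed as in the following sketch; the event quantities are passed in explicitly, and the π0 mass used for the window is the standard value, assumed here.

```python
from math import sqrt

def missing_mass(E_phi, p_phi, E_plus, p_plus, E_minus, p_minus):
    """m_miss = sqrt((E_phi - E+ - E-)^2 - |p_phi - p+ - p-|^2), with laboratory
    energies in MeV and 3-momenta given as [px, py, pz] lists in MeV."""
    dE = E_phi - E_plus - E_minus
    dp = [p_phi[i] - p_plus[i] - p_minus[i] for i in range(3)]
    m2 = dE**2 - sum(c**2 for c in dp)
    return sqrt(m2) if m2 > 0 else 0.0

M_PI0 = 134.98  # MeV, standard value (assumed, not quoted here)

def passes_pi0_window(m_miss, half_width=20.0):
    """Selection requirement described above: |m_miss - m_pi0| < 20 MeV."""
    return abs(m_miss - M_PI0) < half_width
```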
This missing-mass requirement corresponds to an effective cut of ≤ 20 MeV on the total energy radiated via initial state radiation (ISR). Two photons in the calorimeter are also required, with an opening angle in the π0 rest frame of cos θ_γγ < −0.98. The Dalitz plot variables x and y are evaluated using the measured momenta of the charged pions, boosted to the center-of-mass system; E_φ and p_φ are measured run by run using Bhabha scattering events. The resolution on x and y is about 1 MeV over the full kinematical range. Three χ² fits to the Dalitz plot density distribution have been performed: a) a fit assuming CPT and isospin invariance, i.e. m_ρ0 = m_ρ+ = m_ρ−, Γ_ρ0 = Γ_ρ+ = Γ_ρ−; b) a fit assuming only CPT invariance, i.e. m_ρ+ = m_ρ−, Γ_ρ+ = Γ_ρ−; and finally c) a fit without assumptions on the masses and widths. The convergence of fit (a) shows that the experimental distribution is consistent with CPT and isospin invariance. The ρ masses and widths are close to recent results [21]. The fits also yield the mass and width differences, including the correlations between the parameters; in particular, the neutral and charged ρ masses are essentially equal within statistics. 4.3. φ → ωπ. Using ∼600 pb^−1 collected at center-of-mass energies between 1000 and 1030 MeV, the KLOE experiment studied the production cross sections of the e+e− → π+π−π0π0 and e+e− → π0π0γ processes [128], which mainly proceed through the non-resonant ρ → ωπ0 intermediate state. The dependence of the cross section on the center-of-mass energy is parametrized in terms of a complex parameter Z describing the interference between the φ decay amplitude and the non-resonant processes, whose bare cross section is σ_0(W); m_φ, Γ_φ and D_φ, the mass, the width and the inverse propagator of the φ meson, respectively, also enter the parametrization. A model-independent parametrization is used to describe the increase of the non-resonant cross section with W, which is linear in this energy range: σ_0(W) = σ_0 + σ′(W − m_φ). 10. – The hadronic cross section and a_µ. 1. The muon anomaly a_µ. – The observation of the anomalous magnetic moment of the electron helped drive the development of quantum electrodynamics (QED). The value of the muon anomaly, a_µ, is (m_µ/m_e)² ≈ 40,000 times more sensitive than that of the electron to high-mass states in the polarization of the vacuum. The Muon g−2 Collaboration (E821) at Brookhaven has used stored muons to measure a_µ to 0.5 ppm. The muon anomaly receives contributions from QED, weak and hadronic loops in the photon propagator, as well as from light-by-light scattering. The lowest-order hadronic contribution is a_µ^had ≈ 7000 × 10^−11, with an uncertainty of ∼60 × 10^−11. To the extent that this uncertainty can be reduced, the E821 measurement offers a potential probe of new physics at TeV energy scales. The low-energy contribution to a_µ^had cannot be obtained from perturbative QCD. However, it can be calculated from measurements of the cross section for e+e− annihilation into hadrons via the dispersion integral a_µ^had = (1/4π³) ∫ K(s) σ_had(s) ds, where K(s), the QED kernel, is a monotonic function that varies approximately as 1/s, with s the squared center-of-mass collision energy, see fig. 48, left. This amplifies the importance of the cross-section measurements at low energy. Approximately two-thirds of the integral is contributed by the process e+e− → π+π− for √s < 1 GeV, i.e., in the vicinity of m_ρ.
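A schematic numerical sketch of how such a dispersion integral is evaluated from a binned cross-section measurement is given below; the QED kernel is left as a placeholder argument, since its exact form (approximately ∝ 1/s) is given only in the references.

```python
# Schematic evaluation of a_mu^had = (1 / 4 pi^3) * Integral K(s) sigma_had(s) ds
# with the trapezoid rule over a measured cross-section spectrum.
# `kernel` is a placeholder for the exact QED kernel K(s).
from math import pi

def a_mu_had(s_values, sigma_values, kernel):
    """s_values: bin centers of s (ascending); sigma_values: cross section in
    consistent units; kernel: callable K(s). Returns the kernel-weighted
    trapezoidal integral divided by 4 pi^3."""
    total = 0.0
    for i in range(len(s_values) - 1):
        ds = s_values[i + 1] - s_values[i]
        f0 = kernel(s_values[i]) * sigma_values[i]
        f1 = kernel(s_values[i + 1]) * sigma_values[i + 1]
        total += 0.5 * (f0 + f1) * ds
    return total / (4.0 * pi**3)
```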
Before the arrival of the KLOE results we discuss below, calculations of a_µ^had from world e+e− data (dominated at low energies by the CMD-2 measurement [13]) led to a value of a_µ ∼2.4σ lower [130] than the final value reported by E821. 2. Measurement of σ_π+π− and a_µ at KLOE. – DAΦNE is highly optimized for running at W = m_φ, and no large variations of W are accessible. However, initial state radiation (ISR), via the process e+e− → π+π−γ, naturally provides access to hadronic states of lower mass. The cross section σ_π+π− over the entire interval from threshold to m_φ is related to the distribution of s_π = M²_ππ for π+π−γ events by s_π (dσ_ππγ/ds_π)|_ISR = σ_π+π−(s_π) H(s_π|s), where H, the "radiator function", describes the ISR spectrum. Note the subscript ISR on the differential π+π−γ cross section. This is very important, because the contribution from final-state photon emission (FSR) is of the same order as that from the ISR process unless one imposes special fiducial-volume cuts to suppress it. On the other hand, as depicted in fig. 49, one must remember eventually to include FSR in the final evaluation of a_µ^had. To correctly calculate H and estimate the effects of FSR, an accurate simulation of the ππγ cross section is critical. The radiator function used by KLOE to obtain σ_ππ is provided by the Phokhara code [131] by setting the pion form factor F_π(M_ππ) = 1, see fig. 48, right. The Phokhara event generator has been continuously improved by the inclusion of next-to-leading-order ISR (two initial-state photons), leading-order FSR and ISR-FSR [132]. Furthermore, σ_ππ has to be corrected for the running of the fine-structure constant [133] (vacuum polarization) and for the shift from M_ππ to the virtual-photon mass M_γ* for those events with both an initial- and a final-state radiated photon. The Phokhara version of ref. [134] also contains these contributions, and it is used for the evaluation of the acceptance correction and of all reconstruction efficiencies. 2.1. Extraction of a_µ^ππ and the pion form factor. The differential cross section of e+e− → ππγ as a function of M_ππ is obtained by subtracting, bin by bin, the background events N_bkg from the number of observed events N_obs, correcting for the selection efficiency ǫ_sel(M²_ππ), and finally dividing by the total luminosity obtained as described in sect. 3.3: dσ_ππγ/dM²_ππ = (N_obs − N_bkg)/(∆M²_ππ ǫ_sel(M²_ππ) ∫L dt). Our mass resolution allows us to use bins of width ∆M²_ππ = 0.01 GeV². The background content is found by fitting the spectra of the selected data sample with a superposition of Monte Carlo distributions describing the signal and background sources. The only free parameters of such fits are the relative weights of signal and backgrounds in the data, computed as a function of M_ππ. The "bare" cross section σ^bare_ππ(γ), which is obtained after applying the corrections written in eq. 96 and is inclusive of FSR, is used to determine a_µ^ππ = (1/4π³) ∫ K(s) σ^bare_ππ(γ)(s) ds, where the lower and upper bounds of the spectrum depend on the specific measurement configuration. The measured σ_ππ is related to the pion form factor via the relation σ_ππ(s) = (πα²/3s) β³_π |F_π(s)|², where s is the center-of-mass energy squared, m_π is the pion mass and β_π = √(1 − 4m²_π/s) is the pion velocity in the center-of-mass frame. Since |F_π(s)|² is measured by many experiments, it is customarily used for spectra comparisons.
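The relation just given between σ_ππ and |F_π|² can be inverted directly; a minimal sketch, using standard values of α and m_π that are assumed here rather than quoted in the text, is shown below.

```python
# sigma_pipi(s) = (pi alpha^2 / 3 s) * beta_pi^3 * |F_pi(s)|^2,
# beta_pi = sqrt(1 - 4 m_pi^2 / s); natural units (s in GeV^2, sigma in GeV^-2).
from math import sqrt, pi

ALPHA = 1.0 / 137.036   # fine-structure constant (standard value, assumed)
M_PI = 0.13957          # charged-pion mass in GeV (standard value, assumed)

def beta_pi(s):
    return sqrt(1.0 - 4.0 * M_PI**2 / s)

def form_factor_squared(s, sigma_pipi):
    """Invert the relation above: |F_pi(s)|^2 from the measured sigma_pipi(s)."""
    return sigma_pipi * 3.0 * s / (pi * ALPHA**2 * beta_pi(s)**3)
```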
Two different selection regions are used: the first is named "small-angle γ selection" (SA), since the photon is required to be within a cone of θ_γ < 15° around the beam line (narrow cones in fig. 50, left); the second is called "large-angle γ selection" (LA), since there must be at least one photon at a polar angle 50° < θ_γ < 130° (large central cones in fig. 50, left). In both cases the two charged-pion tracks must have 50° < θ_π < 130°. 2.2. Small-angle γ selection and data analysis. At small photon polar angles θ_γ, ISR events are vastly more abundant than FSR events. Requiring angular separation between the pions and the photon further suppresses FSR [135]. Thus, the first KLOE measurement [9] was performed using this event selection. Since ISR photons are mostly collinear with the beam line, high statistics for the ISR signal events remain. On the other hand, a highly energetic photon emitted at small angle forces the pions to be at small angles as well (and thus outside the selection cuts), resulting in a kinematical suppression of events with M²_ππ < 0.35 GeV². In this analysis the photon is not explicitly detected. Its direction is reconstructed from the tracks' momenta by closing the kinematics: p⃗_γ ≃ p⃗_miss = p⃗_φ − (p⃗_π+ + p⃗_π−). The separation between the pion and photon selection regions greatly reduces the contamination from the resonant process e+e− → φ → π+π−π0, in which the π0 mimics the missing momentum of the photon(s), and from the final-state radiation process e+e− → π+π−γ_FSR. Discrimination of π+π−γ from e+e− → e+e−γ events is done via particle identification based on the time of flight and on the shape and energy of the clusters associated with the tracks. In particular, electrons deposit most of their energy in the first planes of the calorimeter, while minimum-ionizing muons and pions release their energy uniformly across the planes. Events with at least one of the two tracks not identified as an electron are selected. This criterion gives a rejection power of 97% for e+e−γ events, while retaining a selection efficiency of ∼100% for π+π−γ events (see fig. 51, left). Contamination from the processes e+e− → µ+µ−γ and φ → π+π−π0 is rejected by cuts on the track-mass variable m_trk, defined by four-momentum conservation assuming a final state consisting of two particles with the same mass and one photon, and on the missing mass, m²_miss = E²_X − |P⃗_X|², defined assuming the process to be e+e− → π+π−X. The first measurement, based on the 2001 data set, was published in [9]; we will refer to it as KLOE05 in the following. The statistical errors ranged from ∼2% at the lower limit in s_π to ∼0.5% at the ρ peak. The experimental systematic uncertainties were mostly flat in s_π and amounted to 0.9%. The luminosity was estimated using large-angle Bhabha scattering and contributed an additional, dominantly theoretical, uncertainty of 0.6%. The cross section σ_π+π− was obtained via eq. 96. The contribution to the systematic uncertainty on σ_π+π− from FSR was 0.3%, and the theory accuracy of H was 0.5%. Finally, for the evaluation of a_µ^had it was necessary to remove the effects of vacuum polarization in the photon propagator for the process e+e− → π+π− itself, by correcting for the running of α_em. This contributes 0.2% to the systematic uncertainty.
Thus from KLOE05 analysis we obtained for the contribution to a ππ µ in the energy interval 0.35 < s < 0.95 GeV 2 : a ππ µ (0.35 < s < 0.95 GeV 2 ) = (388.7 ± 0.8 stat ± 3.5 syst ± 3.5 th ) × 10 −10 . Inclusion of the KLOE05 data together with the CMD-2 data in the evaluation of a had µ increased the discrepancy between the calculated and measured values of a µ to (25.2 ± 9.2) × 10 −10 (∼2.7σ) [136]. Furthermore, the precise KLOE spectrum exhibited a pronounced deviation from those derived from τ lepton's spectral functions that led to the later being excluded in a µ compilations [136]. KLOE in 2008 completed the analysis of 241 pb −1 data taken in 2002, which we will refer to as KLOE08 in the following. These data benefit from cleaner and more stable running conditions of DAΦNE resulting in less machine background. In particular the following changes are applied with respect to the data taken in 2001: i) an additional trigger level was implemented at the end of 2001 to eliminate a 30% loss due to pions penetrating up to the outer calorimeter plane and being misidentified as cosmic rays events; ii) the offline background filter, which contributed the largest experimental systematic uncertainty (0.6%) to the published work [9], has been improved and includes now a downscale algorithm providing an unbiased control sample. This greatly facilitates the evaluation of the filter efficiency which increased from 95% to 98.5%, with negligible systematic uncertainty; iii) in addition, the knowledge of the detector response and of the KLOE simulation program has been improved. Thus the relative systematic errors on the extraction of a ππ µ in the mass range [0.35,0.95] GeV 2 decreased from 1.3% for the 2001 to 0.9% for 2002 data. Finally, we want to stress that the dominant term contributing to the systematic error in KLOE08 analysis is given by the 0.5% uncertainty quoted by the authors of Phokhara [134]. The KLOE08 results are shown in fig. 52. On the left the number of π + π − γ events per 0.01 GeV 2 are shown. The π + π − γ differential cross section is shown in the right. Finally, while performing the 2008 analysis we revisited the published 2001 data analysis, named KLOE05 updated, and we found a bias in the evaluation of the trigger correction that affects mostly the low M ππ region. Correcting for this effect and normalizing to the new Bhabha cross section we updated the published spectrum to compare with KLOE08. Fig. 53 , left shows the pion form factor evaluated from these two sets of data. Comparison among a ππ µ ([0.35, 0.95]GeV 2 ) values in units of 10 10 evaluated with the KLOE small angle γ selection analyses: KLOE05: 388.7 ± 0.8 stat ± 3.5 sys ± 3.5 th ; KLOE05 Updated: 384.4 ± 0.8 stat ± 3.5 sys ± 3.5 th ; KLOE08: 387.2 ± 0.5 stat ± 2.4 sys ± 2.3 th . We see that in terms of σ ππ the net shift of the updated value is about one standard deviation below the KLOE05 one and it is also in excellent agreement with the KLOE08 analysis. -Left: KLOE08 π + π − γ events per 0.01 GeV 2 . Right: KLOE08 π + π − γ differential cross section Finally, we make a comparison of the most recent a ππ µ evaluations released by the CMD-2 [137] and SND [138] experiments with the KLOE08 [139], in the mass range M ππ ∈ [630, 958] MeV. 
CMD-2: 361.5 ± 1.7 stat ± 2.9 sys ; SND: 361.0 ± 2.0 stat ± 4.7 sys ; KLOE08: 356.7 ± 0.4 stat ± 3.2 sys ; The CMD-2 and SND dispersion integrals are performed with the trapezoid rule, the KLOE value is done extrapolating our bins to match the [630,958] MeV mass range, and summing directly the bin contents of the dσ ππγ differential spectrum, weighed for the kernel function. The CMD-2 and SND values agree with the KLOE08 result within one standard deviation. Thus, inclusion of the latest KLOE results in the world average definitely established without a doubt the discrepancy between the measured and calculated values of a µ , leading to hopes that there is a glimpse of which direction we should search for new physics. 2.3. Large angle γ selection and data analysis. The contribution to a had µ from e + e − → π + π − in the interval in s between threshold and 0.35 GeV 2 is approximately 1000 × 10 −11 . To explore the low-mass part of the spectrum, KLOE selects events requiring a photon in the calorimeter with energy greater than 50 MeV and 50 • < θ γ < 130 • . With the photon explicitly detected, about 40% of the background from φ → π + π − π 0 events is rejected by kinematic closure. A further variable, the 3-dimensional angle Ω between the missing momentum and the photon, which is peaked at 0 degrees for signal events, removes most of the remaining φ → π + π − π 0 background. There are however two sources of irreducible background : FSR and φ → f 0 γ → π + π − γ. FSR here is a very complicated problem: for θ γ > 40 • ISR and FSR events contribute nearly equally to the π + π − γ spectrum in the tails of the ρ, and the accuracy of the generator used to obtain FSR corrections is critical. As discussed by Binner et al. in Ref. [135], the ISR-FSR interference results in a mea- surable charge asymmetry, as well as a distortion in the mass spectrum. The measured charge asymmetry can be used as a useful gauge of the accuracy of the simulation. KLOE has performed comparisons of this type in the analysis of φ → π + π − γ as discussed in the previous section on scalar mesons [139]. We have made a measurement of σ ππ with photon emitted at large angle using the data collected in 2002. Since it is an independent event selection scheme, by comparing this result with the small angle γ analysis, we can also test the present models of FSR contributions. Fig. 54 shows both spectra; note that the ππ threshold is reachable with the large angle selection. To make a quantitative comparison, at present we are limited by the unknown interference between the FSR process and the resonant decays φ → π + π − γ as mentioned before. Therefore we limit the comparison to the m 2 ππ range where resonance contributions are negligible, that is, a µ to [0.5,0.85] GeV 2 where from the "small angle selection" we obtain 255.4 ± 0.4 stat ± 2.5 sys and for the "large angle selection", 252.5 ± 0.6 stat ± 5.1 sys , in excellent agreement. The main source of systematic uncertainty in the large γ angle selection is the f 0 (980) background subtraction. The unavoidable presence of photons from final state emission of both the pions and the muons in the data leads to a deviation from the "ideal" eq. 100 which requires treatment of final state radiation (FSR) with care. The advantage of this method is that one does not need an absolute luminosity measurement, thus providing a totally independent measurement of the pion form factor from those obtained using small or large angle γ selections. 
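A heavily simplified sketch of the idea behind this bin-by-bin ratio (the "ideal" relation referred to above as eq. 100) is given below; the normalisation uses lowest-order QED formulas and standard mass values, and is only meant to show that the radiator function and the luminosity cancel in the ratio — it is not the exact prescription of the KLOE analysis.

```python
# pi+pi-gamma and mu+mu-gamma spectra share the same radiator function and
# luminosity, so their bin-by-bin ratio yields |F_pi|^2 up to calculable QED
# factors (here: lowest-order formulas; FSR corrections are ignored).
from math import sqrt, pi

ALPHA = 1.0 / 137.036
M_PI, M_MU = 0.13957, 0.10566   # GeV, standard values (assumed)

def sigma_mumu(s):
    """Lowest-order e+e- -> mu+mu- cross section in natural units (GeV^-2)."""
    beta = sqrt(1.0 - 4.0 * M_MU**2 / s)
    return (4.0 * pi * ALPHA**2 / (3.0 * s)) * beta * (3.0 - beta**2) / 2.0

def form_factor_sq_from_ratio(s, n_pipi, n_mumu):
    """|F_pi|^2 from efficiency-corrected pi pi gamma and mu mu gamma counts in
    the same s bin; the radiator function and luminosity cancel in n_pipi/n_mumu."""
    beta_pi = sqrt(1.0 - 4.0 * M_PI**2 / s)
    sigma_pipi = (n_pipi / n_mumu) * sigma_mumu(s)
    return sigma_pipi * 3.0 * s / (pi * ALPHA**2 * beta_pi**3)
```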
The experimental challenge of measuring the pion form factor using the ratio is the overwhelming µµ cross section over ππ cross section at low momentum, see fig. 55, left. Furthermore, there is an overlap region between the ππγ events and µµγ events which makes precision event counting/separation a delicate operation. fig. 55, right, shows the separation between π + π − γ and µ + µ − γ events, achieved selecting different m trk regions around the missing mass peaks. The residual contamination of π + π − π 0 events are seen at high m trk values. We are developing additional tools for πµ separation and the results from extracting |F π (s)| 2 spectra from using the ratio is extremely promising [141]. We are confident of obtaining a ππ µ from yet another independent method by the end of 2008. -Conclusions and outlook The contributions by KLOE to particle physics during the years 2002-2008 are of great importance. The study of the decays of the kaons have demonstrated unitarity of the CKM matrix with accuracy better than one per mil and universality of the couplings between leptons and quarks. New mass and lifetime values have been obtained. Studies of the η meson have reached new levels of precision. KLOE has made the most accurate measurement of the hadronic cross section pertaining to the calculation of the muon anomaly, and has made the most detailed studies on the nature of light scalar mesons. A summary of all the masses, lifetimes, widths and branching ratios measured by KLOE to date is shown in table XIV. Results of tests on conservation of discrete symmetries, as well as on the unitarity of the CKM matrix are shown in table XV. It is interesting to note that over half of the results noted above are based on analysis of one fifth of the total KLOE data sample. Several of the measurements have reached the state where additional statistics will not improve accuracy due to theoretical or intermediate state uncertainties as described in the specific sections. There are measurements that could profit with respect to statistical and systematic errors continuing the studies of the entire body of data. Furthermore, several new measurements are only possible using the whole data set whose analysis is expected to be completed in the coming couple of years. These are discussed in the following three subsections. In the fourth subsection we describe a future project KLOE-2 that entails minor upgrade of KLOE in order to complete the original physics program proposed in 1992 but had been stymied by DAΦNE's limited luminosity. Thus it is clear that we have our hands full for the foreseable future! 1. Kaon sector . -1. Studies on rare kaon decays can benefit of the use of the entire KLOE statistics. The accuracy in the determination of the branching ratio of K S → π + e −ν (γ) events and of the related charge asymmetry will scale with statistics with respect to those shown in sect. 5 . 4.2, thus reaching ∼0.5% (fractional) and 5×10 −3 respectively. 2. We have observed the K S → π + µ −ν (γ) decay channel and will aim at reaching a precision better than 2% on its BR. 3. We plan to improve our limit on K S → 3π 0 decays (see sect. 5 . 4.4). Preliminary studies have shown that, by slightly modifying the analysis cuts, one can improve on background rejection, while leaving the signal efficiency almost unaffected. This means that we can hope to set the limit on the corresponding BR at the level of ∼2×10 −8 . 4. 
Moreover, we can search for K_S → π+π−π0 events, which proceed mainly via a CP-conserving ∆I = 3/2 transition. A preliminary, but rather detailed, analysis shows that KLOE should be able to measure the corresponding branching ratio with ∼60% precision, comparable to the precision with which this BR has been measured by three experiments. 5. The CP-violating K_L → 2π0 events can be measured with ∼0.5% accuracy, thus setting at the few-per-mil level the precision on Re(ǫ′/ǫ) which KLOE can reach. 6. The study of the main K_L decay channels will allow us to improve the accuracy of our measurement of the K_L lifetime by a factor of ∼2. We can also improve the lepton-universality and CKM-unitarity results summarized in table XV, an excerpt of which reads: lepton universality, r_µe (from f+(0)V_us for K^{±,0}_e3 and K^{±,0}_µ3) = 1.000 ± 0.008; Γ(K± → e±ν_e)/Γ(K± → µ±ν_µ) = (2.55 ± 0.05 ± 0.05) × 10^−5; CKM matrix unitarity, f+(0)V_us = 0.2157 ± 0.0006; V_us/V_ud × f_K/f_π = 0.2766 ± 0.0009; |V_us|² + |V_ud|² − 1 (V_ud from [87]) = −0.0004 ± 0.0007 (∼0.6σ). 9. Semi-rare charged-kaon decay channels can also be studied. In particular, we are planning to measure the branching ratio of the decay into three charged pions. With the present data sample a few-per-mil statistical accuracy can be reached. Using the same data set, the measurement of the branching ratio of the K± → π0π0e±ν_e decay can reach a 10% statistical accuracy. This will allow us to complete the measurement of the main K± branching ratios. 2. Light meson spectroscopy. – Many analyses in light-meson spectroscopy could be refined in the future. Although the level of accuracy of the measurements in the scalar sector has already reached its limit, a combined analysis of different final states, such as φ → π+π−γ/φ → π0π0γ, will give more insight into the structure of the light scalar mesons. On the other hand, the pseudoscalar mesons could benefit from the larger statistics of the whole KLOE data sample. 1. In particular, we can complete the study of the main η decay channels by measuring the BRs of η → π+π−γ and η → e+e−γ with a statistical precision of a few per mil. 2. Moreover, the knowledge of numerous rare η decay channels can be improved. Apart from lowering the limits on the C- (P-, CP-) violating decays η → 3γ (η → π+π−, π0π0), we could reach a statistical precision of 3.5% on BR(η → π+π−e+e−) and measure for the first time the asymmetry between the ππ and ee decay planes.
Expert System to Determine Characteristics of Students With Special Needs With Forward Chaining Method Pandeglang is one of the disadvantaged areas in Banten Province. The regional original income in this city is very small compared to other cities in the Banten region. Here there are rarely human resources who do community service. This is due to the low regional minimum wage value. Therefore, an expert system was made to determine the character of special needs children. This requires a method-based, method step so that the completion step is proceeding properly. Forward chaining method is a method based on fitting rules in determining the character of students with special needs. This research contributes to teacher training in the field of counseling and computer science, especially in the field of expert systems. Keywords—Expert System, Children Characteristic, SLB I. INTRODUCTION Nowadays the development of computers has undergone many very rapid changes, along with human needs that are increasingly numerous and complex. Computers are human aids in completing work. One reason why computers are more likely to be said to be human aids is that the speed and accuracy of the process are more reliable. Human desire to create something new which can help ease the burden of work constantly being done. This is because so many conveniences offered by computers, both in terms of accuracy and speed of information. This encourages experts to increasingly develop computers in order to help human work or even exceed the ability of human work. Artificial intelligence or artificial intelligence is part of computer science that makes machines (computers) can do work as and as well as humans do. Intelligent systems (intelligent systems) are systems that are built using artificial intelligence techniques. Expert Systems are knowledge-based programs that provide expert quality solutions to problems in a specific domain. Expert systems are computer programs that mimic expert thought processes and knowledge in solving a particular problem. This type of program was first developed by artificial intelligence researchers in the 1960s and 1970s and was applied commercially during the 1980s. The development of science and technology is greatly influenced by globalization which brings positive changes in various fields of social life, development is science and technology in the form of computer development which was originally used as a calculation tool, but also as a tool to help solve problems and can display various information in data processing automatically. Another example of the field of artificial intelligence development is an expert system that combines knowledge and data tracking to solve problems that normally require human expertise. The purpose of developing expert systems is actually not to replace the role of humans, but to substitute human knowledge in the form of systems, so that it can be used by many people.  Problem formulation-This research contributes to the science of education and to computer scientists, especially in the field of expert systems, in this study also contributes to the view of schools that have difficulty in getting teacher guidance counseling, this expert system can help these problems.  Limitation of the problem-This study only discusses the characteristics of services, with a forward chaining methodology approach. 
 Research Functions and Objectives-The function of this study is to determine the character of students in SLB schools, because determining the character of students with special needs is more difficult than with normal students in general. II. METHOD In this study using the forward chaining methodology approach the reason using the forward chaining approach is based on the rules studied. A. Characteristics In early childhood character education books [2] Look at the character refers to a series of attitudes, (behaviors), (motivation), and (skills). The real character comes from Greek which means "to mark" or"marking", and focus on how to apply the value of goodness in the form of actions or behavior. Character is the character, character, character, or personality of a person that is formed from the internalization of various virtues that are believed and used as a basis for perspective, thinking, and acting. B. Inference Mechanism Inference theory is part of an expert system that makes reasoning by using the contents of a list of rules based on a certain sequence and pattern. During the consultation process between systems and users, the inference mechanism tests the rules one by one until the conditions are correct. In general, there are two main techniques used in the inference mechanism for testing rules, those are forward chaining and backward chaining. C. Forward Chaining In advanced reasoning, the rules are tested one by one in a certain order. The sequence may be the order in which rules are entered into the rule base or also other sequences specified by the user. When each rule is tested, the expert system will evaluate whether the condition is wrong or wrong. If the conditions are right, then the rule is saved then the next rule is tested. Conversely, if the condition is wrong, the rule is not saved and the next rule. D. Backward Chaining It is the reasoning of a set of hypotheses towards the supporting facts, so the tracking process starts backwards by determining the conclusions to be sought and then the facts of the conclusion builders. III. RESULTS AND DISCUSSION Knowledge representation with rules is often called a production system. One rule consists of 2 (two) parts, i.e.: A. Antocedent that is the part that expresses a situation or premise (knowledge begins with IF) Consequent is the part that says a certain action or conclusion is applied if the situation or premise is true (statement starts with THEN). Inference with rules (as well as logic can be effective, but there are some limitations on certain techniques B. Expert system Expert systems are computer-based applications that are used to solve problems as thought by experts (Kusrini, 2008). The experts referred to here are people who have special expertise who can solve problems that cannot be solved by ordinary people. In its preparation, the expert system combines inference rules with certain knowledge bases provided by one or more experts in a particular field. The combination of these two things is stored in a computer, which is then used in the decision making process for solving certain problems. The development of expert systems aims to implement expert knowledge on software that can be used easily by users (Istiqoma and Fadlil, 2013). To build an expert system, several basic components are needed: Knowledge Base; Inference Machine; Database; User Interface. 
The basic components needed to build an expert system are explained as follows (Hu et al., 1987):

1) Knowledge Base. The knowledge base is a representation of an expert's knowledge, which can then be encoded in a programming language designed for artificial intelligence (for example PROLOG or LISP) or in an expert system shell (for example EXSYS, PC-PLUS, CRYSTAL, etc.).
2) Inference Engine. The inference engine guides the reasoning process for a given condition based on the available knowledge base. Within the inference engine, the rules, models, and facts stored in the knowledge base are manipulated and directed in order to reach a solution or conclusion.
3) Database. The database is used to store data from observations and other data needed during processing.
4) User Interface. This facility serves as the intermediary for communication between the user and the system.

A comparison can be made between the capabilities of a human expert and those of a computer system regarded as an expert system, which guides expert system development.

C. Children with Special Needs

The concept of children with special needs has a broader meaning than the notion of exceptional children. Children with special needs are children who, in education, need specific services different from those for children in general, because they experience obstacles in learning and development; they therefore need services suited to the learning needs of each child. In general, children with special needs fall into two categories: children whose special needs are permanent, resulting from particular disorders, and children whose special needs are temporary, that is, children whose learning and development barriers are caused by environmental conditions and situations. Every child with special needs, whether permanent or temporary, has different learning constraints and learning needs. The learning barriers experienced by each child are caused by three things: environmental factors; factors internal to the child; or a combination of environmental and internal factors. Each child with special needs has characteristics that differ from the others; children included in the special-needs category include, among others, the visually impaired, the deaf, the mentally retarded, the blind, the physically disabled, children with learning difficulties, slow learners, autistic children, gifted children, and hyperactive children (Delphie, 2009).

1) Knowledge Engineering

The conversion of the production rules into a decision table for children with special needs can be seen in Table 1, where rows indicate characteristics and columns indicate types of children with special needs (ABK). The recoverable fragment of the category coding is:

No.  Code     Category
6    ABK 06   Children with behavioral and emotional disorders
7    ABK 07   Children with specific learning difficulties
8    ABK 08   Slow-learning children
9    ABK 09   Autistic children
10   ABK 10   Gifted children
-    -        Hyperactive children

2) Results and Review

IV. CONCLUSION

Based on the analysis of an expert system for diagnosing the characteristics of children with special needs using the forward chaining method, the following can be concluded:
1. An expert system that diagnoses the characteristics of children with special needs using the forward chaining method makes it easier to determine the characteristics of a child and to provide solutions to the problem of determining the characteristics of children with special needs (ABK).
2. This research produces an expert system algorithm to determine the characteristics of children with special needs.
Diagnosis of Induction Motor Faults Based On Current and Vibration Signals Using Support Vector Machine Model

Early fault diagnosis of an induction motor can prevent sudden failure of the motor, which would mean loss of production and sometimes safety problems. The purpose of this paper is to explore a method for induction motor fault diagnosis using a support vector machine (SVM) model. The raw current and vibration signals are pre-processed using variational mode decomposition to eliminate noise. Eight time-domain features are extracted from the signals and then evaluated using principal component analysis to reduce their dimension. Two principal components that cover 95% of the variance of the data are used as predictors in the SVM model. SVM models with different types of kernels are evaluated for their performance. The results show that the current signals give better accuracy in diagnosing induction motor faults than the vibration signals: the current signals perform very well at all speeds and with all types of kernels, with 100% accuracy on both training and testing data. Meanwhile, the accuracy of the vibration signal in diagnosing the motor faults is good at a speed of 749 rpm and decreases at a speed of 1499 rpm.

Introduction

The invention of induction motors in the 1800s heralded the beginning of their use in modern industrial environments. In these motors, the interaction of the revolving magnetic field and the rotor converts electrical energy into mechanical energy [1]. Induction motors have various advantages, including a simple design, low production costs, ease of maintenance, high efficiency under ordinary operating conditions, and dependable operation [2]. Nevertheless, extended and continuous operation can lead to the emergence of issues within the components of induction motors. These problems arise primarily from unavoidable friction, which generates excessive heat in different motor parts [3]. Faults in induction motors can be categorised into two main types, mechanical and electrical [4], and they may occur independently or simultaneously. Among the typical faults encountered, bearing faults constitute approximately 41%, stator faults 37%, rotor faults 10%, and other types of faults the remaining 12% [5].

As a result, early detection is critical for proactively avoiding unforeseen problems in induction motor components. Incorporating such preventative measures is essential, especially when developing maintenance strategies to ensure continuous and flawless operation of modern industrial processes [6], [7]. To accomplish this, several monitoring systems can be used to examine the state of the machinery and identify possible problems before they become serious [4]. There are three approaches to implementing such monitoring methods. Firstly, the model-based approach utilises mathematical modelling. Secondly, the signature-extraction approach extracts signals such as vibrations, temperatures, sounds, loads, and currents in the time and/or frequency domain from the operating motor. Lastly, the knowledge-based approach builds on conventional methods and often employs artificial intelligence to apply predictive-maintenance techniques [7].
Numerous artificial intelligence techniques are employed in predictive maintenance, including expert systems (ES), artificial neural networks (ANNs), fuzzy logic systems (FLS), genetic algorithms (GAs), and support vector machines (SVM) [8]. Among these, the SVM approach has gained widespread popularity and is currently the preferred choice, mainly because of its excellent predictive performance and its ability to keep training and testing times short, which reduces the processing load during analysis [3]. The SVM method is particularly well suited to diagnosing faults in rotating machines that suffer from excessive vibration, and it is therefore extensively used for damage diagnosis and for classifying the types of damage that may occur in such machines [9].

The predictors that feed the SVM model influence its accuracy and efficacy. The predictors, also known as features, are extracted and selected from the measured signals and represent those signals; they can be statistical metrics of the signal such as the mean, median, RMS, variance, kurtosis, skewness, and so on. The measured signal is frequently mixed with noise, so the derived features can lose key information about the health of the motor. To overcome noisy signals, some researchers propose decomposing the signals into their components using the discrete wavelet transform [10], [11], while others use empirical mode decomposition [12], [13]. A relatively recent and advanced method for signal decomposition is variational mode decomposition (VMD). By applying VMD as the decomposition step, the raw signal can be effectively broken down into its constituent components [14]. The resulting signals obtained through VMD are then used for further analysis and classification with the SVM, leading to more accurate and reliable fault detection and diagnosis.

Diagnosing the state of a rotating machine requires extracting features that effectively capture the properties of the signal. However, not all statistical variables are equally appropriate for correctly characterising damage patterns, and using too many of them raises the computational cost. It is therefore important to choose the most significant statistical features carefully and to reduce the overall dimensionality of the data. Principal component analysis (PCA) is one method for accomplishing this: it helps to find and retain the most important features while eliminating those that are less relevant, simplifying the data and streamlining the analysis [15].

The most common signals used to monitor induction motor health are vibration and current signals. The aim of this paper is therefore to apply a support vector machine model to diagnose induction motor faults using vibration and current signals, with the raw signals decomposed using the variational mode decomposition method to eliminate noise.

Methodology

The data for the induction motor faults are taken from the induction motor test rig shown in Fig. 1. The test rig consists of a 2-hp induction motor, an inverter, and an AC voltage regulator. In this test, artificial faults are introduced into the bearing and the stator of the induction motor to simulate real conditions. The artificial bearing defect is created by drilling a 1 mm hole in the outer race using electrical discharge machining (EDM), while the artificial stator defect is induced by applying a 10% voltage drop to one of the phases using the AC voltage regulator. The motor was operated at two different speeds: 749 and 1499 rpm.

The data are acquired for 1 second every 5 minutes and sampled at 20,000 Hz. 300 data sets are collected for each motor condition, giving 900 samples in total. The data are then divided into training data and testing data in an 80% to 20% ratio.

Data Preprocessing

Data preprocessing is the first step in processing the measured data. The raw signals are decomposed using variational mode decomposition (VMD) in order to reduce irrelevant external signals by selecting the intrinsic mode functions (IMFs) with the highest relative energy. VMD is used because the signals generated in this experiment exhibit non-stationary or semi-stationary behaviour with limited frequency variation. Fig. 2 shows the preprocessing of the current and vibration signals using VMD.

Feature Extraction and Selection

Feature extraction is a crucial step in selecting the statistical features that have a significant impact on the analysis. Careful selection of statistical features can enhance the accuracy on both the training and testing data, reduce overfitting, improve computational efficiency, and eliminate irrelevant or noisy features. The statistical features used are crest factor (CF), kurtosis (Kur), mean, RMS, skewness (Sk), standard deviation (Std), variance (Var), and peak-to-peak (P2P). These eight statistical features are used because together they are considered to represent the characteristics of each artificial fault variation.

To aid understanding of the relationship between statistical features, the number of statistical features was varied. This provides information about the dataset's characteristics for different numbers of features, helps to identify patterns or trends for each variation, and assists in evaluating the model's performance on the training and testing datasets for each variation. The features are evaluated in stages, starting with 4 features and increasing to 8 features; the feature compositions are described in Table 1.

The results of each feature-extraction variation are then processed with principal component analysis (PCA) to reduce the dimensionality of the data by projecting the correlated variables of the original dataset onto a lower-dimensional space. Each principal component (PC) produced by this step carries a proportion of the variance. This study employs two PCs, PC1 and PC2, whose combined explained variance exceeds 95%; these are subsequently used as predictors in the classification procedure. A brief sketch of this feature-extraction and PCA step, under stated assumptions, is given below.
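As a rough illustration of the feature-extraction and PCA step described above, the sketch below computes the eight time-domain features per segment and projects them onto two principal components with scikit-learn. The array `segments` and the synthetic data at the end are placeholders; the VMD denoising itself is assumed to have been applied already and is not shown here.

```python
# Sketch of the feature-extraction and PCA step (numpy / scipy / scikit-learn).
# `segments` is assumed to hold VMD-denoised 1-second signal segments,
# one per row; the synthetic data below merely stands in for the 900 samples.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA

def time_domain_features(x):
    """Eight time-domain features used in the paper: crest factor, kurtosis,
    mean, RMS, skewness, standard deviation, variance, peak-to-peak."""
    rms = np.sqrt(np.mean(x ** 2))
    return np.array([
        np.max(np.abs(x)) / rms,   # crest factor
        kurtosis(x),               # kurtosis (Fisher definition)
        np.mean(x),                # mean
        rms,                       # RMS
        skew(x),                   # skewness
        np.std(x),                 # standard deviation
        np.var(x),                 # variance
        np.ptp(x),                 # peak-to-peak
    ])

def build_predictors(segments, n_components=2):
    """Stack per-segment features and project them onto the first two
    principal components, which serve as the SVM predictors."""
    feats = np.vstack([time_domain_features(seg) for seg in segments])
    pca = PCA(n_components=n_components)
    pcs = pca.fit_transform(feats)
    print("explained variance of the retained PCs:",
          pca.explained_variance_ratio_.sum())
    return pcs, pca

# Example with synthetic data standing in for the measured segments:
rng = np.random.default_rng(0)
segments = rng.normal(size=(900, 20000))   # 900 segments of 1 s at 20 kHz
pcs, pca = build_predictors(segments)
```

In practice the eight features would usually be standardised before PCA; the paper does not state whether this was done, so it is omitted from the sketch.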
Support Vector Machine

The support vector machine (SVM) classification method is used to diagnose the induction motor faults, which is a classification problem involving more than two classes. SVM handles such multiclass tasks through the one-vs-one technique, a widely adopted approach in which a separate SVM model is constructed for each pair of classes; the models are then combined to classify the data, enabling efficient and accurate classification of the multiple fault types of the induction motor.

The data for each class are divided into training data (TRD) and testing data (TED) at 80% and 20%, respectively. The purpose of this split is to ensure that the SVM model generalises: the training data are used to build and train the SVM model, while the testing data are used to test the trained model's ability to classify data it has not seen.

The classification is carried out for the different feature variations and for different SVM kernel types: linear (SVM-L), quadratic (SVM-Q), cubic (SVM-C), fine Gaussian (SVM-FG), medium Gaussian (SVM-MG), and coarse Gaussian (SVM-CG). Each kernel function yields its own classification accuracy, allowing the most effective model to be identified by kernel selection. The accuracy of classifying the induction motor's condition is then analysed, and an error analysis is performed by comparing the outcomes obtained from the training data and the testing data. A hedged code sketch of this classification setup is given below.
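The following sketch sets up the classification with scikit-learn's SVC, whose multiclass handling is internally one-vs-one. The label vector, the gamma values standing in for the fine, medium, and coarse Gaussian kernels, and the reuse of the `pcs` predictors from the previous sketch are assumptions; the paper does not state its exact kernel parameters.

```python
# Sketch of the one-vs-one SVM classification step (scikit-learn).
# `pcs` are the two principal-component predictors from the previous sketch;
# `labels` (0 = normal, 1 = bearing fault, 2 = stator fault) are placeholders
# whose ordering merely mirrors the 300 segments per motor condition.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

labels = np.repeat([0, 1, 2], 300)

X_train, X_test, y_train, y_test = train_test_split(
    pcs, labels, test_size=0.2, stratify=labels, random_state=0)

kernels = {
    "SVM-L":  SVC(kernel="linear"),
    "SVM-Q":  SVC(kernel="poly", degree=2),
    "SVM-C":  SVC(kernel="poly", degree=3),
    "SVM-FG": SVC(kernel="rbf", gamma=10.0),   # "fine" Gaussian (assumed gamma)
    "SVM-MG": SVC(kernel="rbf", gamma=1.0),    # "medium" Gaussian (assumed gamma)
    "SVM-CG": SVC(kernel="rbf", gamma=0.1),    # "coarse" Gaussian (assumed gamma)
}

for name, model in kernels.items():
    model.fit(X_train, y_train)                # SVC trains one-vs-one pairwise models
    train_acc = accuracy_score(y_train, model.predict(X_train))
    test_acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: train {train_acc:.3f}, test {test_acc:.3f}")
    print(confusion_matrix(y_test, model.predict(X_test)))
```

The stratified split keeps the 80/20 proportion within each of the three motor conditions; the paper does not state whether its own split was stratified.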
Result and Discussion

This research examines the application of the SVM model to diagnose induction motor faults using current and vibration signals. It covers a comparative analysis of current and vibration signals acquired at two distinct operating speeds, and it assesses the accuracy of the various SVM kernels while also taking into account the use of different numbers of statistical features. The objective is to identify the number of statistical features that yields the best performance for the diagnosis of induction motor faults. By comparing both types of signals and exploring the SVM kernel options, the study aims to determine the most effective combination of features and classifier to enhance the accuracy and reliability of motor fault diagnosis in real-world applications.

Current Signal

Fig. 3 shows the current signal at a speed of 749 rpm, and Fig. 4 shows the current signal at a speed of 1499 rpm.

Speed of 749 rpm. The motor condition is divided into three classes: normal motor, motor with a bearing fault, and motor with a stator fault. The result of the SVM classification of the single fault, 3 classes, using the current signal at 749 rpm with the linear kernel is shown in the scatter plot of Fig. 5: the induction motor in normal condition is shown in orange, the motor with bearing faults in blue, and the motor with stator faults in red, and the model separates the classes well. The training-data confusion matrix in Fig. 6 shows that the model's accuracy is 100%; no errors occur for the normal, bearing fault, or stator fault conditions. The model is then tested on the testing data; the testing-data confusion matrix in Fig. 7 likewise shows an accuracy of 100%, with no errors. The results of the SVM classification with the other kernels are presented in Table II, which shows good results for every SVM kernel and every number of features: the classification accuracy is 100% on both training and testing data, indicating that the SVM model can predict and classify the data with a very high success rate.

Speed of 1499 rpm. The results of the SVM classification of the single fault, 3 classes, on the current signal at 1499 rpm are shown for the linear kernel in the scatter plot of Fig. 8: the normal condition is shown in orange, bearing faults in blue, and stator faults in yellow; the kernel separates the classes well. The training-data confusion matrix in Fig. 9 shows an accuracy of 100%, with no errors for the normal, bearing fault, or stator fault conditions. The testing-data confusion matrix in Fig. 10 also shows an accuracy of 100% with no errors. The results with the other kernels are presented in Table III, which again shows 100% accuracy on training and testing data for every kernel and every number of features, confirming that the SVM model classifies the data with a very high success rate.

The optimal number of features for diagnosing the faults from the current signals is examined with a bar chart of average accuracies, each bar combining the accuracy at 749 rpm and 1499 rpm. Fig. 11 shows that the average accuracy reaches the maximum of 100% for every feature variation and every SVM kernel, indicating optimal classification performance on the current signals. In this research, 100% accuracy was achieved with current-signal analysis at two different speeds. This highlights the effectiveness of combining VMD, PCA, and SVM for accurately classifying single mechanical and electrical faults in induction motors, outperforming the 97.6% accuracy attained by Ali et al. [7] using a fine Gaussian kernel, and underscores the potential of the proposed approach for fault classification in induction motors.

Vibration Signal

Fig. 12 shows the vibration signal at a speed of 749 rpm, and Fig. 13 shows the vibration signal at a speed of 1499 rpm.

Speed of 749 rpm. The result of the SVM classification of the single fault, 3 classes, on the vibration signal at 749 rpm using the cubic SVM is shown in the scatter plot of Fig. 14: the normal condition is shown in orange, bearing faults in blue, and stator faults in yellow. The classes are separated fairly clearly, with the misclassified data lying in the stator fault class. The training-data confusion matrix in Fig. 15 shows an accuracy of 99.3%; 5 data points in the normal class are predicted as the stator fault class. The model is then tested on the testing data: the testing-data confusion matrix in Fig. 16 shows an accuracy of 100%, with no errors for the normal, bearing fault, or stator fault conditions. The results with the other kernels are presented in Table IV. With 4 features, the highest accuracy is obtained with the fine Gaussian kernel: 99.2% on training data and 100% on testing data. With 5 features, the best kernel is the cubic kernel, with 99.3% on training data and 99.4% on testing data. With 6 features, the best kernel is the cubic kernel, with 99.3% on training data and 100% on testing data. With 7 features, the best kernel is the cubic kernel, with 99.3% on training data and 99.4% on testing data. With 8 features, the best kernel is the fine Gaussian kernel, with 99.2% on training data and 100% on testing data. The SVM model therefore predicts and classifies the vibration data at this speed with a good level of success.

Speed of 1499 rpm. The results of the SVM classification of the single fault, 3 classes, on the vibration signal at 1499 rpm are shown for the fine Gaussian kernel in the scatter plot of Fig. 17: the normal condition is shown in orange, bearing faults in blue, and stator faults in yellow; misclassified data occur in all three classes. The training-data confusion matrix in Fig. 18 shows an accuracy of 91.1%: 17 data points in the bearing fault class are predicted as the normal class, 8 in the bearing fault class as the stator fault class, 1 in the normal class as the bearing fault class, 13 in the normal class as the stator fault class, 7 in the stator fault class as the bearing fault class, and 18 in the stator fault class as the normal class. The testing-data confusion matrix in Fig. 19 shows an accuracy of 90%: 10 data points in the bearing fault class are predicted as the normal class, 1 in the bearing fault class as the stator fault class, 1 in the normal class as the bearing fault class, 1 in the normal class as the stator fault class, 1 in the stator fault class as the bearing fault class, and 4 in the stator fault class as the normal class, so 18 data points in total are misclassified. The results with the other kernels are presented in Table V. With 4 features, the highest accuracy is obtained with the fine Gaussian kernel: 88.1% on training data and 88.3% on testing data. With 5 features, the best kernel is again the fine Gaussian kernel, with 91.4% on training data and 86.1% on testing data. With 6 features, the fine Gaussian kernel gives 89.4% on training data and 93.3% on testing data. With 7 features, the fine Gaussian kernel gives 91.1% on training data and 90% on testing data. With 8 features, the fine Gaussian kernel gives 90.8% on training data and 91.1% on testing data.

The optimal number of features for the vibration signals is examined with the bar chart in Fig. 20, where each bar combines the average accuracy at 749 rpm and 1499 rpm. Fig. 20 shows that the average accuracy of the SVM model fluctuates across the kernel functions. The highest accuracy is achieved when 6 features are used, for almost all kernels except the linear kernel; the highest average accuracy is 93.9%, obtained with the fine Gaussian kernel. This result is lower than that obtained by Dong et al. [16], who achieved 99.1% accuracy by combining VMD, PCA, and SVM for bearing fault classification. These results also show that, for induction motor fault diagnosis with the SVM model, the current signals give markedly better accuracy than the vibration signals.

Conclusions

Fault diagnosis of induction motors based on current and vibration signals using the SVM model at speeds of 749 and 1499 rpm has been carried out in this study. The number of features was varied in order to ascertain the optimal way of using these features in the analysis of the current and vibration signal data. The current signal produces a maximum result of 100% for every number of features and every SVM kernel used, while for the vibration signal the best results are obtained with 6 features, and the fine Gaussian kernel has the highest average accuracy among the kernels.

Figure and table captions:
Fig. 2. Preprocessing data using VMD on current and vibration signals.
Fig. 5. Scatter plot of single fault 3 classes on current signal at 749 rpm.
Fig. 6. Confusion matrix training data of single fault 3 classes on current signal at 749 rpm.
Fig. 7. Confusion matrix testing data of single fault 3 classes on current signal at 749 rpm.
Fig. 8. Scatter plot of single fault 3 classes on current signal at 1499 rpm.
Fig. 9. Confusion matrix training data of single fault 3 classes on current signal at 1499 rpm.
Fig. 10. Confusion matrix testing data of single fault 3 classes on current signal at 1499 rpm.
Fig. 11. Average accuracy of single fault 3 classes on current signals.
Fig. 14. Scatter plot of single fault 3 classes on vibration signal at 749 rpm.
Fig. 15. Confusion matrix training data of single fault 3 classes on vibration signal at 749 rpm.
Fig. 16. Confusion matrix testing data of single fault 3 classes on vibration signal at 749 rpm.
Fig. 17. Scatter plot of single fault 3 classes on vibration signal at 1499 rpm.
Fig. 18. Confusion matrix training data of single fault 3 classes on vibration signal at 1499 rpm.
Fig. 19. Confusion matrix testing data of single fault 3 classes on vibration signal at 1499 rpm.
Fig. 20. Average accuracy of single fault 3 classes on vibration signals.
Table 2. SVM single fault 3 classes on current signal at 749 rpm.
Table 3. SVM single fault 3 classes on current signal at 1499 rpm.
Table 4. SVM single fault 3 classes on vibration signal at 749 rpm.
Table 5. SVM single fault 3 classes on vibration signal at 1499 rpm.