| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
233939840 | pes2o/s2orc | v3-fos-license | The Relationship between Green Supply Chain Management and Profitability
In today's business environment, companies face increased pressure from different sources, such as consumers and communities, stricter governmental regulations, and scarcity of resources, to enhance their sustainable behavior. This escalating global awareness of the impact of manufacturing and operational processes on the environment has translated into increased managerial action to balance economic, financial, and environmental performance. Green Supply Chain Management (GSCM) has emerged as an important organizational philosophy for achieving targeted economic and market objectives while reducing environmental risk, cutting cost, optimizing resources, and enhancing operations throughout the supply chain. Green supply chain management practices, activities, and their impact on different aspects of corporate performance have been gaining increased attention from academia, industry, and customers. The current research investigates the effect of four GSCM practices (Greenhouse Emissions, Recycling, Waste, and Renewable Energy) on firms' profitability. Results showed that waste had a significant negative impact on profitability measured by return on equity, implying that companies should strive to reduce their waste in order to increase their profitability.
Introduction
It has been acknowledged that the supply chain is an interconnected system that incorporates the entire sequence of activities, from dis-
Literature Review
The deterioration of the environment led to the release of the concept and principle of green supply chain management, which is considered the most recent environmental strategy. To implement this strategy, organizations should integrate internal practices and procedures with external practices involving material suppliers and wholesalers in order to deliver high-quality value to end consumers. Meanwhile, over the previous decade, societies everywhere have awakened to the issue of environmental change.
Accordingly, Green supply chain management practices and their impact on various aspects of corporate performance have been gaining increased attention from academia, industry, and customers. Hence, the focus of this research is to investigate the impact of the green supply chain practices on firms' profitability. This research aims to fill the research gap identified as a lack of empirical evidence on the investigated point.
The literature reviewed shows that the effect of GSCM practices on different parts of corporate performance has gained expanded consideration from both academics and practitioners; this research therefore also aims to give proposals and recommendations to academics and practitioners [2].
The aim of the research is to develop a framework of green supply chain management that can help organizations improve their GSCM practices. Based on this aim, the research objective is set as exploring the impact of GSCM practices (Greenhouse Emissions, Recycling, Waste, and Renewable Energy) on profitability (ROE).
Green Supply Chain Management (GSCM)
It has been claimed that GSCM is also known as environmental supply chain management (ESCM) or sustainable supply chain management (SSCM). All of these concepts are defined as the integration of green purchasing, green manufacturing/materials management, green distribution/marketing, and reverse logistics into organizational operations [3].
Another definition of green supply chain management is the one introduced by [4], who defined environmental supply chain management as the integration of management of the flow of material and information through the supply chain so that customers are satisfied with their green products and services. Such supply chains strive for internal health by using their ability to self-correct based on information from the external environment.
In addition, it has been claimed that sustainable supply chain management emerged as the main approach for organizations to operate in an environmentally sustainable manner; such organizations increasingly merge environmental practices into their plans and strategies. It has also been claimed that the definitions and concepts related to environmental and ecological supply chain management are usually understood by organizations and industries in terms of environmental performance [5].
Moreover, the traditional definition of supply chain management is the process of converting raw material into final products and delivering those products to consumers. In light of recent environmental changes, however, the concept of supply chain management has been enhanced so that the green supply chain comprises many factors that improve the environmental and ecological supply chain and provides a technique for achieving it. Thus, the green supply chain is the new form of the traditional supply chain, in which many activities are maintained to reduce environmental impacts; these activities include green purchasing, green design, reduction of damaged material, and product recycling.
Furthermore, the green supply chain has also been defined as the integration of environmental thinking into traditional supply chain management in order to balance the environmental and financial performance of organizations [7]. GSCM has gained popularity and attention as more countries consider the environmental and economic effects of organizational operations and as public awareness of environmental safety rises [6].
Meanwhile, it is also necessary for organizations to green their audit management systems to ensure that suppliers and vendors meet the quality standards of products and raw materials; this helps suppliers develop a sound understanding of the industry's environmental strategy. In addition, the implementation of innovative green practices encompasses environmental management system (EMS) implementation and usage, green procurement strategies, green product development and design practices, and the adoption and utilization of environmentally friendly products and processes.
Greenhouse Emissions and Firm Profitability
Recently, environmental change and environmental strategy have been higher on the minds of shoppers and consumers around the globe than at any other time, and scholars, top administration, and management have identified environmental change and carbon management as a business reality [9].
Within the context of a carbon- and greenhouse-emissions-constrained business future, there is extraordinary uncertainty over how a move to a low-carbon business market will play out and how to reduce greenhouse emissions [9]. Many academics and managers are greatly concerned with environmental strategy, especially carbon management, and researchers have sought new approaches to incorporate carbon emissions into the production network and supply chain management, since environmental change and carbon emissions present difficulties to numerous businesses and industries that must expand their comprehension of how to coordinate carbon emissions across supply chain management, inventory, and production processes [9].
Besides, greenhouse emission reduction has been introduced as a tool that helped in reducing the environmental impact of the supply chain process, especially in the manufacturing phase to produce cleaner goods and services.
The main keys to successful greenhouse emission reduction are an integrated system of management commitments, employee awareness, and training. Management commitment refers to managers' responsibility for planning, implementing, and controlling corrective processes to enhance the ecosystem, and for creating innovations, technologies, programs, and activities that encourage employees to work harder, reduce wasted materials, produce goods in a cleaner way, reduce product pollution, and enhance organizational profitability. It is also one of the main procedures that improve and strengthen the profitability and efficiency of firms [2]. Consequently, this research proposes the following main hypothesis: H1: There is a significant relationship between Greenhouse Emissions and a firm's profitability measured by ROE.
Recycling and Firm Profitability
Considerable attention has been devoted to design for disassembly and design for recycling by most large corporations that apply GSCM. Recycling has been identified as one of the main green supply initiatives, as well as one of the main procedures applied by enterprises in implementing GSCM practices [10]. [11] found that, among all the recorded recycling procedures, design for recycling and design for disassembly appear to be the least studied, and presented these two techniques as the principal purpose of recycling procedures. The first, design for disassembly, focuses on diminishing the cost of dismantling a product, which can in turn lead to improved recycling and reuse of the product itself, or parts thereof. As an outcome, the waste streams related to the product are diminished and the impacts related to producing a new product or parts are reduced.
The second, design for recycling, focuses on utilizing more recycled materials in the manufacturing and assembly processes and on making products simpler to reuse and recycle. Consequently, the impact of the product is diminished through the collection and recycling of certain materials, which have a smaller environmental and ecological impact than producing an equivalent amount of the same material from primary resources. Design for recycling is regularly a highly effective strategy, giving a considerable reduction in environmental and ecological effects while simultaneously permitting savings in the consumption of natural resources, which increases organizational profitability [11]. Consequently, this research proposes the following main hypothesis: H2: There is a significant relationship between Recycling and a firm's profitability measured by ROE.
Waste and Firm Profitability
Most large profitable companies aim to grow more effectively and establish new strategies to improve their green brand, in order to create a value proposition in customers' minds and a competitive advantage that differentiates them from competitors. A green brand refers to a company that provides eco-friendly products and services that serve environmental protection, and that targets customers who focus on healthy products and services, are interested in eco-friendly products, and care about reducing the waste of production processes. Therefore, the supply chains of such firms comprise the activities related to manufacturing, from raw material procurement to final product delivery, with a focus on eco-friendly products and on reducing the waste of their manufacturing facilities [12].
Owing to recently changed environmental requirements and agreements that influence production activities and transportation structures, growing consideration is being given to the development of environmental administrative procedures and strategies for supply chains. Green supply chains aim at linking wastes to the industrial structure and systems in order to save energy and forestall the dissipation of unsafe and harmful materials into the surrounding environment. Therefore, any link or channel in the supply chain that could generate waste or hazards to the environment should be eliminated or reduced as one of the practices of GSCM [13].
It has also been noticed that most managers and executives who apply GSCM practices in their facilities focus on reducing emissions and wastes, especially mercury waste and emissions, and on improving their environmental performance by implementing recycling procedures and programs such as recycling and materials recovery. Therefore, another hypothesis can be formulated as follows: H3: There is a significant relationship between Waste and a firm's profitability measured by ROE.
Renewable Energy and Firm Profitability
A review of previous studies found that the relationship between green logistics and energy demand is of great importance within the GSCM process, since sustainability from the point of view of green logistics indicators requires sustainable and renewable sources of energy to diminish the destructive impact of worldwide logistics activities on nature and the environment. It has been established that GSCM practices are a set of procedures to enhance environmental sustainability. It has also been shown that environmental sustainability can be improved by reducing carbon emissions, and one of the main methods used to reduce carbon emissions is increasing the share of renewable energy in total energy consumption [14].
Green development can be achieved by utilizing cleaner or green energy as one promising solution, supported by government regulations that promote cleaner technology in industrial activities and the production process. Other empirical research conducted in China has clarified that Chinese firms keep attempting to improve their ecological and environmental image with cleaner production, sustainable power sources, and renewable energy. In other words, GSCM practice is concerned with the energy resources that are vital to power industrial processes in manufacturing, assembly, and logistics, while their utilization is also a major contributor to carbon emissions and waste [1]. It has also been noticed that firms' profitability and market reputation have increased significantly after firms used renewable energy sources and applied green practices in their production processes [14], [1]. Consequently, this research proposes the following main hypothesis: H4: There is a significant relationship between Renewable Energy and a firm's profitability measured by ROE.
Research Methodology
Business research is shaped, like the social sciences at large, by intellectual traditions; it is thus elaborated within the context of the social science disciplines that inform the study of business and its specific fields. It has been defined as an applied field that focuses on the nature of organizations and on solving problems facing managers and related to managerial practice. It is essential to be clear in defining theory, which is an explanation of observed regularities or of the relationship between two or more variables. In the following sections, the researcher identifies the methodology used to examine and test the hypotheses and explains the results obtained by testing the research sample in the application of the green supply chain [15].
This research follows the positivism philosophy to understand the structures that generate events of the green supply chain by identifying its perspectives and different dimensions. In addition, the researcher follows this philosophy to gain the advantage of filling the gaps between existing findings and building the research hypotheses objectively. This philosophy allows the researcher to test the hypotheses according to theories of the green supply chain and to provide further findings on the relationship between the green supply chain and profitability indicators.
The research process is devoted to explaining relationships between variables, which helps the researcher find the best justification for the findings of this research. The researcher follows a deductive approach, as this research clearly defines the dimensions used in different theories as well as the extent to which such dimensions are applied in practice. Thus, the researcher examines the effect of green supply chain practices on firm profitability [15].
Since the research approach adopted in this study is deductive, a quantitative research design is the proper choice for examining relationships between variables. Accordingly, the next section presents the types of data used, the ways they were collected, and the methods applied, describing the data collection in detail so that the tools used for this research can be specified [16].
Data Collection and Sample Selection
This research relies on secondary data to measure profitability; secondary data observation is used as the data collection tool, which is a method for quantitative data. The researcher collected historical secondary data from corporate annual and sustainability reports. Variables and Measurement: The variables used in this study fall into two main types, dependent and independent. The dependent variable is profitability, measured by Return on Equity (ROE). ROE is treated as an important measure of a company's earnings performance: it tells common shareholders how effectively their money is being employed, and with it one can determine whether a firm is a profit-creator or a profit-burner and gauge management's profit-earning efficiency. The higher a company's return on equity, the better management is at employing investors' capital to generate profits.
Independent Variables: These are factors identified in prior research as GSCM practices. Four independent variables are measured: Greenhouse Emissions, Recycling, Waste, and Renewable Energy. This research examines the possible relationships between the dependent variable and the independent variables, as explained in Figure 1. Table 1 presents a summary of the conceptualization of all the variables of the study, both dependent and independent.
Findings and Analysis
This section of the study presents the results of the analysis performed on the collected data to test the hypotheses developed in the study. Table 2 shows the descriptive statistics of the research variables, providing the mean, minimum, maximum, and standard deviation of each. The data show that the mean value for Greenhouse Emissions is 5.088, the mean value of Recycling is 52.295, the mean value for Waste is 190.905, the mean value for Renewable Energy is 126.991, and the mean return on equity (ROE) is 0.113. (From Table 1: Waste is anything that adds adverse effects to the environment without adding value [19], measured using the tons of resources discarded or unused throughout the year [22]; Renewable Energy is energy that can be utilized over and over [20], measured using the megawatts of energy generated from renewable resources [22]; the dependent variable, Profitability, is ROE, which reveals how much profit an organization creates with the money investors have contributed [21], measured as the percentage of net income after tax to total equity, i.e. ROE = Net Income / Total Equity [21].)
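To make the descriptive step concrete, the sketch below shows how such a summary table can be produced in R; the data are simulated placeholders, not the study's actual panel.

```r
# Hypothetical sketch: descriptive statistics as in Table 2, using
# simulated placeholder data rather than the study's actual panel.
set.seed(1)
n <- 72  # e.g. 12 companies observed over 6 years (an assumption)
gscm <- data.frame(
  emissions = rnorm(n, mean = 5, sd = 1),
  recycling = rnorm(n, mean = 52, sd = 10),
  waste     = rnorm(n, mean = 191, sd = 40),
  renewable = rnorm(n, mean = 127, sd = 30)
)
gscm$roe <- 0.2 - 0.0005 * gscm$waste + rnorm(n, sd = 0.05)  # toy ROE

# Mean, minimum, maximum and standard deviation per variable
descriptives <- data.frame(
  mean = sapply(gscm, mean),
  min  = sapply(gscm, min),
  max  = sapply(gscm, max),
  sd   = sapply(gscm, sd)
)
round(descriptives, 3)
```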
Testing the Research Hypotheses
The research hypotheses were tested using OLS regression. All regression assumptions were tested and verified: the model was found to be linear, no heteroscedasticity was found, and no autocorrelation or multicollinearity exists. Table 3 shows the summary of the models from the backward stepwise regression, in which the final model has an R-squared of 8.5%, meaning that 8.5% of the variation in ROE is explained by Waste, with a significance of 0.042. Table 4 shows the multiple regression model of the effect of Greenhouse Gas Emissions, Recycling, Waste, and Renewable Energy on ROE. Based on the research results, the third hypothesis was accepted, indicating a significant negative relationship between waste and organizational profitability, while the other hypotheses were proven insignificant. Table 5 summarizes the results of the hypotheses.
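As an illustration of this testing procedure, the following R sketch fits the OLS model, runs standard assumption checks, and performs a backward stepwise reduction. It reuses the simulated `gscm` data from the previous sketch; note that `step()` selects by AIC rather than the p-value criterion a package such as SPSS would use.

```r
# Hedged sketch of the OLS workflow, reusing the simulated `gscm`
# data frame from the previous example.
library(lmtest)  # bptest(), dwtest()
library(car)     # vif()

full_model <- lm(roe ~ emissions + recycling + waste + renewable, data = gscm)

bptest(full_model)  # Breusch-Pagan test: heteroscedasticity
dwtest(full_model)  # Durbin-Watson test: autocorrelation
vif(full_model)     # variance inflation factors: multicollinearity

# Backward stepwise reduction analogous to the reported final model
final_model <- step(full_model, direction = "backward", trace = 0)
summary(final_model)  # R-squared and coefficient significance
```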
Discussion and Conclusions
It is essential to study and understand the GSCM factors that influence a firm's profitability and to examine the role of GSCM practices in enhancing it. According to the findings, waste has a large effect on the profitability of organizations, which is confirmed by many previous studies. In the green paradigm, for instance, there are many areas where waste occurs, and resources remain to mitigate it, which will ultimately affect organizations' environmental and economic efficiency. Profits are thus achieved for all layers of the supply chain, as companies work together to improve the production and production efficiency of their goods, contributing to overall waste reduction [12]. In addition, through the waste that occurs during manufacturing, transport, distribution, and disposal, the supply chain adversely impacts the environment, and here the aim of green supply chain management is to mitigate and avoid environmental harm. Therefore, suppliers are encouraged to comply with the required environmental standards in order to achieve supply chain goals. This relationship represents the strong correlation between green buying and better organizational profitability.
In the researcher's view, the findings show that the minimization of waste is perceived to be an effective green strategy activity, since waste is viewed as a non-value-adding practice and treated as an environmental enemy; processing and production are considered the key causes of environmental disruption through waste generation, habitat disturbance, and natural resource depletion. In addition, the outcome of the research indicates that waste management contributes to improving production and gaining a strategic edge, as well as improving financial results through more efficient use of raw materials and resources and decreased waste and disposal.
Regarding the application of the other activities (greenhouse emissions, recycling, and renewable energy), the insignificant relationship between these variables and profitability indicates that organizations are unable to achieve higher profits by implementing these practices alone. However, such activities may be mandated and valued by stakeholders such as governments and consumers and may have a long-term impact on firm financial performance.
Implications of the Study
The examination of the research findings provided insight into the nature and magnitude of the relationship between various GSCM practices and firms' financial performance. The findings point to several managerial implications for improving financial performance indicators through the use of GSCM practices.
The findings of the study should help managers strategically plan their green supply chain practices and link those practices to the organization's financial performance. The key managerial implication emanating from the research results is the need to establish techniques and instruments that promote and drive the extension of environmental protection to worldwide supply chain acceptance and implementation. Managers may also use analysis models to reliably forecast how the introduction of diverse green supply chain management practices will affect corporate financial metrics. In addition, managers should be more concerned with reducing their organizations' waste in order to increase profitability.
Limitations of the Research
As in most empirical studies, there are some limitations to this research that might prevent generalizing the findings. One of the main limitations is the small sample size of twelve companies, in addition to the short time frame covering six years. Moreover, only organizational profitability was examined, using a single indicator, namely ROE.
Suggestions for Future Studies
Since this study covered six years from 2014 to 2019, further research can consider a longer time frame, as the study was limited by the availability of data in the annual and sustainability reports. In addition, a comparison across different industries and between regions around the world can be conducted to examine differences in GSCM practices implementation. Another avenue for further research is to include more factors that affect the relationships between GSCM practices and firm profitability, such as corporate sustainable growth and corporate size, in order to reach a significant relationship between dependent and independent variables.
In addition, future research may examine organizational financial performance as a whole, including profitability, liquidity, leverage, efficiency, and market value, to gain a full understanding of how GSCM practices impact the overall financial performance of corporations. | 2021-05-08T00:03:18.370Z | 2021-02-22T00:00:00.000 | {
"year": 2021,
"sha1": "26206974f749f866d4707ebb8b3333aa080fe8e5",
"oa_license": null,
"oa_url": "https://doi.org/10.4236/oalib.1105892",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1bb8547120efb093bc8473d769d13a79a5e15386",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
247332723 | pes2o/s2orc | v3-fos-license | Psychometric Properties of the Maslach Burnout Inventory Adaptation and Validation among Moroccan Mathematics Teachers
Abstract— The teaching profession is particularly vulnerable to the development of stress and burnout syndrome; this research adapts the Maslach Burnout Inventory (MBI) to members of middle and high schools in Morocco. The study aims to evaluate the psychometric properties of the MBI in terms of validity, reliability, and sensitivity. The sample consists of 218 mathematics teachers working in public schools in the Marrakech-Safi region (Morocco). The psychometric properties were examined by the following analyses: confirmatory factor analysis (CFA) to test the validity of the statistical model, reliability (Cronbach's alpha), and exploratory factor analysis (EFA) to extract factors and assess dimensionality. Cronbach's α reliability analysis gives a value of 0.814, and principal component analysis with varimax rotation gives three factors explaining 54.22% of the total variance. The three dimensions give values well above the accepted minimum reliability threshold, namely 0.847 for emotional exhaustion, 0.766 for depersonalization, and 0.720 for the sense of personal accomplishment. The overall adjustment of the model is very satisfactory, reaching 0.71. The results indicate that the main psychometric properties of reliability and validity of the theoretical MBI model appear to be satisfactory for the study of burnout syndrome in the cultural context of Moroccan teachers.
Introduction
The burnout syndrome is a set of reactions resulting from situations of chronic professional stress in which the commitment dimension is predominant. It manifests itself in three forms: emotional exhaustion, depersonalization and a decrease in personal accomplishment at work [1].
Different instruments for measuring burnout have been developed to account for this syndrome and to make comparisons between studies. The most widely used instrument is still the Maslach Burnout Inventory (MBI), which measures the three dimensions of the syndrome previously defined by Maslach. The MBI questionnaire has been adapted to apply not only to human service professions but to all types of professions in general. An updated definition of burnout, constructed from the latest version of the MBI, is that proposed by Maslach [2].
However, several studies of the MBI's psychometric properties in different cultures and industries have yielded varied results, and cross-cultural comparisons of burnout diagnosis among teachers differ considerably [3]. A brief look at the working conditions of Moroccan teachers reflects the fact that their profession has a significant painful dimension.
Faced with very heavy demands and the non-recognition of the hardship of their profession, Moroccan teachers report proven psychological fragility, developing various pathologies at work [4].
The Moroccan government has initiated numerous educational reforms. These reforms have made it possible to achieve certain notable advances such as the improvement of infrastructure, pedagogical and didactic equipment and the social conditions of teachers [5].
However, the workload and emotional burden of teachers continue to increase. Similarly, the physical working conditions and these work demands can jeopardize the health of Moroccan teachers [6], [7].
Indeed, to protect the psychological health [8] of these human resources, it is necessary to produce a burnout diagnostic tool valid for the teachers' cultural context.
The purpose of this study was the translation and psychometric validation of the MBI for the Moroccan population, so that it becomes a useful tool in the assessment of burnout in Moroccan teachers.
Material and method
Burnout was assessed with the Maslach Burnout Inventory, which comprises 22 items, each with a 7-point, Likert-type frequency response scale (0 = never, 1 = a few times a year or less, 2 = once a month or less, 3 = a few times a month, 4 = once a week, 5 = a few times a week, 6 = every day) [10].
Translation procedure
The translation of the MBI was carried out by two bilingual people working in the field of education: an associate professor of the French language working in the Regional Center for Education and Training (CRMEF) in Safi (Morocco) and a language teacher working in a public high school. The questionnaires were distributed online via Google Forms and the socio-demographic data were recorded; this process was followed by a pre-test of the instrument with 76 teachers to verify the questionnaire and validate the measurement scale.
Participants
The validation of the instrument was carried out with a sample of 218 mathematics teachers working in the public sector, from 16 high schools and 30 public middle schools in the city of Safi (Marrakech-Safi region, Morocco). The sample comprised 39% women and 61% men; the average age was 38 ± 22; 61% were married and 37% single; 58% held a bachelor's degree, 22% a master's degree, and 2% a PhD, with seniority varying between 3 and 15 years.
Statistical analyses
Statistical analyses were performed using SPSS V25 software (IBM Corporation, Armonk, NY) and SmartPLS v3 Pro software [11]. The feasibility of the instrument was tested through principal component analysis (PCA), which allows the measurement scale to be cleaned to achieve acceptable reliability according to the Cronbach's α indicator; reliability was examined by calculating the same coefficient for the three dimensions. The data were subjected to an exploratory factor analysis (EFA) to extract the factors and test the dimensionality of the questionnaire. The KMO test, designed by Kaiser, Meyer, and Olkin, was calculated to assess the suitability of the sampling; it is significant when it far exceeds the minimum threshold. Bartlett's test of sphericity is statistically significant if the risk threshold is close to zero (p < 0.05).
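The authors worked in SPSS and SmartPLS; purely as a hedged illustration, the same feasibility checks can be reproduced with the R psych package, here on simulated item responses rather than the study's data.

```r
# Illustrative sketch (not the authors' SPSS/SmartPLS workflow):
# equivalent checks with the R psych package on simulated responses.
library(psych)

set.seed(1)
mbi <- as.data.frame(matrix(sample(1:5, 218 * 22, replace = TRUE),
                            nrow = 218))  # 218 teachers x 22 items (simulated)

KMO(mbi)                                   # Kaiser-Meyer-Olkin adequacy
cortest.bartlett(cor(mbi), n = nrow(mbi))  # Bartlett's test of sphericity
alpha(mbi)                                 # Cronbach's alpha, including
                                           # "alpha if item dropped" output

# Principal component analysis with varimax rotation, three factors
pca <- principal(mbi, nfactors = 3, rotate = "varimax")
print(pca$loadings, cutoff = 0.4)
```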
To model the structure of the links between the data, we used structural equation modelling (SEM). The coefficient of determination (R²) estimates the share of variation in our instrument explained by the three explanatory dimensions [12]. The predictive relevance (Q²) of the structural model is satisfactory if it exceeds the minimum accepted threshold of zero. The quality of adjustment is calculated by the Goodness of Fit index (GoF), which makes it possible to judge to what extent the theoretical structural model corresponds to the empirical data.
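For readers unfamiliar with the GoF index, it is commonly computed (following Tenenhaus and colleagues) as the geometric mean of the average communality and the average R²; the sketch below uses placeholder values, not the study's actual figures.

```r
# GoF as commonly defined for PLS models: the geometric mean of the
# average communality and the average R-squared. Placeholder values only.
avg_communality <- 0.55  # mean communality across reflective indicators
avg_r_squared   <- 0.90  # mean R-squared across endogenous constructs
gof <- sqrt(avg_communality * avg_r_squared)
gof  # values above roughly 0.36 are usually read as a good fit
```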
Internal consistency of theoretical dimensions and deletion of items
In our study, we measured burnout using the Maslach Burnout Inventory (MBI) instrument with 22 items on a 5-point measurement scale (1: not at all agree, 5: totally agree). Cronbach's α reliability analysis gives a value of 0.626; this result forced us to delete six items (6, 7, 9, 14, 21, and 22) that are inconsistent with the other items, in order to increase the alpha value to 0.814 (see Table 1). The results show that the alpha indicator improved significantly after deleting the items that reduced reliability. Therefore, we can say that the remaining items effectively measure our model.
Exploratory factor analysis (EFA)
Principal component analysis with varimax rotation gives three factors whose eigenvalues are greater than one, together explaining 54.22% of the total variance. The three dimensions are well defined. The first factor, which includes four elements constituting the Depersonalization dimension, explains 34.14% of the total variance. The second factor has seven elements constituting the Emotional Exhaustion dimension, with 12.94% of the total variance; the third has five items constituting the Personal Accomplishment dimension, which explains 7.175% of the total variance (see Table 2).
The principal component analysis identified the 3 dimensions that make up the Maslach Burnout Inventory specific to Moroccan teachers: the EE dimension (7 items), the DP dimension (4 items), and the PA dimension (5 items). Each item shows a strong correlation with its own dimension. The item analysis gives significant results as to the validity and reliability of the scales; we obtained a significant KMO index of 0.889, a value that far exceeds the accepted minimum. In addition, Bartlett's test of sphericity gives a statistically significant value at the 5% risk threshold [13] for all items in the Maslach Burnout Inventory, with a significance of 0.000, which is below the threshold of 0.05. As for the Cronbach's α values, the measurement items of the MBI dimensions (EE, DP, and PA) give values well above the accepted minimum reliability threshold: 0.847 for EE, 0.766 for DP, and 0.720 for PA.
The total variance explained shows the share of each dimension in the formation of the theoretical MBI model: the EE dimension gives a variance of 36.73%, followed by the DP dimension with 11.90%, and finally the PA dimension with 7.10%.
Analysis of the structure of the dimensions of the Maslach Burnout Inventory
In order to identify the relationships between the dimensions of the Maslach Burnout Inventory, we calculated the score for each dimension. The results show that the Depersonalization (DP) and Emotional Exhaustion (EE) dimensions are positively correlated with each other, with a value of 0.625; on the other hand, there is a negative relationship between the Personal Accomplishment (PA) dimension and both EE and DP. This means that these dimensions vary in opposite directions, with moderate correlation coefficients (-0.423 and -0.491). The analysis of variance by sex and age for all three dimensions shows that only the relationship between gender and DP is significant (see Table 3).
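A minimal sketch of these score-level analyses in R follows, reusing the simulated `mbi` items from the earlier sketch; the item-to-dimension assignments and the sex variable are illustrative assumptions.

```r
# Hypothetical sketch: dimension scores, their correlations, and an
# ANOVA by sex, reusing the simulated `mbi` data from above.
scores <- data.frame(
  ee  = rowMeans(mbi[, 1:7]),    # emotional exhaustion items (assumed)
  dp  = rowMeans(mbi[, 8:11]),   # depersonalization items (assumed)
  pa  = rowMeans(mbi[, 12:16]),  # personal accomplishment items (assumed)
  sex = factor(sample(c("F", "M"), nrow(mbi), replace = TRUE))
)

cor(scores[, c("ee", "dp", "pa")])     # pairwise dimension correlations
summary(aov(dp ~ sex, data = scores))  # significance of sex for DP
```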
Structural equation model analysis (SEM)
To test our model we chose the PLS structural equation method. Structural equation models (SEM) are multivariate models used to model patterns in data; the interest of structural equation modelling lies essentially in its ability to simultaneously test the existence of causal relationships between several latent variables [14].
To evaluate a reflective structural model, a set of indicators must be calculated [15]. The coefficient of determination (R²) estimates the share of variation in the Maslach Burnout Inventory explained by the three explanatory dimensions; for our structural model, composed of the three dimensions (EE, DP, and PA), the explanatory power is estimated at 0.993 (see Table 4), which means that the three dimensions effectively measure the variation of the instrument (MBI) and are important factors in its formation. The predictive relevance (Q²) of the structural model gives satisfactory results that exceed the accepted minimum threshold, showing good predictive relevance of our MBI model. In addition, the overall adjustment of the model is very satisfactory, reaching 0.71, a value that reflects a very good quality of adjustment. This allows us to validate the three-dimensional character of the Maslach Burnout Inventory translated into Arabic and specific to Moroccan mathematics teachers. The 16-item Moroccan version of the MBI specific to teachers proposed by our research is therefore a validated structure (see Figure 1).
Discussion
The main objective of this study was to construct and test the factor structure, internal reliability, sensitivity, and validity of a measurement scale, translated into Arabic, of the burnout of Moroccan teachers, designed according to the three-dimensional theoretical model of the Maslach Burnout Inventory. The tool, composed of 16 items, was validated on a representative sample of mathematics teachers at middle and high schools in the city of Safi, located in western Morocco in the Marrakech-Safi region, using sequential analysis (EFA and CFA). Several validation tests showed the inconsistency of some theoretical items of the 22-item MBI instrument with the characteristics of the samples studied.
Our results, however, contradict previous studies on samples of non-Western teachers (e.g., Abu Hilal et al. (2018) [16]) that confirmed burnout as a multidimensional concept with four dimensions. Similarly, the factor structure of burnout in this survey is remarkably consistent with other studies (such as Won Sunchen et al. (2014) and Abdeslam Amri (2019)) [17], [18]. For our study, we excluded items 6, 7, 9, 14, 21, and 22, and retained a 16-item model.
The inconsistency of these items could be explained by the characteristics of the teaching profession and also by the nature of the mathematics subject taught, which poses great learning difficulty and is very often related to high stress [19].
The exploratory factor analysis retained the three dimensions of the 16-item MBI specific to Moroccan mathematics teachers, with very positive results. In our study, Cronbach's α on the global scale (0.814) indicates good item reliability, with a value that far exceeds 0.7, and the values for emotional exhaustion (0.847), depersonalization (0.766), and the feeling of personal accomplishment (0.720) are satisfactory.
We propose to use this measuring instrument for research in the non-Western Moroccan Educational system for the evaluation of burnout.
Conclusion
To sum up, our study confirmed that the Moroccan version of the 16-item MBI has acceptable psychometric properties, with a well-defined internal structure and very good quality of adjustment. It is valid and appropriate for facilitating future burnout-related studies of Moroccan teachers. | 2022-03-10T16:32:11.341Z | 2022-03-08T00:00:00.000 | {
"year": 2022,
"sha1": "5304d9ad15136d32425f82a4d9234dcf2f477acd",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-joe/article/download/28029/10855",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "224798fad1c609d608548835f98debf76ae7a861",
"s2fieldsofstudy": [
"Mathematics",
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
18834765 | pes2o/s2orc | v3-fos-license | Novel application of heuristic optimisation enables the creation and thorough evaluation of robust support vector machine ensembles for machine learning applications
Today’s researchers have access to an unprecedented range of powerful machine learning tools with which to build models for classifying samples according to their metabolomic profile (e.g. separating diseased samples from healthy controls). However, such powerful tools need to be used with caution and the diagnostic performance of models produced by them should be rigorously evaluated if their output is to be believed. This involves considerable processing time, and has hitherto required expert knowledge in machine learning. By adopting a constrained nonlinear simplex optimisation for the tuning of support vector machines (SVMs) we have reduced SVM training times more than tenfold compared to a traditional grid search, allowing us to implement a high performance R package that makes it possible for a typical bench scientist to produce powerful SVM ensemble classifiers within a reasonable timescale, with automated bootstrapped training and rigorous permutation testing. This puts a state-of-the-art open source multivariate classification pipeline into the hands of every metabolomics researcher, allowing them to build robust classification models with realistic performance metrics.
Introduction
In many areas of biology, machine learning algorithms are used to build models to identify the type, or state, of biological samples from multivariate analytical data. Examples include diagnosis of cancer from vibrational spectra (Sattlecker et al. 2014), confirmation of food authenticity of milk and milk products (Nicolaou et al. 2011), and determination of food freshness (Argyri et al. 2013). The models produced by machine learning algorithms are essentially performing pattern recognition, sometimes referred to more formally as multivariate classification. In the metabolomics community such models have long been used to demonstrate that there is an objectively discernible biochemical difference between sample classes. This is often used to prove a hypothesis, but can also be considered as a first step towards automating the classification of unknown samples, or identifying biomarkers that could be used as the basis of a novel diagnostic test.
There is a large and growing list of machine learning methods available, including linear discriminant analysis (LDA) (Klecka 1980), partial least squares discriminant analysis (PLS-DA) (Wold et al. 2001; Barker and Rayens 2003), artificial neural networks (ANNs) (Hornik et al. 1989;McCulloch and Pitts 1943;Sanger 1989;Yegnanarayana 2009), random forests (Breiman 2001) and support vector machines (SVMs) (Boser et al. 1992;Cortes and Vapnik 1995). Within the metabolomics community, PLS-DA predominates to such an extent that some researchers are not fully aware of the alternatives (Thissen et al. 2004;Szymańska et al. 2012;Gromski et al. 2015). However, other approaches are now gaining ground, with SVMs in particular being successfully applied in metabolomics and beyond (Mahadevan et al. 2008;Liland 2011;Luts et al. 2010). One of the key features of SVMs, as opposed to traditional chemometrics techniques, is the support for both linear and nonlinear prediction models with boundaries of high complexity, which can satisfy the extremely complex nature of metabolomic data (Luts et al. 2010;Xu et al. 2006). Several direct comparisons between SVMs and PLS-DA have shown that SVMs can outperform PLS-DA in terms of prediction accuracy when applied to metabolomics data (Mahadevan et al. 2008;Thissen et al. 2004;Gromski et al. 2015).
Today, building a classification model using any of the aforementioned machine learning methods is technically straightforward thanks to readily available software implementations and an abundance of computing power (Ratner 2011). However, ascertaining a truly representative indication of the classification accuracy for the intended application can be a challenge, potentially leading non-experts to invalid conclusions (Domingos 2012). Overly optimistic assessments of performance are commonplace, leading to classification models that appear to work well in a pilot study often failing when applied to data from a new set of samples.
The most crucial step in supervised learning is the evaluation (testing) process where the generalisation performance of a classifier is assessed on previously unseen data (Geman et al. 1992;Wold et al. 2001;Izenman 2008). The first indicator frequently used to estimate the overall predictive power of a pattern recognition system is the classification accuracy (%CC), which is equal to the percentage of correctly classified samples. Metrics such as sensitivity and specificity, or in cases of multi-class studies the per class accuracies, provide further detail about classification model performance. However, like all performance metrics, the overall classification accuracy, sensitivity and specificity vary substantially according to how exactly the testing is performed. Most metabolomics practitioners are aware that testing a model on exactly the same data that was used to train it is inappropriate because it would lead to perfect training scores (i.e. sensitivities and specificities of 100 %) but would fail to predict new unseen data (Kohavi 1995). Testing with a second data set, totally independent of the training data, is the obvious solution to this problem but proves difficult when limited numbers of samples are available (as is often the case, particularly in clinical studies) and there is a danger of obtaining a fluke result because a single independent test set happens to give particularly good or bad results. This has led to the widespread use of cross-validation (Stone 1974) techniques where testing is performed using mutually exclusive subsets (folds) of the data with approximately equal size, the results of which are combined by averaging. However, cross-validation has been shown to substantially overestimate model performance due to instances of high variance (Kohavi 1995;Westerhuis et al. 2008). Bootstrapping (Efron 1979;Efron and Tibshirani 1994) is therefore the currently preferred solution, whereby new datasets (bootstrap samples) are created from the original data by randomly sampling with replacement. By repeating this resampling process a great number of times, a good estimate of the underlying sampling distribution (Wehrens et al. 2000) can be obtained. More specifically, one of the main advantages of bootstrapping is the fact that it allows robust evaluation of statistical properties (e.g. standard errors, confidence intervals, bias) that would be difficult to obtain analytically (Tichelaar and Ruff 1989;Massart et al. 1997;Wehrens et al. 2000;Liland 2011). However, we must still question whether the model performance obtained is significant compared to random chance. This final step is achieved using permutation testing (Good 2004), whereby the whole model building and testing process is repeated hundreds of times in an attempt to map samples to randomly permuted classes-a model performance that does not differ substantially from performance achieved for the random permutations cannot be considered significant.
From this brief explanation, it is clear that testing procedures have an overwhelming influence on the veracity of performance metrics calculated when applying machine learning and that performing the testing process properly can be laborious and computationally intensive. Indeed, training and rigorous evaluation for a single classification problem requires expert knowledge and can involve training millions of individual classifiers, which can be extremely computationally demanding especially if these classifiers involve complex models such as nonlinear SVMs. To address this issue, we have developed the classyfire R package for the implementation of ensemble SVM training with bootstrapping and rigorous performance evaluation via a handful of high-level functions. The key to this package is a novel solution for optimising SVM hyperparameters that bestows a speed up of more than tenfold compared to the widely applied grid search. We believe that making such a high quality multivariate classification pipeline readily available will improve the quality of metabolomics research by providing a transparent and trusted model building and evaluation workflow that can be used by researchers with limited machine learning experience and inexpensive computer hardware.
Support vector machines
SVMs were chosen for this work because of their proven ability to produce classification models that outperform equivalent PLS-DA models for many metabolomics applications. A detailed explanation of the theory behind SVMs is beyond the scope of this paper (such explanations can be found in Cortes and Vapnik (1995) and Cristianini and Shawe-Taylor (2000)) but, in summary, a SVM attempts to separate classes within the variable space by fitting a hyperplane between different sample groups in a way that produces a low generalisation error while simultaneously aiming to maximise the distance (margin) between the nearest points of the two classes (Bennett and Campbell 2000;Suykens et al. 2002). Because the complexity of most metabolomics datasets makes linear separation between classes impossible, a nonlinear kernel function is typically used to project the data into a higher dimensional feature space where linear separation is theoretically feasible (Chapelle and Vapnik 1999;Cristianini and Shawe-Taylor 2000). Common nonlinear kernels include the radial basis function (RBF, also called Gaussian), polynomial function and sigmoid function (Hearst et al. 1998). Each of these kernels is characterised by a set of hyperparameters that have to be carefully tuned for the specific problem under study (Chapelle et al. 2002). The radial basis function (RBF) kernel is particularly popular and a reasonable first choice (Hsu et al. 2003), especially in cases where there is little or no knowledge about the data under study. The optimisation of RBF SVMs requires the thorough tuning of two hyperparameters-the cost parameter C, which controls the optimal trade-off between maximising the SVM margin and minimising the training error, and the kernel parameter c (gamma), which determines the degree of nonlinearity or width of the RBF kernel. Various methods have been devised to extend the binary classification functionality of SVMs to multi-class cases, usually by dividing a multi-class problem in a series of binary problems (Hsu and Lin 2002;Duan and Keerthi 2005).
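For orientation, a single RBF SVM with explicit C and gamma values can be fitted in R through the e1071 interface to libsvm; the example below uses the built-in iris data purely for illustration and is not the classyfire pipeline itself.

```r
# A single RBF-kernel SVM via e1071 (libsvm), on built-in iris data.
library(e1071)

fit <- svm(Species ~ ., data = iris, kernel = "radial",
           cost  = 2^3,   # C: margin vs. training-error trade-off
           gamma = 2^-5)  # gamma: width of the RBF kernel

mean(predict(fit, iris) == iris$Species)  # training accuracy only
```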
Bootstrap training of RBF SVMs
As mentioned in the introduction, bootstrapping is currently the preferred method for validating classification models because it often gives more representative and robust performance metrics than other validation techniques (Wehrens et al. 2000; Liland 2011). Figure 1 demonstrates how we have implemented bootstrapping in our model building workflow. (Fig. 1: Flow diagram illustrating the overall process of constructing an ensemble of RBF SVMs optimised via bootstrapping; the process is distinctly split into two steps, the training and the testing (evaluation) process.) For a given input dataset D, a random fraction of samples is removed and kept aside as an independent test set during the training process of the model (holdout process). This selection of samples forms the dataset D_test. This test set typically comprises a third of the original samples, and it consists of the same balance of sample classes as the initial dataset D (stratified holdout). The remaining samples that are not selected form the training set D_train. Since the test set is kept aside during the whole training process, the risk of overfitting is minimised (Ramadan et al. 2006). In the case of bootstrapping, a bootstrap training set D_bootTrain is created by randomly picking n samples with replacement from the training dataset D_train. The total size of D_bootTrain is equal to the size of D_train. Since bootstrapping is based on sampling with replacement, any given sample could be present multiple times within the same bootstrap training set. The remaining samples not found in the bootstrap training set comprise the bootstrap test set D_bootTest. In the case of RBF models with bootstrapping, the SVMs are built and optimised using D_bootTrain and D_bootTest for different hyperparameter settings. More specifically, for each given combination of the hyperparameters C and c, a new SVM model is trained with D_bootTrain and tested with D_bootTest. To avoid reliance on one specific bootstrapping split, bootstrapping is repeated at least 100 times until a clear winning parameter combination emerges. Several methods can be used to determine the winning parameter; most commonly, the statistical average or the parameter combination most frequently recorded as optimal is used.
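The D_bootTrain/D_bootTest construction can be sketched in a few lines of R; this is a simplified stand-in for the package's internal routine, again using iris for illustration.

```r
# Sketch of one bootstrap evaluation for a fixed (C, gamma) pair,
# following the D_bootTrain / D_bootTest construction described above.
library(e1071)

boot_error <- function(data, cost, gamma) {
  idx   <- sample(nrow(data), replace = TRUE)  # bootstrap sample
  train <- data[idx, ]                         # D_bootTrain
  test  <- data[-unique(idx), ]                # D_bootTest (out-of-bag)
  fit   <- svm(Species ~ ., data = train, kernel = "radial",
               cost = cost, gamma = gamma)
  mean(predict(fit, test) != test$Species)     # bootstrap test error
}

# Repeat until a stable average emerges (at least 100 times)
errs <- replicate(100, boot_error(iris, cost = 2^3, gamma = 2^-5))
mean(errs)
```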
SVM optimisation and ensembles
The optimisation of the hyperparameters is traditionally implemented using a two-step approach based on a combination of a coarse and fine grid search, where the SVM performance is evaluated at regular intervals across the C-c surface (the ranges are set to C = [2^-5, 2^-3, ..., 2^15] and c = [2^-15, 2^-13, ..., 2^5] respectively) and the best parameter combination used to seed a finer grid search to refine the values of C and c (Hsu et al. 2003; Meyer et al. 2003). This is a relatively slow process, which becomes a particular hindrance when bootstrapping is used since many individual SVMs must be optimised. We have therefore implemented a much faster optimisation strategy based on a constrained nonlinear simplex optimisation (Box 1965), which performs the minimisation of the average bootstrapping test error during the training process of the SVMs within acceptable timescales. In this case, the inequality constraints correspond to the minimum and maximum predefined hyperparameter boundaries, where log2(c) ∈ [-15, 5] and log2(C) ∈ [-5, 15]. The formation of the initial complex begins with the selection of a random feasible point that must satisfy the minimum and maximum hyperparameter constraints. The simplex easily adapts itself to the local landscape such as a three-dimensional surface plot by elongating itself down long slopes, altering direction when encountering a valley at an angle, and contracting as it approximates the minimum (Singer and Nelder 2009). A thorough review and step-by-step explanation of the simplex methodology can be found in Nelder and Mead (1965), and Lagarias et al. (1998).
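As a rough illustration of simplex-style tuning, the sketch below minimises the average bootstrap test error over (log2 C, log2 c) with R's built-in Nelder-Mead, reusing boot_error() from the previous sketch. Note that optim() is unconstrained, so the box constraints are emulated by clamping; classyfire itself uses the Box (1965) constrained complex method, not this stand-in.

```r
# Simplex-style hyperparameter search (illustrative stand-in only):
# R's Nelder-Mead is unconstrained, so the bounds on log2(C) and
# log2(gamma) are enforced by clamping inside the objective.
objective <- function(par) {
  log2C <- min(max(par[1], -5), 15)   # clamp log2(C) to [-5, 15]
  log2G <- min(max(par[2], -15), 5)   # clamp log2(gamma) to [-15, 5]
  mean(replicate(25, boot_error(iris, cost = 2^log2C, gamma = 2^log2G)))
}

opt  <- optim(c(0, -5), objective, method = "Nelder-Mead")
best <- pmin(pmax(opt$par, c(-5, -15)), c(15, 5))  # apply the clamps
2^best  # tuned (C, gamma) within the predefined boundaries
```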
Ultimately, the optimal parameters are used to train a new classifier with the full D train dataset and test it on the independent test set D test , which has been left aside during the entire optimisation process. Even though the approach described thus far generates an excellent classifier, the random selection of test samples in the initial split may have been fortunate. For a more accurate and reliable overview, the whole process is repeated a minimum of 100 times, as illustrated in Fig. 1, until a stable average classification rate emerges. The output of this repetition consists of at least 100 individual classification models built using the optimum parameter settings. At this stage, rather than isolating a single classification model, all individual classifiers are fused into a classification ensemble. Ensembles have repeatedly been shown to perform better than individual classifiers (Opitz and Maclin 1999;Dietterich 2000;Westerhuis et al. 2008) and have the added benefit of providing a measure of confidence in the predictions -the greater the number of models that vote for a reported class the more confident we can be that this class is correct.
Calculation of performance metrics
The first indicator frequently used in multivariate classification is the percentage of correctly classified samples (%CC), defined as %CC = 100 × N_c / (N_c + N_nc), where N_c and N_nc are the numbers of correct and incorrect classifications respectively (Ciosek et al. 2005). The sum of N_c and N_nc is equal to the total number of instances n in the dataset. The model with the maximum number of correctly classified samples is considered optimal. In a similar manner, the percentages of correctly classified samples per class are also calculated. Comparing the individual class predictions is important, as the overall accuracy of a classifier may occasionally be misleading.
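In R, both the overall and the per-class %CC can be computed in a few lines (an illustrative sketch of the formula above):

```r
pct_cc <- function(truth, pred) {
  overall  <- 100 * mean(pred == truth)                 # %CC over all samples
  by_class <- 100 * tapply(pred == truth, truth, mean)  # %CC per class
  list(overall = overall, by_class = by_class)
}
```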
Permutation testing
Nonparametric permutation testing can be applied as a means of providing an indication of the statistical significance of the classification model performance (Anderson 2001). In each permutation iteration, the input data matrix remains unaltered while the associated class vector is randomly shuffled; the class distribution in the dataset is therefore preserved, but the samples are assigned to random classes. This procedure randomises the association between the input data and the classes while their initial distributional properties are preserved (Westerhuis et al. 2008). Permutation testing is repeated a large number of times (usually a minimum of 100) until a stable distribution under the null hypothesis is obtained. In this case study, the null hypothesis that we are trying to reject assumes that there is no significant relationship between the observed data and the sample classes, and that a classification model could therefore have been built to group samples into any arbitrary class.
At the end of permutation testing, we can determine the frequency of models that achieved accuracies equal to or higher than that of the original model. A frequency metric commonly used when testing a statistical hypothesis is the p-value (Hubert and Schultz 1976). A p-value less than or equal to a predefined threshold, commonly referred to as the significance level, indicates that the observed data are inconsistent with the assumption that the null hypothesis is true, and thus the null hypothesis must be rejected. A particular benefit of p-values is that they are directly comparable across different cases regardless of the number of samples, variables and classes in a dataset. However, it is important to exercise caution when using p-values as a basis for biological conclusions, as they are not as reliable or as objective as most scientists believe (Nuzzo 2014).
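The logic of the test can be sketched in R as follows; here fit_and_score() is a hypothetical placeholder standing in for the full bootstrapped training/testing workflow and is assumed to return a %CC value.

```r
perm_test <- function(x, y, observed_cc, n_perm = 100) {
  # Shuffling y preserves the class distribution but breaks its
  # association with x, giving the null distribution of %CC values
  null_cc <- replicate(n_perm, fit_and_score(x, sample(y)))
  p_value <- (sum(null_cc >= observed_cc) + 1) / (n_perm + 1)
  list(null = null_cc, p = p_value)
}
```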
R implementation
All of the above methods have been implemented in a new R package called classyfire (http://cran.r-project.org/package=classyfire). This implementation is highly integrated, such that most of the functionality is accessed using just three functions. The cfBuild() function implements the training and testing workflow as outlined in Sects. 2.2-2.4. As a minimum, two objects need to be provided as input to the workflow. One of these is the data matrix containing the data associated with every sample under study; any alignment or other pre-processing must be applied before passing the data to the function. The other mandatory input object contains essential information about the experimental design, specifically the group (class) to which each sample belongs. Optional arguments configure specific details of the workflow, such as the number of classifiers in the ensemble and the number of bootstrap iterations to perform, as well as whether execution is sequential or parallel.
On completion of the workflow, the function outputs an object containing the classification ensemble produced, together with detailed performance metrics. This object can be used to classify samples in further datasets using the cfPredict() function, and can also be interrogated to reveal performance metrics, in both numeric and graphical forms. The cfPermute() function is used to perform permutation testing to indicate the statistical significance of the classification performance obtained, as described in Sect. 2.5.
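A hypothetical end-to-end session might look like this (the argument names beyond the data matrix and class vector are assumptions based on the description above, and newData is a placeholder; consult the package documentation for the exact interface):

```r
library(classyfire)

# Build the ensemble: 100 bootstrap iterations per classifier,
# 100 classifiers in the ensemble, trained in parallel
ens <- cfBuild(inputData = X, inputClass = y,
               bootNum = 100, ensNum = 100, parallel = TRUE)

pred <- cfPredict(ens, newData)        # classify samples in a new dataset
perm <- cfPermute(X, y, permNum = 100) # permutation-based significance
```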
Datasets used
To demonstrate the use of the classyfire package, it was applied to two well-understood NMR datasets, both of which are included in the publicly available MetabolAnalyze R package (http://cran.r-project.org/package=MetabolAnalyze). These are simulated datasets designed to mimic experimental data previously reported in Carmody and Brennan (2010), and Nyamundanda et al. (2010). In brief, mice were randomly assigned to two treatment groups and treated with pentylenetetrazole (treated group) or saline (control group) for a period of 4 weeks. Urine was collected, brain regions were isolated at the end of the treatment period and metabolites extracted, and all samples were analysed using NMR. In the following, dataset A refers to the urine dataset: an 18-sample, 189-variable (189 spectral bins) dataset, split 50/50 between the two treatment groups (treated vs control). Dataset B is the brain dataset, comprising 33 samples (all from the control group mice) of 164 variables, with spectra collected from four different areas of rat brain: brain stem, cerebellum, hippocampus and pre-frontal cortex (Nyamundanda et al. 2010). These datasets therefore provide examples of a two-class problem and a four-class problem respectively.
Evaluation of classification accuracy
The overall classification accuracy obtained for dataset A was equal to 82.7 %. A breakdown of the classification results by class is shown in Fig. 2a. Figure 2b depicts the overall classification accuracy as a function of the number of SVMs in the ensemble, which shows that it stabilises once the ensemble passes 75 classifiers, suggesting that the decision to use 100 classifiers was appropriate.
For dataset B, the overall classification accuracy was equal to 83.2 %, with most of the erroneous classification being related to class 4. The results are presented graphically in Fig. 3. The breakdown of the classification results by class (Fig. 3a) shows that samples belonging to classes 1 and 2 are always identified correctly, while samples from class 3 are occasionally (in 15 % of attempts) wrongly identified as class 4, and class 4 is the most difficult to predict, with frequent misassignments to other classes. Figure 3b depicts the overall classification accuracy as a function of the number of SVMs in the ensemble, and 100 again appears to be a reasonable number of classifiers to use in this case.
Permutation testing results
Each permutation constitutes a single classification ensemble, which includes a predefined number of individual classifiers set by the user in the cfPermute() function (by default 100); each of these classifiers is in turn optimised using 100 bootstrap iterations (the default) for hyperparameter optimisation. The permutation tests were executed a total of 100 times for each dataset under study, resulting in a total of one million iterations per dataset (since there are 100 classifiers per ensemble, each requiring 100 bootstrap iterations).
For both datasets under study, the non-permuted overall accuracies of the classification ensembles are well above the 95 % confidence intervals of the permutation distributions; indeed, they are even greater than the 99 % confidence intervals, giving greater confidence in our results. For instance, the non-permuted %CC for the urine data (the average test accuracy of the ensemble) is equal to 82.7 %, which is significantly higher than 53.3 % and 72.0 %, the values corresponding respectively to the upper 95 and 99 % confidence levels of the permuted distribution (Fig. 2c); the 95 % confidence interval of the distribution is retrieved using built-in classyfire functions as part of the "five number summary", and can be graphically represented as in Figs. 2d and 3d. Similarly, in the case of the brain data, the non-permuted classification accuracy was equal to 83.2 % (Fig. 3c), well above the upper 95 and 99 % confidence levels, equal to 26.6 and 38.1 % respectively. In both instances, the p-value was less than the significance level of 0.01, giving a strong indication of the statistical confidence of our results.
[Fig. 4: Execution time as a function of the number of processing cores used for training a classification ensemble for the urine dataset (dataset A), which consists of 100 independent classifiers, internally optimised using 100 bootstrap iterations (the default values of cfBuild() were used throughout). The analysis was executed on a dual-CPU Intel Xeon X5660 at 2.8 GHz, which features 8 cores with 16 threads each, and 32 GB RAM.]
Computational efficiency
In a direct comparison of the SVM optimisation algorithms, our heuristic method outperformed a traditional grid search by a factor of 13.5 when running on a single processing core. Training on multiple cores provides a speedup in proportion to the number of cores used. These results, obtained using the urine dataset, are shown in Fig. 4. Permutation testing was not included in this benchmarking experiment, but the processing required is directly proportional to the number of permutations, so execution times for a 100-iteration permutation test can be extrapolated to approximately 24 h for our heuristic method versus 12 days for the grid search.
Discussion
By speeding up the SVM optimisation by more than an order of magnitude, we have been able to produce a robust and easy-to-use multivariate classification package for R. While every effort has been made to ensure that the package produces high-performance classification models, evaluated using the most accurate performance metrics available, there are of course limitations to what can be achieved with a given dataset. The experimental design used to generate the dataset is key. In particular, the general applicability of the classification ensemble will be determined by the number of samples available and by how well those samples represent the biological phenomena under study. If variance observed in the real world is not represented in the data used to train and test the classification models, then the performance reported by classyfire is unlikely to be achieved in real-world application. This mistake is commonly made in clinical case/control studies, where control samples are taken only from healthy volunteers, not from individuals with other diseases.
The current implementation of classyfire is solely focused on the optimisation of RBF SVMs with bootstrapping. As part of future developments, the application of the package could be extended to support different types of SVMs (e.g. polynomial kernel) as well as different types of classifiers.
Conclusions
We have produced an easy-to-use R package that implements current best practice in the training and evaluation of models for recognising samples from acquired analytical data. Specifically, the package allows the user to build high-performance ensembles of SVM classifiers and thoroughly evaluate and visualise the ensemble's classification ability using bootstrapping and permutation testing. This has been made possible by developing a novel SVM optimisation strategy that reduces the time needed to execute this process by more than an order of magnitude. The package's support for parallel processing enables execution time to be reduced even further, roughly in proportion to the number of available processor cores. Our aim in releasing this package is to help increase the uptake of best practice by making our robust training and evaluation workflow available to biological researchers who may previously have been unable to adopt it due to lack of time or expertise.
Acknowledgments This work was partially funded by the European Commission FP7 via the SYMBIOSIS-EU project (Project Number 211638).
Compliance with ethical standards
Conflict of interest The authors declare no conflicts of interest.
Ethical approval This article does not contain any new studies with human or animal subjects. The datasets used were simulated based on previously collected experimental data, the ethical approval for which is reported in Carmody and Brennan (2010).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://crea tivecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2017-08-02T23:56:33.444Z | 2015-11-21T00:00:00.000 | {
"year": 2015,
"sha1": "42a19e4b6389dd1359add2a034e5441a834cbc25",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11306-015-0894-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "42a19e4b6389dd1359add2a034e5441a834cbc25",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
6184031 | pes2o/s2orc | v3-fos-license | Correlation of abdominal adiposity with components of metabolic syndrome, anthropometric parameters and Insulin resistance, in obese and non obese, diabetics and non diabetics: A cross sectional observational study. (Mysore Visceral Adiposity in Diabetes Study)
Objectives: To measure Visceral Fat (VF) and Subcutaneous Fat (SCF) by ultrasound in obese and non-obese diabetics and obese and non-obese non-diabetics in a South Indian (Asian Indian) population, and to correlate them with Body Mass Index (BMI), Waist Circumference (WC), components of the metabolic syndrome and Insulin Resistance (IR). Research Design and Methods: This was a prospective observational study; 80 diabetics (40 obese and 40 non-obese) and 80 non-diabetics (40 obese and 40 non-obese), a total of 160 subjects, were enrolled, of whom 153 completed the study. The subjects were evaluated with respect to BMI, WC, Blood Pressure (BP), Fasting Blood Sugar (FBS), Fasting Insulin Levels (FIL), HbA1c and lipid profile. SCF and VF were measured by ultrasonography. The results were statistically analyzed. Results: WC correlated significantly with VF in all the groups. Diabetics had more VF than non-diabetics. Insulin resistance was significant in all the groups; however, diabetics had greater levels of IR. BMI, WC, VF and SCF had no correlation with IR and no significant correlation with the metabolic parameters. Conclusions: In this study population, WC was found to be a useful surrogate measure of VF, conforming to its well-established applicability in other populations. Contrary to studies elsewhere, SCF and VF were found to be poor indicators of Insulin Resistance. BMI, WC, VF and SCF were not useful in the prediction of the metabolic syndrome. Ultrasound was found to be an easy and economic method of measuring abdominal adiposity, and actual measurement of abdominal fat was more informative than anthropometric measurements.
… of India can be categorized as overweight or obese, which is an alarming figure for a developing country. [5] Waist Circumference (WC) is a simple measurement for Visceral Fat (VF), but may not represent only VF, as subcutaneous fat (SCF) also contributes to it. [4] WC has been shown to correlate with visceral fat and with hyperglycemia, hypertension and dyslipidemia. [6] Asian Indians have a higher risk of obesity-related complications at a lower level of BMI vis-à-vis their Caucasian counterparts, owing to higher visceral fat. [7] The cut-off points for Asian Indians, as modified and recommended by the World Health Organization (WHO), differ from those for Western populations: a Body Mass Index (BMI) of 23-24.9 kg/m² denotes overweight and >25 kg/m² denotes obesity. [5] Measurement of visceral fat may have more significance than measuring WC. Computerized Tomography (CT) is the gold standard for the measurement of visceral fat volume, but it is expensive, involves radiation and may not be universally available. Magnetic Resonance Imaging (MRI) is also a good method, but it is much more expensive, overestimates fat deposits and again may not be universally available; neither method can be used routinely. [8-10] Ultrasonography is relatively inexpensive, readily available, equally reliable, involves no radiation, and is a method with established validity. [11-19] Studies have shown that visceral fat volume measured by CT correlates very well with visceral fat measured by ultrasound (r = 0.710, P < 0.001), [2] (r = 0.860, P < 0.001), with a sensitivity of 69.2%, a specificity of 82.8% and a diagnostic concordance of 74%. [18] This study was carried out at Mysore, a city in South India. We measured SCF and VF in diabetics (obese and non-obese) as well as non-diabetics (obese and non-obese) using ultrasonography. We correlated WC and BMI with SCF and VF, and each with BP, triglycerides (TG), high-density lipoprotein (HDL), total cholesterol (TC) and LDL (components of the metabolic syndrome), and Insulin Resistance (IR).
Objectives
To correlate sonographically measured SCF and VF with BMI and WC, blood pressure, total cholesterol, triglycerides, HDL and LDL (components of the metabolic syndrome), and insulin resistance, in diabetics (obese and non-obese) and non-diabetics (obese and non-obese).
Materials and Methods
This was a prospective, cross-sectional, comparative, observational study carried out from March 2010 to February 2011. Ethical clearance was obtained from the institutional ethics committee. One hundred and sixty subjects were recruited in four groups of 40 each:
• Group A - obese diabetics
• Group B - non-obese diabetics
• Group C - obese non-diabetics
• Group D - non-obese non-diabetics
Subjects of both sexes aged 18 years and above were recruited. Written informed consent was obtained from all participants. Height (cm) and weight (kg) were recorded. Waist circumference (cm) was measured midway between the lower border of the ribs and the iliac crest with the subject in the standing position. Blood pressure was recorded in the sitting position, in the right arm, with a standard mercury sphygmomanometer after a five-minute rest; the average of three readings was taken. Asian Indian BMI criteria were followed for categorizing subjects as obese and non-obese. Fasting Blood Sugar (GOD-PAP method), HbA1c (HPLC method), serum cholesterol (CHOD-PAP method), serum triglycerides (enzymatic method), HDL (3rd-generation direct assay), LDL (3rd-generation direct assay) and fasting serum insulin (CLIA method) were assayed for each subject at an NABL-accredited standard laboratory. Insulin resistance was calculated with the HOMA-IR formula: fasting blood sugar (mmol/L) multiplied by fasting insulin (µIU/mL), divided by 22.5.
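As a concrete illustration (not part of the study's analysis pipeline), the HOMA-IR calculation can be expressed as a small R function; the division by 18 converts glucose from mg/dL to mmol/L:

```r
# HOMA-IR = fasting glucose (mmol/L) x fasting insulin (µIU/mL) / 22.5;
# FBS measured in mg/dL is converted to mmol/L by dividing by 18
homa_ir <- function(fbs_mg_dl, insulin_uIU_ml) {
  (fbs_mg_dl / 18) * insulin_uIU_ml / 22.5
}

homa_ir(100, 10)  # ~2.47
```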
Sonographic measurements
The measurements of subcutaneous fat (SCF), pre-peritoneal fat (PPF) and visceral fat (VF) were done by the same ultrasonologist for all subjects, using a GE P5 Logic system with a multiple-frequency (2-5 MHz) convex probe for measuring VF and a linear probe (8-12 MHz) for measuring abdominal wall fat. The criteria defined by Stolk et al. [13] were used for the measurements; the details are given below.
Visceral fat thickness (defined as the distance between the anterior border of the lumbar vertebra and the posterior surface of the rectus abdominis muscle) was measured midway between the xiphisternum and the umbilicus, approximately 5 cm from the umbilicus, at three positions along the horizontal line [Figure 1]. All measurements were made at the end of quiet expiration, applying minimal pressure so as not to displace or deform the abdominal contents. [13] Longitudinal scans were obtained using a linear probe along the midline (linea alba) and the fat-skin barrier. The thickness of the subcutaneous fat was defined as the distance between the anterior surface of the linea alba and the fat-skin barrier. Pre-peritoneal fat was measured as extending from the anterior surface of the left lobe of the liver to the posterior surface of the linea alba [Figure 2].
Statistical analysis
Group comparisons for parameters such as VF and SCF were made by one-way ANOVA, whereas Pearson's product-moment correlation was employed to examine the relationships between physical and ultrasound parameters across all the groups for blood pressure, TC, TG, HDL, etc. Confidence limits at the 95% level were calculated for mean intra-abdominal fat values for all four groups. The significance level was fixed at 0.05 for all statistical tests. The statistical calculations were done using PASW (version 18.0, previously named SPSS).
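For illustration, the equivalent analyses can be sketched in R (the study itself used PASW/SPSS; the data frame df and its column names below are hypothetical):

```r
# df: one row per subject, with columns group (A-D), vf, scf, wc, bmi, ...
anova_vf <- aov(vf ~ group, data = df)
summary(anova_vf)                             # one-way ANOVA for VF

cor.test(df$wc,  df$vf,  method = "pearson")  # WC vs visceral fat
cor.test(df$bmi, df$scf, method = "pearson")  # BMI vs subcutaneous fat

# 95% confidence limits for mean VF in each group
tapply(df$vf, df$group, function(v) t.test(v)$conf.int)
```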
Results
Out of a total of 160 subjects recruited, 153 completed the study; 93 (60.78%) were males and 60 (39.22%) females. The mean age of the males was 43.40 ± 12.23 years and of the females 44.29 ± 11.61 years [Table 1]. There were 42 subjects in Group A, 36 in Group B, 38 in Group C and 37 in Group D [Table 1]. Table 2 depicts the anthropometric, clinical and biochemical measurements in all four groups. As can be seen, the highest VF was observed in Group A, followed by Groups C, B and D, and the comparison of VF between the groups was statistically significant (P = 0.000). The highest IR was also in Group A, followed by Groups B, C and D in that order; as expected, the diabetic groups had higher IR than the non-diabetics, and here too the comparisons between the four groups were highly significant (P = 0.006). SCF was highest in Group C, followed by Groups A, B and D, showing that obese non-diabetics had both high SCF and high VF. Both BMI and WC were highest in Group A, followed by Groups C, B and D. As far as the metabolic parameters were concerned, not much significance could be attached to the values in the diabetics, whether obese or non-obese, considering that all of them were on antihypertensive and lipid-lowering medications. The values in non-diabetics for systolic blood pressure (SBP), diastolic blood pressure (DBP), TC, HDL and TG revealed no significant difference in comparison with the diabetics. LDL was the lone parameter not significantly higher in these groups.
BMI and WC are the two anthropometric measurements routinely used to grade obesity. We aimed to determine whether these anthropometric measurements correlate with SCF or VF. BMI correlated significantly with SCF and VF in Groups A, C and D. WC correlated with SCF in Groups A and D, whereas it correlated significantly with VF in all four groups (P = 0.003, P = 0.000) [Table 3]. The VF/SCF ratio, considered a significant parameter for visceral adiposity, was compared between diabetics (both obese and non-obese) and non-diabetics (both obese and non-obese); diabetics had a significantly higher ratio, signifying higher VF (P = 0.000), as shown in Table 4.
Blood pressure, HDL, TG and WC are components of the metabolic syndrome. We tried to correlate BMI, WC, SCF and VF with the metabolic parameters to determine whether any consistent relationship existed. BMI correlated only with DBP in Group D; WC correlated with DBP in Groups B and D and with TG in Groups B and C; SCF correlated with SBP in Group A, with TC in Group D and with LDL in Group D; whereas VF correlated only with TG in Group D. None of these correlated with IR in any of the groups, and there was no consistency in any of these correlations [Table 5].
Insulin Resistance (IR) was the most important parameter investigated, and we wanted to know whether an increase in VF would increase IR. IR was significant when compared between the groups (P = 0.006), as shown in Table 2, but surprisingly did not correlate with either VF or SCF, or with BMI or WC [Table 5].
Discussion
This study is, to our knowledge, one of very few that has undertaken a comprehensive comparison across four groups (obese and non-obese non-diabetics and obese and non-obese diabetics) in an Asian Indian population; the ages of the participants of both genders were well matched [Table 1]. Waist circumference (WC) is considered a better predictor of VF than BMI in normal subjects, [1] whereas BMI correlated better with SCF than VF. [20] In diabetic subjects, WC predicted VF better than BMI and SCF. [7] Asian Indians have higher truncal fat at a lower BMI compared with other ethnic groups. [1,21] Ultrasound measurement of VF correlated better with components of the metabolic syndrome (Met-S) than measured WC. [22] Increased VF is thought to play a major role in the development of T2DM, CVD and Met-S. [23] There is no consensus regarding the cut-off points of VF above which these risks increase: studies have postulated a VF of 6.9 cm in women, [24] of 7-9 cm in men and 7-8 cm in non-diabetics, of 4.67 cm in men and 3.55 cm in women diabetics, [25,26] and of >5.8 cm in men and >4.7 cm in women diabetics; [20] a VF/SCF ratio of 2.7 ± 1.1 [24] or >2.5 would be likely to increase the risk. [27] This study showed a uniformly high VF (9.16 ± 1.93, 7.03 ± 1.88, 8.08 ± 2.08 and 5.86 ± 1.65 cm) in all the four groups. VF has been a significant part of the definition of the metabolic syndrome, which also includes BP, TG, HDL and blood sugar. A strong association of VF has been seen with T2DM, [4,7,27] with TG and TC, and with decreased HDL. [1,27] Mesenteric fat had a significant correlation with SBP, TG and HDL. [14] In this study, we did not define Met-S in our subjects, and those in Group D did not have all the components. Since we measured BP and the components of Met-S in all subjects, correlations were made between WC, BMI, SCF and VF and the components of Met-S. None of them showed any consistent correlation with the components of Met-S. The inconsistent correlations in obese and non-obese diabetics are probably due to the fact that they were on treatment at the time of recruitment. Obese and non-obese non-diabetics were not on any treatment, and still there was no consistent correlation, which probably means that VF alone may not be responsible for the changes in the components of the metabolic syndrome.
Obesity in general, and visceral obesity in particular, is considered the most important factor in the causation of Insulin Resistance (IR). Asian Indians have more IR independent of generalized or truncal obesity. [28] VF predicted IR [21,29] and was the conduit by which obesity led to IR. [30,31] VF has been implicated in hepatic IR by producing more free fatty acids and lipolysis; the secretion of several inflammatory adipocytokines by VF has also been said to lead to IR. [27] On the other hand, IR was also associated with SCF mass. [32-34] Hence, clear proof of the association of VF with IR is lacking. [32] It could be that VF and SCF and their joint interaction lead to IR. [31] Asian Indians probably have a metabolic defect that causes IR independent of generalized or truncal obesity, [28] with possible contributions from genes and environmental factors. [35,36] It should be noted that there is at present no consensus regarding cut-off values for IR: values of 1.35-1.96 for normal individuals and 2.42 for diabetics [37] and of 1.78 for normal and 3.88 for diabetic individuals [38] have been suggested by the homeostasis model assessment of insulin resistance (HOMA-IR) method. The present study showed significantly higher values than those mentioned above in diabetics and obese non-diabetics. Between-group comparisons of IR were significant (P = 0.006) [Table 2]. … a major grant was provided by the … Trust (MPMRT), Mysore, and a minor grant was provided by the Medical Education and Research Trust (MERT), Bangalore. We are grateful to both organizations. We thank all our subjects for their participation and cooperation.
Conclusions
In this study of a South Indian population, WC was found to be a useful surrogate measure of VF. SCF and VF were found to be poor indicators of insulin resistance. BMI, WC, VF and SCF did not correlate consistently with the components of the metabolic syndrome, suggesting a need for a reassessment of their role. Further research examining other pathogenic mechanisms for IR in Asian Indians is needed. Actual measurement of abdominal fat was found to be superior to anthropometric measurements for the evaluation of IR.
"year": 2014,
"sha1": "470acfbdbc3ed25271ba9a952761d60aa1312805",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/2230-8210.139231",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5823452ca388d677913e7fb8b7b2cded96aae204",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125939338 | pes2o/s2orc | v3-fos-license | Numerical simulation of the flow distribution in a trap of a propellant tank with micro gravity
A trap device is defined as a closed structure which holds and provides a specific quantity of propellant using surface tension forces. A trap in a vane-type PMD is investigated, examining the influence of the cone angle of the trap and the flow distributions under different conditions in space. The VOF model can be used to simulate the flow distribution in a trap under zero or small gravity. The optimal angle for the trap is α = 45°, which gives the best expulsion efficiency for the propellant tank. The trap can be fully filled with propellant after 21 seconds under the zero-gravity condition and 37 seconds under the north-south insurance condition; a gas bubble is left in the trap at the end of the refilling process. There is no change in the flow distribution in the trap when the trap is fully filled under the reverse-gravity and sink conditions.
Introduction
Surface tension forces are negligible in most engineering problems. However, in the low-gravity environment of orbiting vehicles, surface tension forces are significant and often dictate the location and orientation of liquid within vessels, conduits, etc. By carefully designing structures within a propellant tank, one can utilize these forces to ensure gas-free propellant delivery. These structures have come to be known as propellant management devices, or PMDs.
Traditionally, PMDs are designed for each specific mission scenario and tank size; as a result, PMDs can be found in numerous sizes and configurations. PMDs can be classified into three broad categories: partial control devices, total control devices, and total communication devices. By definition, communication PMDs provide gas-free propellant delivery by establishing a communication path between the bulk of the propellant and the outlet or another device component, such as a sponge. The vane-type PMD is such a device.
Sharipov [1] investigated the gaseous mixture flow of a PMD through a long tube at arbitrary Knudsen numbers. Jaekle analyzed the capability of the vane-type PMD by analyzing the influence of vanes, sponges, galleries, traps and troughs [2-4] on the propellant distribution in the tank. Tam [5] proposed a new PMD capable of transferring both gas-free propellant and liquid-free pressurant on demand; its performance analysis utilized the same design methodology and conservative approaches as all previous PMD design efforts. Hu [6] studied the influence of width and angle on the performance of a vane-type PMD, finding that increasing the width of the vane improves the flow rate along the vanes but decreases the climbing height of the propellant. Liu [7] analyzed the management performance of a vane-type tank under different operating conditions by numerical simulation. Zhuang [8] investigated the natural frequencies and damping effects of diaphragm-implemented spacecraft propellant tanks using computational methods. In the present work, a trap in a vane-type PMD is investigated: the influence of the cone angle of the trap is studied, and flow distributions under different conditions in space are examined.
Trap Geometry
A trap offers a reservoir of propellant usable during high-acceleration maneuvers. A trap retains liquid even when horizontal or inverted by using the surface tension forces present in a wetted porous element. Propellant will remain within the trap against hydrostatic forces only if the bubble point of the porous element is not exceeded. If the maximum pressure difference across the porous element established by surface tension (the bubble point) is insufficient to balance the hydrostatics and flow losses, gas will enter the trap through the porous element and the trap will leak. By choosing a porous element with smaller pores, higher accelerations and/or larger distances can be accommodated. The structure of the trap in a vane-type PMD is shown in Figure 1 and Figure 2. The trap has 24 small vanes. Three cone angles α (45°, 48° and 51°) of the trap are studied.
The tracking of the interface between the phases is accomplished by solving a volume fraction transport equation of the form ∂α/∂t + u_i ∂α/∂x_i = R/ρ, where R is the source term, ρ is the mixture density, and u_i is the mass-averaged velocity.
The properties appearing in the transport equations are determined by the presence of the component phases in each control volume. In a two-phase system, for example, if the phases are represented by the subscripts 1 and 2, and if the volume fraction α₂ of the second of these is being tracked, the density in each cell is given by ρ = α₂ρ₂ + (1 − α₂)ρ₁, where α is the volume fraction of the corresponding phase.
The volume fraction equation is not solved for the primary phase; the primary-phase volume fraction is computed from the constraint that the volume fractions of all phases sum to unity, Σ_q α_q = 1.
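To make the two relations concrete, the following R sketch (illustrative only; the density values are placeholders, and helium is assumed to be the primary phase) evaluates the mixture density of a cell from the tracked secondary-phase volume fraction:

```r
# Mixture density in a cell from the tracked secondary-phase volume
# fraction: rho = alpha2 * rho2 + (1 - alpha2) * rho1
rho_He   <- 0.16   # helium (primary phase), kg/m^3 -- placeholder value
rho_N2H4 <- 1004   # hydrazine (secondary phase), kg/m^3 -- placeholder

mixture_density <- function(alpha2, rho1 = rho_He, rho2 = rho_N2H4) {
  stopifnot(alpha2 >= 0, alpha2 <= 1)  # the fractions must sum to one
  alpha2 * rho2 + (1 - alpha2) * rho1
}

mixture_density(0.75)  # a cell three-quarters filled with propellant
```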
Simulation conditions
The commercial CFD code FLUENT was used to perform the simulations. The SIMPLEC algorithm was used to enforce mass conservation, and a no-slip boundary condition was applied at the walls. The two phases in the tank were He and N2H4; their properties are shown in Table 2. For the present unsteady flow calculations, the time step was 0.0001 s. All simulations were run until convergence, defined as a reduction of the residual error to less than 0.0001. Three-dimensional calculations were performed to investigate the flow distribution in the tank. The model's grids, composed of unstructured hexahedra and tetrahedra, were developed using ICEM, a commercial software package used for CFD discretization.
Trap performance analysis
Refillable traps use the hydrostatics and dynamics created by the main-engine settling acceleration to eject the gas ingested during ignition, thereby refilling the trap. The filling time of a trap is very important for the propellant system.
North-south insurance condition.
Under the north-south insurance condition, the gravity direction coincides with the X direction. The refilling process of the trap under this condition is shown in Figure 9 (black denotes the propellant); the gravity is 5×10⁻³ g₀. Propellant behavior in the trap is the same as in the refilling process under the zero-gravity condition. The trap is fully filled with propellant after 37 seconds, and again a gas bubble is left in the trap at the end. It can be seen that the north-south insurance condition does not influence the refilling process of the propellant for this trap.
Conclusion
A trap device is defined as a closed structure which holds and provides a specific quantity of propellant using surface tension forces. This paper addresses the propellant distribution and optimization of a vane-type trap of a PMD in a tank. The VOF model can be used to simulate the flow distribution in a trap under zero or small gravity. The optimal angle for the trap is α = 45°, which gives the best expulsion efficiency for the propellant tank. The trap can be fully filled with propellant after 21 seconds under the zero-gravity condition and 37 seconds under the north-south insurance condition; a gas bubble is left in the trap at the end of the refilling process. There is also no change in the flow distribution in the trap when the trap is fully filled under the reverse-gravity and sink conditions.
"year": 2018,
"sha1": "0efdbf5017dbe989764b98ba102b3ad410057c43",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/163/1/012083",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0d82ed65910667673746a3c4ebf4e17866ba3cb7",
"s2fieldsofstudy": [
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
19026550 | pes2o/s2orc | v3-fos-license | The desferrioxamine-prochlorperazine coma—clue to the role of dopamine-iron recycling in the synthesis of hydrogen peroxide in the brain
INTRODUCTION
A major activity at glutamatergic synapses in the brain is the production of potentially toxic oxygen radicals, in particular superoxide, by postsynaptic enzymes such as PGH synthase and by mitochondria. These need antioxidant mechanisms, such as the local production of antioxidant molecules (such as glutathione and ascorbate) and antioxidant enzymes, for protection of the neuron and its component chemicals (Smythies, 1999a,b). There may also be a further mechanism for antioxidant protection of neurons that has been little explored: the O'Brien cycle. This paper describes this cycle and presents a hypothesis as to its in vivo function.
THE FACTS
For a baseline, it is convenient to give a brief account of the antioxidant defenses of the glutamate synapse. Here the main players in the cleft are ascorbate (the major hydrophilic extracellular antioxidant in brain), glutathione (the major lipophilic antioxidant), carnosine, and the antioxidant enzyme superoxide dismutase (SOD) (Smythies, 1999a). Reuptake of glutamate by the glutamate transporter is associated with a matching release of ascorbate into the cleft (Grünewald, 1993; Rebec and Pierce, 1994). Glutathione and SOD are actively secreted into the cleft by astrocytes (Stone et al., 1999). Presynaptic glutamate vesicles also contain the antioxidant peptide carnosine, which is released along with glutamate (Boldyrev et al., 1997). Lastly, dopaminergic boutons en passant closely border glutamatergic synapses: stimulation of D2 receptors results in the induction, in the postsynaptic neuron, of antioxidant enzymes important for redox cycling, including SOD, gamma-glutamylcysteine synthase, glutathione synthase, glutathione peroxidase, glutathione S-transferase, and glutathione reductase (Tanaka et al., 2001; Smythies, 2002).
However, less data are available about the antioxidant protection of catecholamines in and around the synapse. In this regard, it is important to note that the major mode of interneuronal communication by dopamine is volume transmission: dopamine is released directly into the extracellular space, modulated by the hydrophilic environment of the extracellular matrix (Rice and Cragg, 2008; Fuxe et al., 2013). Since dopamine is easily oxidized (Smythies, 1999b), this requires an efficient antioxidant system. There is evidence that this is partly provided by the hydrophilic molecule ascorbate:
- In the retina, Neal et al. (1999) showed that the release of ascorbate protects dopamine from oxidation.
- Using slow-scan voltammetry following the injection of ascorbate oxidase into the rat striatum, Rebec and Wang (2001) reported a rapid decline in both ascorbate and behavioral activation. Within 20 min, an ascorbate loss of 50-70% led to a near-total inhibition of all recorded behavior, including open-field locomotion, approach of novel objects and social interaction with other rats.
- Using microdialysis, Morales et al. (2012) showed that, in the striatum, ascorbate release prevents dopamine oxidation. They also found that glutamate release inhibited dopamine uptake, while dopamine release inhibited glutamate release.
However, this system is exceedingly complex. Hara et al. (2009) presented evidence that, under some circumstances involving cooperation with iron, ascorbate can act as a pro-oxidant and stimulate the generation of the highly neurotoxic hydroxyl radical.
Other than this, there is scant information on other antioxidant mechanisms in this system. However, a clue is offered by an obscure clinical source. Blake et al. (1985) reported that a combination of normal doses of the iron chelator desferrioxamine and the antiemetic D2-receptor blocker prochlorperazine, given to two patients with rheumatoid disease but otherwise normal iron metabolism, induced a deep coma that lasted for 2 days in one case and 3 days in the other. Their EEGs showed increased slow-wave activity. Neither drug in isolation produces any such effect at any dose. The same effect was obtained in rats, with the coma lasting approximately 7 h in normal subjects and some 36 h in iron-deficient rats. To date, the mechanism producing this phenomenon remains obscure.
AN UNEXPLAINED CLINICAL SYNDROME: THE DESFERRIOXAMINE-PROCHLORPERAZINE COMA
In a previous communication, Smythies (2011) suggested that this coma might be caused by disruption of the O'Brien cycle in the brain. This cycle is generated by the interaction of superoxide, dopamine and iron (Zhao et al., 1998; Siraki et al., 2000). These workers demonstrated that, in isolated hepatocytes, cycling between ferrous and ferric iron, linked to cycling between the catecholamine (in particular, dopamine) and its semiquinone in the presence of superoxide, forms an effective dismuting system that transforms the superoxide into water and hydrogen peroxide. This cycle depends on the presence of free iron, superoxide and dopamine molecules at the same microanatomical locus (for background information, see Smythies, 1999b). Therefore, the question can be asked: where in the brain could such propinquity exist?
Iron is a highly reactive molecule, and the only place within a neuron where free iron is known to exist is in endosomes, where, however, free dopamine is not to be found. With that said, there may be an alternative location worthy of consideration: the extracellular space between neurons. Here, as we have noted above, dopamine is present and functioning in its volume transmission role, as is superoxide derived from NADPH-oxidase on the extracellular side of cell membranes (Oakley et al., 2009), from extracellular microglia (Zoccarato et al., 2005) and from mitochondria (Bao et al., 2009). NADPH-oxidase is widely distributed in the brain (Oury et al., 1999). Iron is also present, but is ensconced in its carrier molecule, transferrin. However, the affinity of catecholamines for iron is greater than that of transferrin, and Sandrini et al. (2010) have shown that catecholamines can "steal" ferric iron from transferrin by forming catecholamine-iron complexes. Therefore, it is quite possible that the O'Brien cycle could operate within the dopamine-containing extracellular space in the brain.
It is well established that dopamine oxidation products, such as dopamine quinone and dopaminochrome, which play a role in the O'Brien cycle, can be highly neurotoxic (Smythies, 1999b). Furthermore, dopamine activates MAO activity, and, in the presence of ALDH2 deficiency, this activation can be toxic, particularly to mitochondria, owing to the formation of H2O2 and toxic aldehydes (Kaludercic et al., 2014). However, our hypothesis is not concerned with the neurotoxic role of dopamine and its quinones (when they are not engaged in the cycle), but with their role in reducing toxic levels of superoxide and providing adequate levels of hydrogen peroxide (when they are active in the cycle). The O'Brien cycle merely recycles dopamine and its quinones, with no effect on the levels of either. Therefore, considerations of dopamine and dopamine quinone toxicity per se lie outside the scope of this paper.
Another aspect of redox mechanisms at the dopamine synapse may be related to its co-transmission with glutamate. Most, if not all, dopaminergic neurons in the midbrain utilize glutamate as a co-transmitter (Descarries et al., 2008; Koos et al., 2011). Activation of dopaminergic neurons elicits small-amplitude postsynaptic glutamatergic currents in all spiny pyramidal neurons in the nucleus accumbens, mediated by both AMPA and NMDA receptors; this is accompanied by the simultaneous release of dopamine. VGluT2 knockout has complex effects on risk-taking behaviors and anxiety (Koos et al., 2011). These authors list three possible functions mediated by this system: (1) various local postsynaptic effects; (2) glutamate release from these terminals may function as a heterosynaptic modulator of other inputs; and (3) the development and/or maintenance of dopaminergic synapses. In the case of the third possibility, one relevant detail may be the redox factors described above. An added benefit to dopaminergic transmission from co-transmission with glutamate may be the availability of the extensive antioxidant resources of the glutamate synapse listed earlier. In other words, if these neurons use both dopamine and glutamate as their transmitters, then the antioxidant mechanisms associated with the glutamate part may also be available to the dopamine part.
OUR HYPOTHESIS
Our hypothesis for the cause of the desferrioxamine-prochlorperazine coma is predicated on the fact that the brain has two powerful antioxidant-protective mechanisms in play that both involve dopamine. The first mechanism is the activation of a series of antioxidant enzymes by the dopamine D2-receptor, which is blocked by prochlorperazine. The second is the O'Brien cycle that would collapse if deprived of iron by desferrioxamine. The brain may be able to operate in the absence of either of these mechanisms, but not both. A series of experiments are indicated to test this hypothesis. These would include repeating the rat experiment by Blake et al. (1985) using a range of dopamine receptor blockers and iron chelation agents.
The implications of confirming that the O'Brien cycle is present and active in the brain are broad, and would extend well beyond a simple redox function or the control of superoxide toxicity. The main product of the O'Brien cycle is hydrogen peroxide, a molecule involved in a wide range of signaling pathways in the brain. For example:
- Modulation of both GABAergic and glutamatergic functions (Frantseva et al., 1998; Sah and Schwartz-Bloom, 1999).
- Blockade of catecholamine and glutamate uptake by synaptic vesicles (Wang and Floor, 1998).
- Inhibition of synaptic dopamine release (Chen et al., 2001).
- Inhibition of adenosine-mediated synaptic transmission in hippocampal slices (Masino et al., 1999).
- Maintenance of essential cell populations in the brain (Dickinson et al., 2011).
- Modulation of membrane and cytoskeletal properties in astrocytes and an increase in their intercellular connections (Zhu et al., 2005).
A general review of this topic is provided by Rice (2011). Conceivably, the action of dopamine in the O'Brien cycle might allow it to modulate the supply of hydrogen peroxide for each of these mechanisms. In and of itself, this would have far-reaching functional consequences in the context …
ACKNOWLEDGMENT
We are grateful to Kjell Fuxe for his helpful input to this paper. | 2016-06-17T20:25:22.023Z | 2014-08-04T00:00:00.000 | {
"year": 2014,
"sha1": "771d72f301ab82fdc879d397b297965c2300c269",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnmol.2014.00074/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "771d72f301ab82fdc879d397b297965c2300c269",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
235761575 | pes2o/s2orc | v3-fos-license | Effect of Maternal Triclosan Exposure on Neonatal Birth Weight and Children Triclosan Exposure on Children's BMI: A Meta-Analysis
Background: Triclosan (TCS) is an environmental chemical with endocrine-disrupting effects that can enter the body through the skin or oral mucosa. Human data on the effect of TCS exposure during pregnancy on neonatal birth weight and of TCS exposure during childhood on children's growth are scarce. Objectives: To investigate the association between maternal urinary TCS level and neonatal birth weight, as well as between children's urinary TCS level and children's body mass index (BMI). Methods: A systematic literature search was conducted using PubMed, Cochrane Library, and Web of Science. Seven epidemiological articles with 5,006 participants, published from September 25, 2014 to August 10, 2018, were included in the meta-analysis of the relationship between maternal exposure to TCS and neonatal birth weight, and three epidemiological articles with 5,213 participants, published from July 22, 2014 to September 1, 2017, were included in a second meta-analysis of the relationship between children's exposure to TCS and children's BMI. We used Stata 16.0 to test the heterogeneity among the studies and to calculate the combined effect value and 95% confidence interval (CI) under the selected models. Results: TCS exposure during pregnancy was not significantly associated with neonatal birth weight; the forest plots gave ES (Estimate) = 0.41 (95% CI: -11.97 to 12.78). Children's urinary TCS level was likewise not associated with children's BMI: ES = 0.03 (95% CI: -0.54 to 0.60). Conclusions: This meta-analysis demonstrated no significant association between maternal TCS level and neonatal birth weight, and no relationship between children's urinary TCS level and children's BMI.
INTRODUCTION
Triclosan (TCS) is a synthetic, broad-spectrum biocide that was first licensed for use in the 1960s (1). Nowadays, TCS is still widely used as an antibacterial agent in consumer products such as soap, hand sanitizer, toothpaste, and mouthwash. TCS can be absorbed into the human body through the skin or oral mucosa and then enters various human fluids and tissues (2).
Currently, the relationship between exposure to TCS during pregnancy and neonatal birth weight, as well as children's body mass index (BMI), is not clear. A previous study found that females tended to have higher TCS exposure levels than males (3). A study of mothers in New York found that TCS could be detected in 100% of 181 urine samples (4). In pregnant women, TCS in maternal serum can reach the fetus through the placental barrier and can be detected in umbilical cord blood (5). According to some research, maternal exposure to TCS during pregnancy can affect neonatal birth outcomes such as head circumference, birth weight, and birth length (6-8). Among these, abnormal birth weight is an important birth outcome that leads to a series of consequences. Babies born with low birth weight face increased risks of adverse health effects over a lifetime, including cerebral palsy, neurological disabilities, and vision or hearing impairment (9), whereas those born with higher birth weight are more likely to develop obesity and breast cancer (10-13). Some studies have suggested that maternal TCS exposure can lead to increased (14) or decreased (6, 7, 15, 16) birth weight; however, other articles have shown no significant relationship between maternal TCS exposure and neonatal birth weight (17-20). In addition, children are exposed to TCS during childhood (21), and TCS exposure during this period may be related to children's growth and development (16, 22). Childhood obesity is an important risk factor for children's health: in addition to the psychological consequences, obesity increases the risk of type 2 diabetes mellitus, hyperlipidemia, hypertension, cardiovascular disease, sleep apnea, cancer, and arthritis (23). Yet the relationship between children's TCS exposure level and children's BMI remains controversial (22, 24). Therefore, exploring the effects of exposure to TCS during pregnancy on neonatal birth weight and of exposure to TCS during childhood on children's BMI is a priority. However, systematic studies on this topic are lacking. The purpose of our research is mainly to compare the effects of TCS exposure on growth indicators before and after birth. We also explored the heterogeneity of the included studies and performed subgroup analyses of factors that may affect the outcomes.
Search Strategy
In our study, we used the following search terms to retrieve the relevant literature in three electronic bibliographic databases: PubMed, Cochrane Library, and Web of Science.
TCS Exposure During Pregnancy on Neonatal Birth Weight
The inclusion criteria for the studies were:
• Full text of the literature available;
• Pregnant women (not experimental animals) as the subjects;
• Exposure of the pregnant women to TCS during pregnancy explicitly specified;
• Data showing the correlation between TCS concentration and neonatal birth weight provided, such as the estimate (ES) and 95% confidence interval (95% CI);
• If the same population was used in different studies, the recent one with the larger sample size was selected;
• No defects in the research design and high literature quality.
TCS Exposure During Childhood on Children's BMI
The inclusion criteria for the studies were:
• Full text of the literature available;
• Infants or children (not experimental animals) as the subjects (population);
• Exposure of the infants or children to TCS explicitly specified (exposure);
• Data showing the correlation between TCS concentration and children's BMI provided, such as the ES and 95% CI (outcome);
• If the same population was used in different studies, the recent one with the larger sample size was selected;
• No defects in the research design and high literature quality.
Exclusion Criteria
The exclusion criteria for the studies were: (1) not conforming to the research topic; (2) animal studies, conference abstracts, lectures, editorial materials or comments, and so on; (3) studies with design defects and poor quality; (4) subjects exposed to multiple environmental endocrine disruptors; (5) raw data unavailable; (6) unpublished studies.
Study Selection and Data Extraction
At first, the titles and abstracts of all identified publications were independently reviewed by two authors against the inclusion and exclusion criteria to establish the eligibility of the articles. Further screening of the remaining full-text papers was then implemented, and the references of the selected articles were evaluated, after which we identified the literature to be finally adopted. Studies were exported to EndNote X7, and duplicates were automatically removed; to ensure the accuracy of the results, the two authors also checked and corrected for duplicates manually. According to the literature inclusion criteria, we used a predefined template to extract information (25). Any disagreement between the two assessors in any of the above was settled by discussion with a third evaluator. We then extracted the information from each article, including author name, publication time, study design, location, sample size, outcome, exposure distribution, effect size, and covariate adjustment, into our predesigned spreadsheet.
Quality Evaluation
Referring to the Newcastle-Ottawa Scale (NOS), the two authors independently evaluated the quality of each article (26). The NOS consists of three categories (selection of subjects, comparability, and outcomes) and eight items, and scores range from 0 to 9 stars: 4 stars for selection, 2 for comparability, and 3 for outcomes. We rated each study against these criteria to estimate whether it could be included. If the total number of stars was ≥6, we considered the research quality to be high; if it was 3-5, moderate; otherwise, the research quality was too low and the study was excluded (27). Disputes regarding the grading of the literature were discussed and resolved together.
Statistical Method
All data acquisition and analysis were processed in the software Stata 16.0. We entered the relevant information extracted from the literature into spreadsheets and used the meta-analysis module to perform the statistical analysis. We used the adjusted estimates and 95% CIs for both birth weight and children's BMI as the effect values for TCS exposure during pregnancy on neonatal birth weight and children's TCS exposure on children's BMI, respectively.
Specific steps: (1) Subgroup analysis: we explored the characteristics of the original studies and performed a hierarchical analysis based on the different adjustment methods for TCS, the gender of the infants, and the trimester in which TCS was detected, and calculated the ES and 95% CI of each subgroup. (2) Sensitivity analysis: we used a one-by-one elimination method for sensitivity analysis and chose a random-effects model to analyze potential sources of instability in the meta-analysis and to evaluate the impact of publication bias on the overall results. (3) Heterogeneity test: if P < 0.05 and I² ≥ 50%, the articles were regarded as having high heterogeneity. Because heterogeneity drives the choice of statistical method for summarizing the data, random-effects models are preferred over fixed-effects models when heterogeneity is expected (28); otherwise, we chose fixed-effects models. (4) Test of publication bias: Begg's funnel plot and Egger's method were used to qualitatively and quantitatively evaluate the publication bias of the literature data. The funnel plot should resemble a symmetrical inverted funnel; small samples have large dispersion and therefore tend to sit at the bottom of the funnel plot, while large samples have small dispersion and sit at the top. Egger's method evaluates whether there is asymmetry related to standard error in the results. Since fewer than 10 studies were included, we used only Begg's funnel plots to analyze publication bias.
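As an illustration of steps (2) and (3), the sketch below implements DerSimonian-Laird random-effects pooling together with Cochran's Q and I²; the effect sizes and standard errors are invented for demonstration and are not data from the included studies.

```python
import numpy as np

def pool(es, se):
    """DerSimonian-Laird random-effects pooling with Cochran's Q and I^2.
    Per the decision rule above, the random-effects estimate is used when
    I^2 >= 50%; otherwise the fixed-effect estimate (tau^2 = 0) applies."""
    w = 1 / se**2                                    # fixed-effect weights
    mu_fe = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - mu_fe) ** 2)                # Cochran's Q
    k = len(es)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)                        # random-effects weights
    mu_re = np.sum(w_re * es) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    ci = (mu_re - 1.96 * se_re, mu_re + 1.96 * se_re)
    return mu_re, ci, i2, tau2

# Hypothetical per-study effect sizes (grams) and standard errors:
es = np.array([11.85, -3.2, 0.5, 4.1])
se = np.array([6.3, 8.0, 5.5, 7.2])
print(pool(es, se))
```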
TCS Exposure During Pregnancy on Neonatal Birth Weight
A total of 658 studies were initially identified, and 435 records remained after eliminating duplicates. After searching the full text for the keyword (TCS), 30 records remained. By reading titles and abstracts, 10 records were excluded due to irrelevant exposures or outcomes, and eight records were animal experiments. The remaining 12 records were assessed by reviewing the full papers. Of these 12, five were not considered: two had too small a sample size, and the other three reported changes in z-values as the results. Finally, seven articles were identified and included in our study (7, 14, 15, 17-20) (Figure 1). All seven studies used urinary TCS concentrations as the exposure factor; four of them used creatinine correction as the adjustment method for TCS, and the others used specific gravity correction. Additionally, two of the seven studies analyzed male and female infants separately, so we have nine sets of data.
TCS Exposure During Childhood on Children's BMI
A total of 862 studies were initially identified, and 475 records remained after eliminating duplicates. By reading titles and abstracts, 361 records were excluded due to irrelevant exposures or outcomes, 88 records were animal experiments, eight concerned plants, and 15 concerned microbes. Finally, three articles were identified and included in our study (29-31) (Figure 2). As before, all of these studies used urinary TCS concentrations as the exposure factor, and TCS was adjusted for creatinine.
Quality Evaluation
The methodological quality of the included studies was assessed using the NOS. The characteristics of these studies are displayed in Table 1; all of them are cohort studies. The quality assessment scores of the included studies ranged from 6 to 8, so we considered the research quality to be high.
Relationship Between Maternal Urinary TCS Level and Neonatal Birthweight
A total of seven articles were combined to analyze the relationship between TCS exposure during pregnancy and neonatal birth weight. Among them, two articles studied male and female infants separately, so we have nine sets of data. The forest plot showed that only one article (20) indicated that exposure to TCS during pregnancy may lead to an increase in neonatal birth weight, with ES = 11.85 (95% CI: 0.06 to 24.70), while the other studies found no relationship. The pooled results were as follows: ES = 0.41 (95% CI: -11.97 to 12.78), P = 0.196, I² < 50%. The forest plot (Figure 3) shows large overlaps in the CIs of these studies, indicating little heterogeneity among them, and there was no significant association between maternal urinary TCS level and neonatal birth weight.
Relationship Between Children's Urine TCS Level and BMI
A total of three articles were combined to analyze the relationship between TCS exposure during childhood and children's BMI. TCS was adjusted for urine creatinine in all studies. All three articles found no relevant relationship between TCS exposure during childhood and children's BMI (29-31). The pooled results were as follows: ES = 0.03 (95% CI: -0.54 to 0.60), P < 0.001, I² ≥ 50%, indicating great heterogeneity among these studies, and there was no association between TCS exposure during childhood and children's BMI (Figure 4).
Subgroup Analysis-The Relationship Between Exposure to TCS of Different Adjusted Methods During Pregnancy and Neonatal Birth Weight
We divided the included studies into two groups based on whether TCS was adjusted for urine creatinine or for specific gravity (SG). The results were ES = 2.28 (95% CI: -50.97 to 55.53), P = 0.229, I² < 50% and ES = 0.33 (95% CI: -12.19 to 12.84), P = 0.140, I² < 50%, respectively, showing that whether urine creatinine or SG was used as the adjustment method, maternal urinary TCS level had no significant effect on neonatal birth weight. The meta-analysis forest plot is shown in Figure 5.
Sensitivity Analyses
In the meta-analyses of the association between maternal urinary TCS level and neonatal birth weight, and between children's urinary TCS level and children's BMI, several articles were combined, which may introduce heterogeneity between studies. However, the pooled ES values and 95% CIs before and after the exclusion of each study were essentially the same, indicating that the original meta-analysis was reliable (Figures 6, 7).
Publication Bias
Publication bias was assessed for TCS exposure during pregnancy and birth weight, and for TCS exposure during childhood and children's BMI, using Begg's test and funnel plots (P = 0.754, Figure 8; P = 1.000, Figure 9). In Figure 9, all three points fall outside the funnel, suggesting possible heterogeneity, but too few studies were included, which may bias the results.
DISCUSSION
This meta-analysis included seven studies that evaluated the effect of TCS exposure during pregnancy on neonatal birth weight and three studies that evaluated the effect of TCS exposure during childhood on children's BMI. The results showed no significant association between exposure to TCS during pregnancy and neonatal birth weight, nor between exposure to TCS during childhood and children's BMI. Sensitivity analysis showed that after one-by-one elimination, the results were consistent.
Differences in the characteristics of participants, sampling times, and testing methods might lead to different results in the included studies. In the first part of our meta-analysis, Geer et al. (15) and Messerlian et al. (7) found that TCS exposure would decrease neonatal birth weight, while Ouyang et al. (14) found a positive association between maternal TCS and birth weight in female infants and a non-significant inverse association in male infants. These results differ from the conclusions of several other studies (17-20). Moreover, the participants in the two studies of Messerlian et al. (7) and Philippat et al. (18) were predominantly white people from the USA who were older, earned more, and mostly had a college education. Calafat et al. (32) found that participants who were white, older, more educated, married, and had a higher household income had higher urinary TCS concentrations, and their infants may have been more affected compared with those of other women. In Ouyang et al. (14), 14.6% of participants had a BMI of 23-24.9 kg/m², 10.4% were overweight with a BMI ≥ 25 kg/m², and 12.7% were diagnosed with gestational diabetes mellitus; we considered that this may undermine the reliability of their results. Furthermore, different studies adjusted for different potential confounders, which may also contribute to the inconsistent results. For example, the adjustment variables in Geer et al. (15) were only maternal age group, nativity, and neonate gender, while those of Messerlian et al. (7) were maternal BMI, maternal education, and season (ordinal). On the other hand, the adjustment variables of Ouyang et al. (14) were urinary creatinine, passive smoking, parity, and prepregnancy BMI categories. This may also be why the conclusions of Geer et al. (15), Messerlian et al. (7), and Ouyang et al. (14) differed from the others.
In our second meta-analysis, Li et al. (30) found that TCS exposure was inversely associated with BMI, whereas Buser et al. (29) and Deierlein et al. (31) showed no significant association between TCS and children's BMI. Furthermore, Li et al. (30) analyzed a single spot urine sample, while the other studies measured urine samples multiple times at different gestational weeks, which may have caused some bias because TCS is usually rapidly metabolized and excreted (33). The participants in the study by Deierlein et al. (31) were girls from the USA who were predominantly exposed to TCS during childhood, while both boys and girls during childhood and adolescence were surveyed in the other studies. Differences in age and gender may play key roles in the results. Because few studies have examined the relationship between children's exposure to TCS and BMI by gender, we could not determine the effect of TCS on BMI in children of different genders.
TCS is an environmental endocrine disruptor with antibacterial activity. Endocrine-disrupting chemicals may alter metabolism via estrogenic, antiestrogenic, or antiandrogenic action by interfering with other hormone functions (34). However, studies suggest that levels of these chemicals are reasonably stable over time for ranking purposes and have acceptable intraindividual variability over more than a year (33). This may explain why exposure to TCS during childhood showed no relationship with children's BMI. Likewise, exposure to TCS during pregnancy did not affect neonatal birth weight, probably owing to the placental protection mechanism (35, 36).
Our meta-analysis found no significant association between maternal urinary TCS level during pregnancy and neonatal birth weight, which is consistent with the latest published meta-analysis (37). However, our study had stricter inclusion and exclusion criteria: we conducted unified classification and screening according to the types of research data and adjustment methods, and we performed subgroup and sensitivity analyses to explore the sources of heterogeneity, which Zhong et al. (37) did not. Our result differs from an article published in June 2020 (38), which reported that exposure to TCS during pregnancy increases the birth weight of newborns; however, the CI for that study's birth weight result crossed zero in the forest plot, indicating that the result was not statistically significant.
The publication bias of the study by Deierlein et al. (31) was markedly greater than that of the others. We compared the study by Deierlein et al. (31) with the other two articles and found that TCS was adjusted for relatively simple covariates, including race, age, educational level, socioeconomic status, and baseline BMI, which may explain the asymmetry of the funnel plot.
The strengths of our study were as follows. First, this meta-analysis separately analyzed the effects of TCS exposure during pregnancy and the effects of TCS exposure during childhood. Second, subgroup analyses were performed to explore the relationship between TCS exposure and neonatal birth weight by different adjustment methods. Third, the most relevant studies were high-quality prospective cohort studies, which enhances the credibility of our study. Fourth, in the first meta-analysis, evaluating the relationship between maternal urinary TCS level and neonatal birth weight, we classified the included studies according to the TCS adjustment methods and performed the meta-analysis accordingly; in the second, urinary TCS concentrations were all adjusted for creatinine. All of this improved the reliability of the results. Finally, we used birth weight as the indicator and referred to a unified calibration method and unit.
Our study also had limitations. First, birth weight and children's BMI may be influenced by other confounding factors, such as other environmental chemicals, maternal or child nutrition, maternal or child age, and so on; inadequate adjustment for confounders might cause overestimation or underestimation of the actual effect of TCS exposure on the outcomes. Second, the included studies were generally from the United States, China, and Europe, and the lack of relevant research in other regions may introduce regional bias. Third, the two meta-analyses contained few records, which may lead to unstable results. Moreover, all included studies used log- or ln-transformed TCS (given its skewed distribution), except for one study in which TCS was not transformed.
CONCLUSION
In summary, our meta-analysis found no significant association between exposure to TCS during pregnancy and neonatal birth weight, nor between TCS exposure during childhood and children's BMI. We recommend broader, larger prospective cohort studies to assess the relationship between prenatal TCS exposure and neonatal birth outcomes. In addition, more attention should be paid to TCS exposure during childhood, and children's exposure to TCS should be reduced.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
JL carefully read and screened the literature related to the research direction and collected the data. DC made the figures and tables. JL and DC were the main authors of the manuscript. YH polished the language and optimized the format of the first draft. During manuscript review, the language was checked and polished by FB. TC reviewed and revised the final manuscript and gave advice during the submission process. XW was the final reviewer. All authors contributed to the article and approved the submitted version. | 2021-07-08T13:29:57.018Z | 2021-07-08T00:00:00.000 | {
"year": 2021,
"sha1": "bc7e17c2bb3cca8612164fbc7977d1a4ddc9b576",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2021.648196/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc7e17c2bb3cca8612164fbc7977d1a4ddc9b576",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119328682 | pes2o/s2orc | v3-fos-license | On the Kauffman-Jones polynomial for virtual singular knots and links
We construct an extended Kauffman-Jones polynomial for virtual singular knots and links using a certain type of bivalent graphs. We prove that the resulting Laurent polynomial in indeterminate A decomposes non-trivially into two components with respect to powers of A modulo four, and that both components are invariants under the Reidemeister-type moves for virtual singular link diagrams.
Introduction
A virtual singular link diagram is a decorated immersion of k (k ∈ $\mathbb{N}$) disjoint copies of $S^1$ into $\mathbb{R}^2$, with finitely many transverse double points, each of which carries the information of an over/under, singular, or virtual crossing, as indicated in Figure 1. If k = 1, we have a virtual singular knot diagram, or equivalently, a virtual singular link diagram of one component. The over/under markings indicate the classical crossings. A filled-in circle is used to represent a singular crossing. Virtual crossings are represented by placing a small circle around the point where the two arcs meet transversely. Unless otherwise specified, we will use the term 'link' to refer to both knots and links, for simplicity.
Similar to the case of virtual knot theory, there is a useful topological interpretation for virtual singular knot theory in terms of embeddings of singular links in thickened surfaces S g × I, where S g is a compact oriented surface of genus g and I is the unit interval. A diagram of a virtual singular link in S g × I is then drawn on S g , and virtual crossings are merely byproducts of the projection of the embedding into the surface S g .
Two virtual singular link diagrams are said to be equivalent (or ambient isotopic) if they are related by a finite sequence of the extended Reidemeister moves depicted in Figure 2 (where only one possible choice of crossings is indicated in the diagrams). The move $RS_2$ exemplifies that singular crossings can be regarded as rigid disks; equivalently, there is a fixed ordering of the four strands meeting at a singular crossing, and the cyclic ordering is determined via the rigidity of the disk. The set of moves in Figure 2 defines an equivalence relation on the set of virtual singular link diagrams. A virtual singular link (or virtual singular link-type) is then the equivalence class of a virtual singular link diagram. Throughout this paper, we work with oriented virtual singular links. The extended Reidemeister moves in Figure 2 are still used to relate two equivalent virtual singular link diagrams, now with given orientations on the strands, where all possible choices of orientation are considered and where the fixed orientations on the strands for the two diagrams in a given move must agree.
In [4], L. Kauffman constructed a polynomial invariant for oriented virtual links, denoted $f_L(A)$, which is an extension from classical links to virtual links of the polynomial $f_{[L]}(A)$ constructed by the same author in [3]. Recall that the polynomial $f_{[L]}(A)$ provides a state model for the Jones polynomial [1]. We refer to the polynomials $f_L(A)$ and $f_{[L]}(A)$ as the Kauffman-Jones polynomials for virtual links and classical links, respectively.
The goal of this paper is to go one step further and extend the polynomial $f_L(A)$ to virtual singular links. In doing so, we provide two definitions for our polynomial, which we denote by $\langle L \rangle$, where L is a virtual singular link. The first definition uses skein relations, while the second provides a state summation formula for the new polynomial $\langle L \rangle$. Motivated by N. Kamada's and Y. Miyazawa's work in [2], where they show that the Kauffman-Jones polynomial for virtual links splits non-trivially with respect to the powers of A modulo four, we prove that the same holds for our polynomial. Our approach and proofs are somewhat different from those in [2], as we make intensive use of our defining skein relations for $\langle L \rangle$. Specifically, we show that if L is a virtual singular link with k components, then $\langle L \rangle$ splits as a sum $\phi(L) + \psi(L)$ whose two components involve powers of A lying in distinct residue classes modulo four. Both polynomials $\phi(L)$ and $\psi(L)$ are invariants for virtual singular links, as is $\langle L \rangle$.
Kauffman-Jones polynomial for virtual singular links
In this section, we present our approach to defining the Kauffman-Jones polynomial for virtual singular links.

Definition 1. We call a purely virtual magnetic graph an immersed directed graph in $\mathbb{R}^2$ with bivalent vertices such that the edges are oriented alternately, as shown in Figure 4(a), and with self-intersections represented as virtual crossings. Equivalently, each bivalent vertex is either a sink, meaning the edges are oriented towards the vertex, or a source, where the edges are oriented away from the vertex.
We do allow components of a purely virtual magnetic graph to consist of oriented closed loops without vertices, as shown in Figure 4(b). We remark that N. Kamada and Y. Miyazawa used the term "magnetic graph diagrams" in their work [2] to refer to such graphs.
Given a virtual singular link L with diagram D, we resolve all of the classical and singular crossings in D in two ways, as shown in Figure 5, leaving the virtual crossings in place. We refer to the two resolutions of a crossing as the oriented and disoriented resolutions, respectively. By resolving each of the classical and singular crossings in the diagram D in one of the two ways shown above, we obtain a Kauffman-Jones state associated with D; that is, a Kauffman-Jones state is a purely virtual magnetic graph. The Kauffman-Jones states of D receive certain weights, which are polynomials in $\mathbb{Z}[A^2, A^{-2}]$, uniquely determined by the skein relations depicted in Figure 6. The Kauffman-Jones polynomial of D is then a Laurent polynomial, denoted $\langle D \rangle$, defined as a formal $\mathbb{Z}[A^2, A^{-2}]$-linear combination of the evaluations of all of the Kauffman-Jones states associated with D.
We remind the reader that the diagrams on both sides of a skein relation represent parts of larger diagrams that are identical except near a point, where they look as indicated in the skein relation. The Kauffman-Jones states associated with D are uniquely evaluated using the graph skein relations depicted in Figure 7. Note that the first set of relations in Figure 7 says that we can remove or introduce pairs of adjacent bivalent vertices without changing the evaluation of a state, as long as the two vertices are oppositely oriented (one is a source and the other is a sink). Moreover, the last skein relation says that the Kauffman-Jones polynomial is multiplicative with respect to disjoint unions. Finally, any closed loop, with or without virtual crossings, is evaluated to $-A^2 - A^{-2}$. Equivalently, Kauffman-Jones states are regarded up to the pure virtual moves $V_1$, $V_2$ and $V_{3-v}$. We note that, up to the first two sets of graph skein relations and the pure virtual moves ($V_1$, $V_2$ and $V_{3-v}$), each Kauffman-Jones state is equivalent to a disjoint collection of directed circles in the plane.
A quick analysis of the defining skein relations for classical and singular crossings implies that the following skein relation holds, which will come in handy in proofs:

Theorem 1. The Kauffman-Jones polynomial $\langle \cdot \rangle$ as defined by the skein relations in Figures 6 and 7 is an ambient isotopy invariant for virtual singular links.
Proof. We need to show that $\langle \cdot \rangle$ is invariant under the extended Reidemeister moves for virtual singular link diagrams given in Figure 2.
By our definition of the evaluation of Kauffman-Jones states, $\langle \cdot \rangle$ is invariant under the virtual moves $V_1$, $V_2$ and $V_{3-v}$.
We consider first the Reidemeister move $R_1$ involving a positive classical crossing; the case of a negative classical crossing follows the same format, and thus we omit it here. We use the appropriate skein relation for the positive crossing, followed by the first and last graph skein relations for evaluating Kauffman-Jones states, as shown below.
Next, we consider the Reidemeister move $R_2$ where both strands are oriented similarly, say upwards. We begin by resolving the top negative classical crossing. We then resolve the positive classical crossing in each of the resulting diagrams to obtain a linear combination of evaluations of Kauffman-Jones states associated with the original diagram, and then employ any necessary graph skein relations. The invariance under the Reidemeister move $R_2$ with oppositely oriented strands is verified in the same way. Next, we consider the Reidemeister move $R_3$. Resolving one of the classical crossings in each diagram, and using that the polynomial is invariant under the Reidemeister move $R_2$, results in two equalities in which the diagrams associated with the weight $-A^2$ have the same evaluations, since those diagrams differ by the move $R_2$. Thus, it remains to verify the identity (2.2). Applying the defining skein relations for the crossings in the left-hand side diagram of (2.2), and likewise for the right-hand side diagram, we obtain two linear combinations of states. Using planar isotopy, the diagrams with weights $A^2$ and, respectively, $A^{-2}$ on both sides of the desired identity are the same. Moreover, the first (or the last) diagram associated to the left-hand side of the identity is the same as the last (or the first) diagram resulting from the right-hand side of the identity. Thus, the identity (2.2) holds, and therefore the move $R_3$ preserves the polynomial.
It is known that invariance under the other seven oriented versions of the Reidemeister move $R_3$ follows from invariance under the Reidemeister moves $R_2$ and the version of the Reidemeister move $R_3$ that we have verified.
To prove the invariance of $\langle \cdot \rangle$ under the move $RS_1$ for virtual singular link diagrams, we use the skein relation (2.1) and the fact that the polynomial is invariant under the Reidemeister move $R_3$. A similar approach is used to show invariance under the move $RS_2$: first, we apply the skein relation (2.1) to the singular crossing in the diagram on the left-hand side of the move; we then use that the polynomial is invariant under the Reidemeister move $R_2$ to pull apart the strands in the second diagram of the resulting equality, and then employ the skein relation (2.1) again, to obtain the evaluation of the diagram on the right-hand side of the move, as desired. The invariance under the move $V_{3-c}$ follows from the defining skein relations for classical crossings and planar isotopy. Here, we use that the diagrams receiving the weight $-A^{-4}$ have the same evaluations, as the number of loops in the larger diagrams (which are identical except in the neighborhood shown) remains the same when the horizontal arc crosses virtually the top or the bottom two edges of the disoriented resolution. Lastly, the invariance of $\langle \cdot \rangle$ under the move $V_{3-s}$ follows from the skein relation (2.1) for the singular crossing and the now-established invariance of $\langle \cdot \rangle$ under the move $V_{3-c}$. This completes the proof.
With Theorem 1 at hand, we can now define the Kauffman-Jones polynomial of a virtual singular link L as $\langle L \rangle := \langle D \rangle$, where D is any diagram representing L. Remark 1. If L is an oriented virtual link (that is, L contains no singular crossings), our polynomial $\langle L \rangle$ is the same as the polynomial $f_L(A)$ introduced in L. Kauffman's work [4]. Moreover, if L is an oriented classical link, then $\langle L \rangle$ is the polynomial $f_{[L]}(A)$ constructed by L. Kauffman in [3], which provides a state model for the Jones polynomial [1]. Example 1. We evaluate the Kauffman-Jones polynomial of an oriented figure-eight knot containing one classical crossing, one singular crossing, and two virtual crossings. We begin by applying the defining skein relation for the singular crossing, followed by that for the classical negative crossing, as shown below. We end this section by giving a state sum formula for the Kauffman-Jones polynomial of a virtual singular link. Before we do so, we need to introduce some new notation.
Consider a Kauffman-Jones state S obtained from a virtual singular link diagram D. Let a(S) denote the number of negative classical crossings in D minus the number of positive classical crossings that received an oriented resolution to obtain S. Let b(S) be the number of negative classical crossings in D minus the number of positive crossings that received a disoriented resolution to obtain S. Similarly, let α(S) denote the number of singular crossings in D that received an oriented resolution, and let β(S) denote the number of singular crossings that received a disoriented resolution to produce the state S. Finally, denote by ||S|| the number of components of S (immersed closed curves with self-intersections represented as virtual crossings), and let c(D) denote the number of classical crossings in D.
With these definitions at hand, together with the skein relations given in Figures 6 and 7 (which define our polynomial), we obtain the following state sum formula (2.3) for the Kauffman-Jones polynomial of a virtual singular link diagram D, where the sum runs over all states S associated with D:
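Equation (2.3) itself is not reproduced above, but its general shape can be illustrated on the simplest classical case: the Kauffman bracket state sum $\langle D \rangle = \sum_S A^{(\#A\text{-smoothings}) - (\#B\text{-smoothings})} (-A^2 - A^{-2})^{||S||-1}$ over the two smoothings of each crossing. The Python sketch below implements that classical sum under our own hypothetical diagram encoding (each smoothing is listed as the arc pairs it joins); the virtual singular version would additionally track the singular-crossing counts α(S) and β(S).

```python
import itertools
from sympy import symbols, expand

A = symbols('A')
d = -A**2 - A**(-2)  # value of a closed loop

def loops(n_arcs, joins):
    """Count circles after identifying arc labels per `joins`, via union-find."""
    parent = list(range(n_arcs))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in joins:
        parent[find(i)] = find(j)
    return len({find(x) for x in range(n_arcs)})

def bracket(n_arcs, crossings):
    """Classical Kauffman bracket state sum: each crossing contributes its 'A'
    or 'B' smoothing (given as arc-joining pairs), weighted A or A**-1, and
    each state S contributes d**(||S|| - 1)."""
    total = 0
    for choice in itertools.product('AB', repeat=len(crossings)):
        joins, weight = [], 1
        for c, s in zip(crossings, choice):
            joins += c[s]
            weight *= A if s == 'A' else A**(-1)
        total += weight * d**(loops(n_arcs, joins) - 1)
    return expand(total)

# Unknot with one positive kink: arcs 0 and 1; one smoothing leaves two
# circles, the other merges them (which does what depends on conventions).
kink = [{'A': [(0, 0), (1, 1)], 'B': [(0, 1)]}]
print(bracket(2, kink))  # -A**3, matching <kink> = -A^3 <unknot>
```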
Another approach to the Kauffman-Jones polynomial
In this section, we take another approach to evaluating the Kauffman-Jones polynomial of a virtual singular link, inspired by the work of N. Kamada and Y. Miyazawa in [2]. With this new perspective, we show that the Kauffman-Jones polynomial of a virtual singular link decomposes non-trivially into two parts, one belonging to $\mathbb{Z}[A^4, A^{-4}]$ and the other to $\mathbb{Z}[A^4, A^{-4}] \cdot A^2$. Given a virtual singular link diagram D, we still resolve the singular and classical crossings in D using the skein relations given in Figure 6, but we use a slightly different method for evaluating the resulting Kauffman-Jones states (recall that these are purely virtual magnetic graphs).
Consider a purely virtual magnetic graph S. Let E(S) denote the set of edges of S, where an edge is an arc between two adjacent bivalent vertices in S. An immersed closed curve with no bivalent vertices is considered to be an edge.
Definition 2.
A weight map of a purely virtual magnetic graph S is a function τ : E(S) → {1, −1} such that adjacent edges e and e′ are assigned different values, that is, τ(e) ≠ τ(e′). The value assigned to an edge e is called the weight of e with respect to τ. Given a weight map τ and a virtual crossing v in S, the parity of v with respect to τ, denoted $i_\tau(v)$, is defined as $i_\tau(v) = \tau(e)\,\tau(e')$, where e and e′ are the two edges meeting at v. The parity of S (with respect to τ), denoted by $i_\tau(S)$, is defined as the product of the parities of all virtual crossings in S. We remark that an immersed curve S without bivalent vertices has parity 1.
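As a quick illustration of Definition 2 and of Lemma 1 below, the following Python sketch enumerates all admissible weight maps of a small, made-up purely virtual magnetic graph — two components, four edges, and four inter-component virtual crossings — and confirms that every choice yields the same parity. The graph data are hypothetical, not taken from the paper's figures.

```python
from itertools import product

# Two components with alternating edges: component 0 has edges a, b and
# component 1 has edges c, d. The components cross virtually four times
# (an even number, as required for two components of such a graph).
crossings = [('a', 'c'), ('b', 'c'), ('a', 'd'), ('b', 'd')]

def parity(tau):
    """i_tau(S): product over virtual crossings of tau(e) * tau(e')."""
    p = 1
    for e1, e2 in crossings:
        p *= tau[e1] * tau[e2]
    return p

# Adjacent edges must alternate, so an admissible weight map is determined
# by one sign per component; enumerate all four choices.
for s0, s1 in product([1, -1], repeat=2):
    tau = {'a': s0, 'b': -s0, 'c': s1, 'd': -s1}
    print((s0, s1), parity(tau))
# Every admissible choice prints the same parity, as Lemma 1 asserts.
```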
As we now show, the parity of a purely virtual magnetic graph S is independent of the choice of weight map τ, and consequently we will use the notation i(S) instead of $i_\tau(S)$. The proof of the following statement was given in [2]; we include it here to keep the paper self-contained. Lemma 1. Let S be a purely virtual magnetic graph. Then the parity of S is independent of the choice of weight map.
Proof. Let τ and ω be two weight maps of S. By definition, the only possible values assigned to the edges of S are 1 or −1. We consider a partition of E(S) into two subsets: let $E_1(S)$ be the set of all edges e ∈ E(S) such that τ(e) = ω(e), and $E_2(S)$ the set of all edges such that τ(e) ≠ ω(e). Note that τ(e) ≠ ω(e) is equivalent to τ(e) = −ω(e). Consider the components of S (that is, the immersed closed curves in S). All of the edges of a given component of S belong either to $E_1(S)$ or to $E_2(S)$. To see this, consider an edge e of a component of S. Suppose that e ∈ $E_1(S)$, so τ(e) = ω(e), and consider an adjacent edge e′. If τ(e) = ω(e) = a, then by the definition of a weight map we have τ(e′) = ω(e′) = −a, and thus e′ ∈ $E_1(S)$. We repeat this argument as we move through the edges of the component of S to conclude that all of its edges belong to $E_1(S)$. The same argument works when e ∈ $E_2(S)$. Thus all of the edges of a component belong either to $E_1(S)$ or to $E_2(S)$.
Let $S_i$ denote the components of S whose edges belong to $E_i(S)$, for i = 1, 2, and consider a virtual crossing v of S. The parity of v with respect to τ differs from the parity of v with respect to ω if and only if v is a crossing between an edge from $S_1$ and an edge from $S_2$. However, two components of S always intersect in an even number of virtual crossings; therefore, the product of the parities of the virtual crossings that could differ (with respect to τ and ω) equals 1, since there is an even number of such crossings. Hence, $i_\tau(S) = i_\omega(S)$.

Definition 3. An enhanced purely virtual magnetic graph is a purely virtual magnetic graph S together with a weight map of S.
We are now ready to define the new method for evaluating an enhanced purely virtual magnetic graph. As before, we denote by ||S|| the number of components in S.
Definition 4.
Given an enhanced purely virtual magnetic graph S with parity i(S), the enhanced evaluation of S, denoted R(S), is given by the following formula, where A, $A^{-1}$ and h are commuting parameters:
Consider a virtual singular link diagram D and its associated Kauffman-Jones states together with a weight map for each of the states, which now are enhanced Kauffman-Jones states. We employ the symbols used in the state sum formula given in Equation (2.3), to define a new polynomial associated to D.
Definition 5. Given a virtual singular link diagram D, we define a polynomial R(D) ∈ $\mathbb{Z}[A, A^{-1}, h]$ as the following sum, taken over all enhanced Kauffman-Jones states S associated with D. The polynomial R(D) can be computed by applying the skein relations of Figure 6, followed by the application of Definition 4 to evaluate the resulting enhanced Kauffman-Jones states corresponding to D.
Proof. The statement follows easily by comparing the polynomials $\langle D \rangle$ and R(D).
Proposition 2. The enhanced evaluation R for enhanced purely virtual magnetic graphs satisfies the following skein relations. Moreover, the parity for enhanced purely virtual magnetic graphs satisfies the following skein relations.

Proof. These skein relations follow at once from Definition 4 and the definition of the parity of enhanced purely virtual magnetic graphs.
Theorem 2. The polynomial R( · ) is an ambient isotopy invariant for virtual singular links.
Proof. We have that $R(D)|_{h=1} = \langle D \rangle$, and the two polynomials $\langle D \rangle$ and R(D) differ only in how the Kauffman-Jones states are evaluated. In particular, the polynomial R satisfies the same skein relations (for the classical and singular crossings) as the polynomial $\langle D \rangle$. Since in the proof of Theorem 1 we used these skein relations to show that $\langle D \rangle$ is invariant under the extended Reidemeister moves, it is clear that the polynomial R(·) is invariant under the moves that involve only classical and singular crossings. Therefore, it remains to show that R(·) is unchanged under the extended Reidemeister moves that involve virtual crossings, namely the last five moves given in Figure 2. Invariance under the move $V_{3-v}$: Consider again two diagrams D and D′ that are identical except near a point where they differ as shown below. No virtual crossings are added or removed during the move, and the number of components in the resulting states of D and D′ is unaffected. As before, there is a one-to-one correspondence between the states S of D and S′ of D′, respectively, where each corresponding pair S and S′ differ by the virtual move $V_{3-v}$. We need to show that the parities of the three crossings involved in the move are unchanged when the move is applied, implying that the overall parities of the corresponding states S and S′ are the same. Given an assigned weight map of a state, the horizontal strand intersecting virtually the two diagonal strands has the same weight whether it lies above or below the central virtual crossing. Moreover, the move $V_{3-v}$ does not affect the weights of the diagonal strands meeting at the central virtual crossing. It follows that i(S) = i(S′) for all corresponding pairs of states S and S′, and hence R(D) = R(D′) in this case as well.
Invariance under the move $V_{3-r}$: Let D and D′ be virtual singular link diagrams that differ only in a small neighborhood, as shown below. The second diagrams in the two resulting skein relations contain the same number of components, and we need to verify that they have the same parity. Since these two diagrams are identical outside of the neighborhood shown, the only chance of having different parities depends on the parities of the virtual crossings shown. Consider, in both diagrams, the two adjacent edges intersecting virtually with the horizontal arc. By the definition of a weight map, the two adjacent edges receive opposite weights; therefore, whatever the weight of the horizontal strand, the two shown virtual crossings in either diagram have opposite parities. Hence, it follows that R(D) = R(D′). The case when the classical crossing involved in the move is negative follows in a similar fashion. Invariance under the move $V_{3-s}$: The invariance under the move $V_{3-s}$ follows using the skein relation for the singular crossing and the fact that R is invariant under the move $R_{3-c}$.
This completes the proof.
Remark 5. The definition of the polynomial R(D) implies the following.

Proof. It is clear that the polynomials φ(·) and ψ(·) satisfy the same skein relations for the classical and singular crossings as the polynomials R(·) and $\langle \cdot \rangle$. Then, the same proof as in Theorem 1 can be used to show that φ(·) and ψ(·) are invariant under the classical Reidemeister moves $R_1$, $R_2$ and $R_3$, as well as the moves $RS_1$ and $RS_2$. Moreover, the proof of Theorem 2 implies that φ(·) and ψ(·) are invariant under the moves $V_1$, $V_2$, $V_{3-v}$, $V_{3-r}$ and $V_{3-s}$.

The second column of Table 1 lists the enhanced Kauffman-Jones states of D (with certain weight maps), while the third column contains the weighted state contributions R(S) to R(D), for each of the enhanced states S of D. Taking the sum of the weighted state contributions, we obtain R(D).
The splitting property
In Example 3, we noticed that R(D) decomposes as φ(D) + ψ(D), where φ(D) ∈ $\mathbb{Z}[A^4, A^{-4}]$ and ψ(D) ∈ $\mathbb{Z}[A^4, A^{-4}] \cdot A^2$. Inspired by the work in [2], we now show that the polynomial R(·) always has this property. The main result is as follows.

Theorem 3. Let L be a virtual singular link with k components. Then R(L) decomposes non-trivially into two components with respect to the powers of A modulo four.

The proof of Theorem 3 is similar in spirit to the proof of Theorem 3 in [2]. Before proving our theorem, we need a few lemmas. For that, consider a weighted state contribution R(S) to the polynomial R(D), where S is any enhanced Kauffman-Jones state associated with a diagram D, and note that the powers of A in all of the monomials of R(S) are congruent to each other modulo 4. Denote by $\max_A(R(S))$ the maximum power of A in R(S).

Proof. We first note that for the oriented state $S_0$ we have an explicit formula in terms of w(D) and s(D), where w(D) is the writhe of D (that is, the sum of the signs of the classical crossings in D) and s(D) is the number of singular crossings in D.
The state $S_0$ has no bivalent vertices, and we consider the weight map that assigns 1 to all loops in $S_0$. Therefore, $i(S_0) = 1$ and $R(S_0) \in \mathbb{Z}[A^2, A^{-2}]$.
By hypothesis, the original diagram D is connected. Consider any two linked components of D and let p be a classical crossing formed by the two components. If we resolve the crossing p in the oriented fashion, we obtain a virtual singular link diagram that is still connected but has one less component. Therefore, there clearly exist k − 1 classical crossings in D such that, by applying the oriented resolution to all of them, the resulting diagram $D_1$ represents a virtual singular knot (that is, a diagram with a single component). Choose a base-point on $D_1$ and consider the walk along $D_1$ that starts at the base-point and proceeds according to the diagram's orientation. When arriving at a singular crossing, the walk continues straight ahead. The walk ends when returning to the base-point for the first time. Label the classical and singular crossings in $D_1$ and list them in the order they are reached via the walk. If there are two classical and/or singular crossings, $p_a$ and $p_b$, which alternate in the list produced by the walk, as $\cdots p_a \cdots p_b \cdots p_a \cdots p_b \cdots$, resolve these two crossings using the oriented resolution. This process results in a (connected) virtual singular knot diagram, which we denote by $D_2$, as illustrated below.
Note that in the illustrations of $D_1$ and $D_2$ we use flat crossings for $p_a$ and $p_b$ to represent either type of classical crossing or a singular crossing. Now choose a base-point on $D_2$, consider the walk starting at the base-point, and apply the same procedure as for $D_1$. This produces a (connected) virtual singular knot diagram, $D_3$.
Continuing with this process, we obtain a finite sequence of virtual singular knot diagrams ending in a diagram D′, in which any two classical and/or singular crossings $p_a$ and $p_b$ appear in the list produced by a walk along D′ as $\cdots p_a \cdots p_b \cdots p_b \cdots p_a \cdots$. Let m be the number of classical and singular crossings in D′. Resolving all of these m crossings using the oriented resolution produces a purely virtual magnetic graph with m + 1 components and no bivalent vertices.
It follows that $\max_A(R(S_0)) \equiv 2k \pmod{4}$. Since all powers of A in $R(S_0)$ are congruent to $\max_A(R(S_0))$ modulo 4, the desired statement follows.

Proof. Let p be a (classical or singular) crossing of D where, say, a disoriented resolution occurred to form the state S. Then an oriented resolution was applied at p to obtain the state S′, and all of the other crossings of D received the same type of resolutions to arrive at S and S′, respectively.
(1) Suppose that ||S′|| ≠ ||S||. Since S and S′ differ only in the neighborhood of p, S′ has either one less or one more component than S, as demonstrated below (we use dashed lines to represent the regions where S and S′ coincide).
Consider first the case where ||S′|| = ||S|| − 1. Assign weight maps to S and S′ such that the edges shown have the weights depicted in the diagrams above. Then the weight maps assigned to S and S′ coincide on the common regions, and any virtual crossing has the same parity with respect to S and S′. It follows that i(S) = i(S′).
Suppose that p is a singular crossing (the proof is similar when p is a classical crossing, and thus is omitted to avoid repetition). We then have the corresponding equalities, and thus $\max_A(R(S')) \equiv \max_A(R(S)) \pmod{4}$. Now consider the case where ||S′|| = ||S|| + 1, and assign weight maps to S and S′ so that the edges shown in the diagrams receive the same weights as in the previous case. Then a given virtual crossing has the same parity with respect to S and S′, respectively, and hence i(S) = i(S′). Moreover, suppose this time that p is a positive classical crossing. Then again $\max_A(R(S')) \equiv \max_A(R(S)) \pmod{4}$. If p is a negative classical crossing or a singular crossing, the proof follows similarly.
(2) Suppose now that ||S′|| = ||S||, and assign weight maps to S and S′ as shown below.
Note that the arcs $C_1$ and $C_2$ (as depicted in the above diagrams) are identical in both states S and S′, and that these arcs must intersect in an odd number of virtual crossings. In addition, the weight maps for S and S′ as defined above exist, since both arcs $C_1$ and $C_2$ must contain an even number of bivalent vertices.
Then an edge belonging to the arc $C_1$ has opposite weights with respect to S and S′, respectively, while an edge belonging to the arc $C_2$ has the same weight with respect to S and S′. It follows that if v is a virtual crossing where the arcs $C_1$ and $C_2$ intersect, then the parity of v with respect to S is opposite to its parity with respect to S′. Since the arcs $C_1$ and $C_2$ intersect in an odd number of virtual crossings, and since the parity of a virtual crossing where $C_1$ (or $C_2$) intersects itself is the same with respect to both states S and S′, it follows that i(S) = −i(S′).
To show the second part of the statement, suppose that p is a negative classical crossing. Then $\max_A(R(S')) \equiv \max_A(R(S)) + 2 \pmod{4}$. We note also that $c(D) = c(D_1) + c(D_2)$. Then the statement follows easily from the defining state-sum formula for the polynomial R(D).
Proof of Theorem 3. Suppose that L is a virtual singular link with k components and let $D = D_1 \cup D_2 \cup \cdots \cup D_n$ be a diagram of L, where each $D_i$, for 1 ≤ i ≤ n, is a connected diagram with $k_i$ components, and where $k_1 + \cdots + k_n = k$. We provide a proof by induction on n.
Therefore, the statement holds for any n ≥ 1. Proof. The statement follows from Theorem 3 and the fact that $R(L)|_{h=1} = \langle L \rangle$.
"year": 2016,
"sha1": "583530a3cedfa7996dc5651cd5ac356097098931",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "583530a3cedfa7996dc5651cd5ac356097098931",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
267767828 | pes2o/s2orc | v3-fos-license | Design and baseline characteristics of SALT‐HF trial: hypertonic saline therapy in ambulatory heart failure
Abstract Aims Hypertonic saline solution (HSS) plus intravenous (IV) loop diuretic appears to enhance the diuretic response in patients hospitalized for heart failure (HF). The efficacy and safety of this therapy in the ambulatory setting have not been evaluated. We aimed to describe the design and baseline characteristics of the SALT‐HF trial participants. Methods and results ‘Efficacy of Saline Hypertonic Therapy in Ambulatory Patients with HF’ (SALT‐HF) trial was a multicenter, double‐blinded, and randomized study involving ambulatory patients who experienced worsening heart failure (WHF) without criteria for hospitalization. Enrolled patients had to present at least two signs of volume overload, use ≥ 80 mg of oral furosemide daily, and have elevated natriuretic peptides. Patients were randomized 1:1 to treatment with a 1‐h infusion of IV furosemide plus HSS (2.6–3.4% NaCl depending on plasmatic sodium levels) versus a 1‐h infusion of IV furosemide at the same dose (125–250 mg, depending on basal loop diuretic dose). Clinical, laboratory, and imaging parameters were collected at baseline and after 7 days, and a telephone visit was planned after 30 days. The primary endpoint was 3‐h diuresis after treatment started. Secondary endpoints included (a) 7‐day changes in congestion data, (b) 7‐day changes in kidney function and electrolytes, (c) 30‐day clinical events (need of IV diuretic, HF hospitalization, cardiovascular mortality, all‐cause mortality or HF‐hospitalization). Results A total of 167 participants [median age, 81 years; interquartile range (IQR), 73–87, 30.5% females] were randomized across 13 sites between December 2020 and March 2023. Half of the participants (n = 82) had an ejection fraction >50%. Most patients showed a high burden of comorbidities, with a median Charlson index of 3 (IQR: 2–4). Common co‐morbidities included diabetes mellitus (41%, n = 69), atrial fibrillation (80%, n = 134), and chronic kidney disease (64%, n = 107). Patients exhibited a poor functional NYHA class (69% presenting NYHA III) and several signs of congestion. The mean composite congestion score was 4.3 (standard deviation: 1.7). Ninety per cent of the patients (n = 151) presented oedema and jugular engorgement, and 71% (n = 118) showed lung B lines assessed by ultrasound. Median inferior vena cava diameter was 23 mm, (IQR: 21–25), and plasmatic levels of N‐terminal‐pro‐B‐type natriuretic peptide (NTproBNP) and antigen carbohydrate 125 (CA125) were increased (median NT‐proBNP 4969 pg/mL, IQR: 2508–9328; median CA125 46 U/L, IQR: 20–114). Conclusions SALT‐HF trial randomized 167 ambulatory patients with WHF and will determine whether an infusion of hypertonic saline therapy plus furosemide increases diuresis and improves decongestion compared to equivalent furosemide administration alone.
Introduction
The traditional model of managing worsening heart failure (WHF) in both inpatient and outpatient settings has several inherent challenges. Although essential for patients exhibiting severe symptoms such as respiratory failure and unstable arrhythmia, hospitalization is not always needed in patients in whom volume overload is the main driver of worsening symptoms.1 In fact, the reliance on hospitalization is sometimes primarily due to the convenience of administering IV diuretic therapy and conducting close clinical and laboratory monitoring, highlighting a gap in outpatient care options for WHF patients.2 This underscores the need to explore alternative, potentially more efficient, outpatient treatment modalities for managing WHF effectively.
Recognizing that diuresis is the primary intervention in patients hospitalized for HF decompensations and that some patients improve quickly with therapy (i.e. within hours), HF clinics have emerged as outpatient models aiming to provide comprehensive care, where patients may obtain same-day or walk-in visits for worsening symptoms rather than a potential visit to the emergency department.3,4 However, although some diuretic protocols have been proposed,1,2,5 no randomized trials have evaluated different diuretic strategies in the outpatient setting.
Observational and randomized trials have evaluated IV furosemide and hypertonic saline solution (HSS) in hospitalized patients with acute HF.8-10 However, the efficacy and safety of this approach in the ambulatory setting have not been evaluated. This study aims to bridge this gap by assessing the efficacy, safety, and feasibility of this combined therapy (HSS plus IV furosemide) in ambulatory patients with WHF and systemic fluid overload.
Study design
The SALT-HF trial was a multicenter, double-blinded, and randomized trial involving ambulatory patients who presented an episode of WHF that required IV diuretics and without criteria for hospital admission at the treating physician's discretion.Patients were randomized to treatment with a 60-min infusion of IV furosemide (125-250 mg) plus HSS (intervention group) versus an infusion of IV furosemide (125-250 mg) without HSS (control group), as is shown in Figure 1.
The research team conducted training sessions on the design and implementation of the protocol before and during the start of the study.
The local institutional ethics committees approved the trial, and it was conducted in accordance with the Declaration of Helsinki and the International Conference of Harmonization Guidelines for Good Clinical Practice.All participants provided written informed consent.The trial was registered at ClinicalTrials.gov(NCT04533997).
Eligibility
The study's inclusion and exclusion criteria are listed in Table 1. Patients were eligible if they presented with WHF and at least two signs of volume overload (peripheral oedema, jugular enlargement, ascites, or pleural effusion) and had an N-terminal pro-B-type natriuretic peptide (NT-proBNP) > 1000 pg/mL or a B-type natriuretic peptide (BNP) > 250 ng/mL. In addition, patients had to have been treated with oral loop diuretics for ≥1 month before inclusion at a dose of ≥80 mg of furosemide or ≥40 mg of torsemide per day.12,13 Key exclusion criteria included any of the following: cardiogenic shock, renal replacement therapy, severe metabolic derangements, or other high-risk criteria that would require hospitalization.
The study did not include individuals with acute pulmonary oedema or basal oxygen saturation below 90%.
Objectives and endpoints
The primary objective of the SALT-HF trial was to test whether the administration of HSS plus IV furosemide can improve decongestion over IV furosemide in WHF outpatients with predominant systemic volume overload.The hypothesis was that the combination therapy increases the diuresis volume 3 h after the start of treatment.
Primary endpoint
Diuresis after 3 h of treatment start was selected as the primary endpoint.
Secondary endpoints
Secondary endpoints included between-treatment changes in (a) urinary sodium and body weight 3 h after treatment, (b) 7-day changes in congestion parameters that included the composite congestion score, the body weight, the diameter of inferior vena cava, the presence of lung B-lines by ultrasound, haemoconcentration parameters (haematocrit, albumin and proteins), and circulating biomarkers such as NT-proBNP, antigen carbohydrate 125 (CA125), and urinary sodium, (c) 7-day changes in NYHA and visual analogue scale (Table 2).
The decision to evaluate secondary clinical endpoints at 7 days was made to provide a pragmatic approach in line with routine clinical practice.
Safety endpoints
The safety endpoints included (a) 7-day worsening of kidney function, defined as an increase in serum creatinine ≥0.3 mg/dL, (b) electrolyte abnormalities, defined as hypokalaemia (K+ < 3.5 mEq/L) or hyperkalaemia (K+ > 5.5 mEq/L), (c) WHF requiring ambulatory IV diuretics, an emergency department visit, or HF rehospitalization by day 30, (d) CV mortality at day 30, and (e) all-cause mortality and HF hospitalization at day 30 (Table 2).
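The laboratory safety thresholds above translate directly into code. The following Python sketch (the function name and inputs are our own, not part of the trial's systems) flags the 7-day renal and potassium safety endpoints from paired laboratory values:

```python
def safety_flags(creat_d0: float, creat_d7: float, k_d7: float) -> dict:
    """Flag the protocol's 7-day laboratory safety endpoints: worsening kidney
    function (serum creatinine rise >= 0.3 mg/dL) and dyskalaemia (K+ < 3.5 or
    > 5.5 mEq/L). Units: creatinine in mg/dL, potassium in mEq/L."""
    return {
        "worsening_kidney_function": (creat_d7 - creat_d0) >= 0.3,
        "hypokalaemia": k_d7 < 3.5,
        "hyperkalaemia": k_d7 > 5.5,
    }

# Example: baseline creatinine 1.4 -> 1.8 mg/dL, K+ 3.2 mEq/L at day 7
print(safety_flags(1.4, 1.8, 3.2))
# {'worsening_kidney_function': True, 'hypokalaemia': True, 'hyperkalaemia': False}
```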
Table 1 Eligibility criteria of SALT-HF trial Inclusion criteria
• A clinical diagnosis of acute heart failure and at least two signs of volume overload:
○ Pitting oedema
○ Jugular enlargement
○ Ascites
○ Pleural effusion
• Maintenance of daily oral loop diuretic use of ≥80 mg furosemide or ≥40 mg torsemide for ≥1 month.
• BNP > 250 ng/mL or NT-proBNP > 1000 pg/mL at the time of screening.
• Stable treatment in the previous 2 weeks (except diuretic).
• Need for intravenous diuretic therapy to relieve congestion, according to the responsible physician.
Exclusion criteria
• Cardiogenic shock or systolic blood pressure <90 mmHg or >180 mmHg.
• Hospital admission criteria in the opinion of the treating physician.
• Acute Pulmonary oedema or basal oxygen saturation less than 90%.
Study intervention and procedures
The study flowchart is depicted in Figure 1, and a summary of the procedures in each visit is presented in Table 3.
Visit 1: Screening and randomization
Patients meeting the inclusion criteria, after giving informed consent, were randomized 1:1 to treatment with IV furosemide plus HSS (intervention group) versus IV furosemide alone (control group) using a stratified block randomization method based on an automated online system, blinded to the physicians who evaluated the patient. Randomization was performed by a trained HF nurse in a separate room. The patient and the treating physician were blinded to the assigned treatment.
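A minimal sketch of stratified permuted-block 1:1 allocation is shown below. SALT-HF used an automated online system; the block size, stratum labels, and function names here are assumptions made purely for illustration.

```python
import random

def make_allocation_list(n_blocks: int, block_size: int = 4,
                         arms=("HSS+furosemide", "furosemide")) -> list[str]:
    """Permuted-block randomization: each block holds an equal number of
    assignments per arm, shuffled, so allocation stays balanced 1:1."""
    seq = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)
        seq.extend(block)
    return seq

# One independent allocation list per stratum (e.g., per study centre)
allocations = {centre: make_allocation_list(n_blocks=25)
               for centre in ["site01", "site02"]}
print(allocations["site01"][:8])
```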
Before the start of treatment, data were collected on patient demographics, medical history, and medical and device therapy at baseline.Blood and urine tests were collected at baseline and analysed in the local laboratory at each centre.
A complete clinical evaluation was performed, including vital signs, ECG, NYHA functional class, a visual analogue scale from 0 (worst state of health) to 100 (best state of health),14 and a multiparametric congestion assessment.15 The multiparametric approach included the following.
Primary endpoint
• Total diuresis after 3 h of the start of treatment
Secondary endpoints
• Change in body weight after 3 h.
• Change in body weight after 7 days.
• Change in congestion score after 7 days.
• Change in diameter of inferior vena cava after 7 days.
• Change in the presence of lung B-lines by echo after 7 days.
• Change in NYHA and visual analogue scale after 7 days.
Safety endpoints
• Worsening kidney function, defined as an increase in serum creatinine ≥0.3 mg/dL on day 7.
• WHF that requires IV ambulatory diuretic, emergency department visit, or HF rehospitalization at day 30.
• CV mortality on day 30.
• All-cause mortality and HF hospitalization at day 30.
Treatment preparation and administration
After randomization, the HF nurse prepared the treatment in a separate room. The infusion consisted of a fixed furosemide dose, which depended on the patient's previous home dose, administered in 100 mL of 0.9% NaCl physiological solution over 1 h (Table 4).
Patients with a home furosemide dose (or equivalent) of ≤160 mg received 125 mg of IV furosemide; patients with a home furosemide dose (or equivalent) of >160 mg received 250 mg (Table 4). Torsemide was converted to the equivalent furosemide dose: 2 mg of oral furosemide was considered equivalent to 1 mg of oral torsemide.
In the absence of clear guidance from previous studies, or robust evidence supporting the use of double the home oral dose of loop diuretic, the SALT-HF diuretic dose strategy was based on local protocols that had previously evaluated the safety of this diuretic approach.19 In the group of patients randomized to HSS therapy, sodium chloride 200 mg/mL (10-15 mL) was added, depending on the patient's plasma sodium (2.6% HSS for patients with plasma sodium from 135 to 145 mEq/L; 3.4% HSS for patients with plasma sodium from 125 to 135 mEq/L).
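The dosing rules in Table 4 and the sodium-dependent HSS concentrations can be summarized in a short sketch. The Python function below is illustrative only; in particular, the handling of sodium values outside 125-145 mEq/L is our assumption, since the protocol text does not address them.

```python
def salt_hf_infusion(home_furosemide_mg_eq: float, plasma_na_meq_l: float,
                     hss_arm: bool):
    """Return (IV furosemide dose in mg, HSS NaCl concentration in %) per the
    SALT-HF dosing rules described above. Torsemide must be converted first
    (1 mg oral torsemide ~ 2 mg oral furosemide)."""
    iv_dose = 125 if home_furosemide_mg_eq <= 160 else 250
    if not hss_arm:
        return iv_dose, None  # control arm: furosemide in 0.9% NaCl only
    if 135 <= plasma_na_meq_l <= 145:
        hss = 2.6
    elif 125 <= plasma_na_meq_l < 135:
        hss = 3.4
    else:  # assumption: out-of-range sodium is not covered by the protocol
        raise ValueError("plasma sodium outside the described range")
    return iv_dose, hss

# Patient on 80 mg oral torsemide (160 mg furosemide-equivalent), Na+ 132 mEq/L:
print(salt_hf_infusion(160, 132, hss_arm=True))  # -> (125, 3.4)
```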
Urine collection and sampling
Patients were asked to empty their bladder before the administration of the infusion. From then on, the treatment, as well as the urine collection, started. The infusion was administered over 1 h, and the diuresis was collected for 3 h. Special care was taken to ensure that all urine was collected. The patient was advised to avoid food or liquid intake during this period.
The patients received the treatment and were monitored in a dedicated on-site IV infusion space. All the participating centres (n = 13, Data S2) have well-structured HF programmes led by specialized HF physicians and nurses.
Visit 2: 3-h post-treatment
Three hours after the start of the infusion, diuresis volume, blood pressure, and body weight were evaluated, and a new urine sample was collected.
To prevent heterogeneity in the treatment approach over the following days, we proposed a diuretic protocol adjustment at the time of discharge. Because of the potential risk of hypokalaemia during diuretic treatment, the protocol also included recommendations on potassium supplementation (Data S3).
Briefly, an increase in the diuretic treatment or combination therapy was recommended if no cause for decompensation was present. No other HF therapy modifications were allowed during the first 7 days.
Visit 3: 7-day post-treatment
Seven days after randomization, a new clinical and multiparametric evaluation including all procedures of visit 1 was performed (Table 3). The 7-day evaluation was chosen to offer a pragmatic approach similar to real-life practice. Concomitant medication and adverse events, including any hospitalizations or deaths between treatment and day 7, were recorded. Further therapy and any medication changes at this stage were left to the treating physician's discretion.
Visit 4: 30-day post-treatment
Randomized patients were contacted by telephone 30 days after completion of the study treatment period to assess vital status, NYHA class, the occurrence of adverse events, and current prescriptions for HF medications.
Sample size and power calculation
The SALT-HF trial was powered for its primary endpoint: diuresis after 3 h. Observational studies of diuretic treatment in outpatients reported a 3-h diuresis of 1100 mL; a similar 3-h diuresis was assumed for the standard-of-care group (IV furosemide) in the sample size calculation. An increase in diuresis of 20% was deemed both achievable and clinically relevant. Assuming a two-sided alpha of 0.05 and a statistical power of 80%, a sample size of 168 patients was calculated.
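A back-of-the-envelope check of this calculation, using the normal approximation for a two-sample comparison of means, is sketched below. The protocol's assumed standard deviation is not reported in this text; an SD of roughly 508 mL is back-solved here so that the formula reproduces the stated total of 168 patients (84 per arm).

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation to the t-test)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Control 3-h diuresis ~1100 mL; a 20% increase gives delta = 220 mL.
n = n_per_group(delta=220, sd=508)  # sd is an assumption, see above
print(n, 2 * n)  # -> 84 168
```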
Statistical analysis
Continuous variables will be expressed as means (±1 standard deviation [SD]) or medians (interquartile range [IQR]), and discrete variables as percentages. At baseline, the means, medians, and frequencies of the treatment groups will be compared using the t-test, Wilcoxon test, and chi-square test, respectively. The primary endpoint (3-h diuresis) will be compared between treatments by linear regression analysis. Secondary endpoints (changes in congestion, changes in kidney function, and changes in electrolytes) will be evaluated by linear regression analysis including the baseline value of the endpoint as a covariate (ANCOVA). For 30-day adverse clinical events, a Cox regression analysis will be performed. Because of hierarchical levels of nesting (treatment sequence within patient ID, and the latter within study centres), the models will include patient ID and study centre as random intercepts. All statistical comparisons will be performed under a modified intention-to-treat principle.
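The analyses described above could be run along the following lines; the column names and synthetic data are placeholders, and the single random intercept shown is a simplification of the nested structure (patient ID within centre) described in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 168
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),            # 1 = HSS + IV furosemide
    "centre": rng.integers(1, 14, n).astype(str),  # 13 sites
    "congestion_baseline": rng.normal(10, 2, n),
    "days_to_event": rng.integers(1, 31, n),
    "event": rng.integers(0, 2, n),
})
df["diuresis_3h"] = 1100 + 220 * df["treatment"] + rng.normal(0, 500, n)
df["congestion_day7"] = (df["congestion_baseline"]
                         - 2 * df["treatment"] + rng.normal(0, 1, n))

# Primary endpoint: linear model with a random intercept for study centre
# (patient ID would enter analogously in the full nested specification).
mixed = smf.mixedlm("diuresis_3h ~ treatment", data=df,
                    groups=df["centre"]).fit()

# Secondary endpoints, ANCOVA-style: day-7 value adjusted for baseline.
ancova = smf.ols("congestion_day7 ~ treatment + congestion_baseline",
                 data=df).fit()

# 30-day adverse events: Cox proportional hazards model.
cox = CoxPHFitter().fit(df[["days_to_event", "event", "treatment"]],
                        duration_col="days_to_event", event_col="event")
print(mixed.params["treatment"], ancova.params["treatment"])
```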
Current status
The SALT-HF trial is complete and currently in the analysis phase. One hundred and sixty-eight participants were randomized across 13 sites between December 4, 2020, and March 31, 2023. One patient was excluded due to screening failure (Figure 2). Baseline characteristics of the remaining 167 patients did not differ significantly between the two groups across most parameters (Table 5). The SALT-HF trial enrolled an elderly population [median age: 81 years (IQR: 73-87), 30.5% female] with a high burden of co-morbidities such as diabetes, hypertension, atrial fibrillation, chronic obstructive pulmonary disease, and chronic kidney disease. Approximately half of the participants had an ejection fraction >50%. Most patients exhibited a poor NYHA functional class and several signs of congestion. Natriuretic peptides and CA125 were elevated at baseline. The chronic diuretic dose was high (median furosemide dose: 120 mg), and combination therapy was common: one-third of the patients were on SGLT2i and/or thiazides, and half received mineralocorticoid receptor antagonists.
Discussion
The SALT-HF trial will evaluate whether HSS plus IV furosemide is safe and more effective in improving diuretic response than IV furosemide alone in ambulatory patients with WHF, a subgroup that is frequent in clinical practice but underrepresented in clinical trials. The ultimate goal is to provide novel insights into diuretic strategies that may help relieve congestion and prevent HF hospitalizations.
Outpatient management of worsening heart failure
A substantial rise in HF burden in the Western population is projected for the coming decades. 20 We observed that patients included in the SALT-HF trial were significantly older and had more co-morbidities than previously reported series of ambulatory HF patients. 10 Therefore, a shift from the classic hospital-centric model to ambulatory WHF management strategies is of growing interest to both patients and healthcare providers.
Multidisciplinary HF management programmes are recommended (class IA) in HF guidelines to reduce hospitalizations and mortality. 11,12 Even though guidelines describe the characteristics and components of HF programmes, they do not provide any recommendations about diuretic approaches for ambulatory worsening HF, and the management of these patients remains empirical. To address this gap, the Heart Failure Working Group of the French Society of Cardiology has recently published a document about the practical outpatient management of WHF. 2 The document defines 'outpatient HF' as the worsening of HF signs and symptoms in a patient with chronic HF that requires escalation of therapy without an urgent need for hospitalization. The stratification of patients who will not require hospital admission in the first instance is one of the key elements for a successful ambulatory approach. Determinant clinical scenarios, HF profiles, co-morbidities, and social criteria should be considered to determine the feasibility and safety of outpatient management. SALT-HF inclusion and exclusion criteria define the clinical profile most likely to fit an ambulatory IV diuretic programme.
Diuretic approach in the outpatient setting
Unfortunately, limited data exist regarding IV diuretic strategies and outcomes in ambulatory WHF. The document on the practical outpatient management of WHF proposes a standardized diuretic protocol based on data from the largest study that has evaluated an outpatient IV diuretic approach. 2 Briefly, Buckley et al. assessed the diuretic response and outcomes in 283 patients with WHF. 1 The diuretic protocol consisted of a 3-h IV diuretic infusion based on the furosemide equivalent of the patient's total daily home oral diuretic dose. This strategy was associated with significant urine output and weight loss. This and other observational studies suggest that an ambulatory IV diuretic approach may provide an alternative to hospitalization for the management of selected patients with HF. 3,4 Diuresis after 3 h of treatment was selected as the primary endpoint of our study because (i) urinary output is commonly used as a metric of loop diuretic efficacy, 24 (ii) the direct effect of loop diuretics is to increase diuresis, 25 (iii) urinary output is an objective and reproducible endpoint that is not open to bias, and (iv) 3-h diuresis has been evaluated in observational studies assessing ambulatory diuretic treatment. 1,5
Hypertonic saline therapy in worsening heart failure
Observational studies, randomized trials, and meta-analyses have shown the potential benefits of HSS plus IV loop diuretics in improving diuretic response, kidney function, and outcomes in patients hospitalized with WHF. 10,26,27 However, differences in the populations included in the studies and heterogeneity in the infusion preparation or the diuretic dose (Data S4) have limited the adoption of this therapy in clinical practice. In addition, many physicians remain hesitant to administer sodium to patients who present with fluid overload. We specifically excluded patients with pulmonary oedema or low oxygen saturation.
Therefore, in this trial, we will assess the efficacy and safety of this therapeutic approach in patients with predominant systemic tissue volume overload, including patients with lower limb oedema, ascites, and/or pleural effusion. We hypothesize that the administration of HSS may improve the diuretic effectiveness of furosemide in patients with predominant extravascular and systemic volume overload. The rationale for this approach is the osmotic capacity of HSS, which mobilizes fluid from the interstitial space into the intravascular compartment, increasing intravascular volume and renal blood flow and facilitating the delivery of diuretic agents to the nephron. 28 Although some research suggests that the blunted diuretic response observed in chronic furosemide users is primarily due to decreased tubular responsiveness rather than insufficient furosemide tubular delivery, 29 we speculate that volume expansion combined with the action of IV furosemide will lead to a more efficient diuretic response in a cohort of patients with evidence of diuretic resistance. Notably, the administration of chloride together with sodium may also contribute to the potential benefits of this therapy in HF patients. Several observational studies have shown an association of low chloride levels with poor diuretic response, increased neurohormonal activation, and a worse prognosis. 30 The cardiorenal effects of sodium-free chloride supplementation are currently being tested in patients with ADHF (Mechanism and Effects of Manipulating Chloride Homeostasis in Stable Heart Failure; NCT03440970).
The hypothesis tested in the SALT-HF trial is important in several respects. First, there is a growing need for strategies that prevent HF hospitalizations. Second, to our knowledge, no randomized trials evaluating diuretic strategies in the outpatient setting have been performed, and treatment remains empirical. Finally, although HSS therapy appears to be a promising strategy to overcome diuretic resistance, a growing body of evidence supporting its beneficial effects may promote the implementation of this approach in outpatients with WHF.
Conclusions
The SALT-HF trial will investigate whether combined therapy with IV furosemide and HSS can increase diuresis after 3 h compared with IV furosemide alone in ambulatory patients with WHF and systemic fluid overload.
Figure 2: Flow diagram of patient inclusion.
Table 2: Study endpoints of the SALT-HF trial.
Table 4: Infusion preparation and placebo and treatment dose in the SALT-HF trial. HSS, hypertonic saline solution; IV, intravenous; po, orally.
a Considered positive when ≥3 B-lines were bilaterally observed in ≥2 lung fields.
"year": 2024,
"sha1": "4d513fc50f1be7ca7654b00eaf4d5b20ad8643d6",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ehf2.14720",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6099a2c71311317b72b89f86815aeed6b6438d91",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
TRAVLR: Now You See It, Now You Don't! Evaluating Cross-Modal Transfer of Visio-Linguistic Reasoning
Numerous visio-linguistic (V+L) representation learning methods have been developed, yet existing datasets do not evaluate the extent to which they represent visual and linguistic concepts in a unified space. Inspired by the cross-lingual transfer and psycholinguistics literature, we propose a novel evaluation setting for V+L models: zero-shot cross-modal transfer. Existing V+L benchmarks also often report global accuracy scores on the entire dataset, rendering it difficult to pinpoint the specific reasoning tasks that models fail and succeed at. To address this issue and enable the evaluation of cross-modal transfer, we present TRAVLR, a synthetic dataset comprising four V+L reasoning tasks. Each example encodes the scene bimodally such that either modality can be dropped during training/testing with no loss of relevant information. TRAVLR's training and testing distributions are also constrained along task-relevant dimensions, enabling the evaluation of out-of-distribution generalisation. We evaluate four state-of-the-art V+L models and find that although they perform well on the test set from the same modality, all models fail to transfer cross-modally and have limited success accommodating the addition or deletion of one modality. In alignment with prior work, we also find these models to require large amounts of data to learn simple spatial relationships. We release TRAVLR as an open challenge for the research community. 1
Introduction
Research in psycholinguistics has found that human processing of spatial words activates brain regions associated with the visual system (Tang et al., 2021), suggesting the latter's involvement in processing linguistic input. It is therefore reasonable to expect multimodal neural models to resemble humans in this respect. Following its recent success in the text domain (Devlin et al., 2019), the pretraining-fine-tuning paradigm has been applied to the vision and text modalities to create unified visio-linguistic (V+L) representations. Just as pretrained multilingual models have been shown capable of zero-shot cross-lingual transfer on various NLP tasks (Conneau et al., 2020), we may expect true V+L models to be capable of generalising to a modality not seen during fine-tuning.
Figure 1(a): A complete example for the spatiality task.
1 Code and dataset to be released shortly.
However, current approaches to benchmarking V+L models often involve reporting global accuracy scores on the entire dataset, rendering the specific sources of success and failure difficult to diagnose (Ribeiro et al., 2020; Goel et al., 2021). For instance, Visual Question Answering (VQA; Goyal et al., 2017) tasks may allow models to exploit dataset bias (Dancette et al., 2021), or may reduce to object recognition problems which do not evaluate the models' ability to perform more complex tasks beyond aligning words or phrases in the text to a portion of the image (Hudson and Manning, 2019; Acharya et al., 2019); such alignment does not require knowledge of syntactic structure or the ability to reason over several objects in a scene (Bernardi and Pezzelle, 2021). This concern is pertinent given that pretraining tasks often primarily involve masking either the textual or image modality.
Datasets such as NLVR2 (Suhr et al., 2019) address this limitation, but do not allow for fine-grained evaluation along specific dimensions (Tan et al., 2021). CLEVR (Johnson et al., 2017) and SHAPEWORLD (Kuhnle and Copestake, 2017) enable targeted evaluations of a V+L model's reasoning abilities but only encode the scene unimodally, as images. Additionally, their test examples may still fall within the training distribution with respect to task-relevant dimensions, making it difficult to draw conclusions about generalisation ability.
We thus propose TRAVLR, a synthetic dataset comprising four V+L reasoning tasks: spatiality, cardinality, quantifiers, and numerical comparison. Unlike SHAPEWORLD, we control the train/test split such that examples in the out-of-distribution (OOD) test set are OOD with respect to task-relevant dimensions. We focus on tasks involving spatial and numerical reasoning, which require reasoning over multiple objects and have been shown to be challenging for V+L models (Johnson et al., 2017; Parcalabescu et al., 2020).
Inspired by the word/picture sentence verification task from psycholinguistics (Goolkasian, 1996), we further propose the cross-modal transfer setting, where the model is trained on input from one modality and tested on input from another. By representing the scene bimodally as both an image and a caption (Figure 1), TRAVLR is the first V+L dataset to support such an evaluation setting, to our knowledge. Being able to transfer cross-modally in a zero-/few-shot manner will improve data efficiency in applications where diverse image data is more difficult to obtain than written descriptions.
We use TRAVLR to evaluate the minimum amount of data and training steps required for various V+L models to learn simple reasoning tasks, in addition to comparing their final performance. We show that existing models often require unreasonably large amounts of data and training steps to learn simple tasks. We argue that our dataset serves as a basic sanity check for the abstract reasoning capabilities of models, and is complementary to datasets such as GQA (Hudson and Manning, 2019) that evaluate real-world object recognition and compositional reasoning abilities. Finally, we find current pretrained V+L models to be generally unsuccessful at learning to perform a task from one modality alone, and thus pose this as an open challenge for future V+L models.
Related Work

V+L tasks and datasets. The Visual Question Answering (VQA) task involves answering a question about an image; it is a complex task as it requires the ability to process input in both visual and textual modalities (Antol et al., 2015). A known issue with VQA datasets is the presence of real-world language priors and statistical biases in the training and testing distributions (Kervadec et al., 2021; Agrawal et al., 2018). This was a problem with the original VQA dataset that Goyal et al. (2017) address in VQA v2.0 by balancing each query with pairs of images. However, Dancette et al. (2021) show that VQA v2.0 still contains both unimodal and multimodal biases that models can exploit. Furthermore, many questions in VQA use non-compositional language that does not require abilities beyond object recognition. Bernardi and Pezzelle (2021) argue that more complex reasoning tasks should involve reasoning about relationships between several objects in the image.
NLVR attempts to address the lack of compositionality in VQA using synthetically generated images of abstract 2D shapes accompanied by human-written English sentences to be judged true or false (Suhr et al., 2017). NLVR2 (Suhr et al., 2019) and SNLI-VE (Xie et al., 2019) also involve truth-value/entailment judgement tasks, and use photographs instead of synthetic images. Both lack detailed annotations of the specific semantic phenomena evaluated by each example. GQA improves over VQA by focusing on compositional questions that require reasoning over multiple objects and contains detailed annotations (Hudson and Manning, 2019), but still suffers from statistical imbalances and the lack of an out-of-distribution test set (Kervadec et al., 2021).
Other synthetic datasets focusing on reasoning include CLEVR (Johnson et al., 2017) and SHAPEWORLD (Kuhnle and Copestake, 2017). CLEVR is a fully synthetic 3D dataset and contains the annotations necessary to analyse model performance on specific tasks along various dimensions. SHAPEWORLD is a dataset targeting linguistic phenomena such as spatial relationships and quantifiers. gSCAN (Ruis et al., 2020) focuses on generalisation of commands within a 2D gridworld with objects, including various tasks such as novel composition of object properties, novel movement directions, and novel adverbs.
V+L models. Pretrained V+L models differ in their architecture and pretraining methods. VL-BERT (Su et al., 2019), UNITER, and VisualBERT (Li et al., 2020a) are single-stream models with a single Transformer, while ViLBERT, LXMERT (Tan and Bansal, 2019), and ALBEF (Li et al., 2021) are dual-stream models which encode image and textual inputs separately before fusing them. All models use a combination of masked language modelling and image-text matching objectives for pretraining, with LXMERT additionally pretraining on VQA and ALBEF using a contrastive loss to align the image and language representations. UNITER, VisualBERT, and LXMERT use a frozen Faster R-CNN (Ren et al., 2015) to extract region-based features from the image, while ALBEF directly encodes the image with a Vision Transformer (Dosovitskiy et al., 2020).
Cross-modal transfer. Prior work has found models trained on multimodal data to perform better on unimodal downstream tasks than models trained only on one modality. Zadeh et al. (2020) found models trained on multimodal input to perform better than text-only models on three NLP tasks, while Testoni et al. (2019) showed that models trained on textual, visual, and auditory input were better at a quantification task than models trained only on a single modality. Using a task involving queries about typical colours of objects, Norlund et al. (2021) found that BERT trained on linguistic and visual features outperforms BERT trained on language data filtered for mentions of colour. Frank et al. (2021) investigated the cross-modal alignment of pretrained V+L models with an ablative method based on masked modelling.
Summary. The datasets commonly used to evaluate V+L models such as VQA and NLVR2 lack fine-grained interpretability, due to the lack of annotations for semantic phenomena involved in each example. Additionally, multiple semantic phenomena co-occur within a single training example, making it difficult to control the training distribution and assess the generalisation abilities of models. In contrast, we show that task-specific investigation of the key reasoning capabilities of models can help to compare the data efficiency, performance and limitations of different models.
Existing V+L datasets also only present the scene in the visual modality and cannot be used to evaluate a V+L model's ability to generalise across modalities (cross-modal transfer). By encoding the underlying scene in both visual and textual modalities, we can evaluate cross-modal transfer by training on one and evaluating on the other.
Existing synthetic datasets (e.g., CLEVR and SHAPEWORLD) often fail to split the training and testing distributions along a dimension relevant to the specific task, because they generate captions based on randomly generated images. Our approach exploits the benefits of a synthetic dataset by strictly controlling the training and evaluation distributions to test the generalisation abilities of V+L models and avoid statistical biases from language priors and non-uniform distributions.
TRAVLR: Cross-Modal Transfer of Visio-Linguistic Reasoning
Psycholinguistic studies have demonstrated the effect of input modality on the performance of humans on truth-value judgement tasks. Goolkasian (1996)'s word/picture-sentence verification task found human subjects to exhibit faster reaction times and fewer errors when asked to provide truth-value judgements on images as opposed to words, even when both encode the same underlying concept. We similarly ask whether pretrained visio-linguistic models also exhibit asymmetries in accuracy and in the amount of required fine-tuning data when the input modality is varied. There is also evidence that human infants learn abstract rules better when presented with bimodal cues such as visual shapes and speech sounds, compared to when information is presented in a single modality (Frank et al., 2009; Flom and Bahrick, 2007). We similarly ask whether presenting the context in both visual and textual modalities improves performance for V+L models.
To answer these questions, we construct TRAVLR, a synthetic dataset comprising four visio-linguistic reasoning tasks. These tasks were previously identified as challenging for text-only models (Lin and Su, 2021; Dua et al., 2019; Ravichander et al., 2019). TRAVLR aims to evaluate the extent to which pretrained V+L models already encode, or are able to learn, these four relations between entities present in the input scene. We first describe the general task format before elaborating on the cross-modal transfer problem.
Given a scene with objects, S = {o_1, ..., o_n}, where each object can be represented as a tuple <colour, shape, position>, and a textual query q involving some relation r(o_1, ..., o_i) between two or more objects in S, each task involves learning a function y = f(S, q), where y ∈ {true, false}. This is essentially a binary classification task. For instance, in the spatiality task, the relation r could be left or right, which compares the positions of two objects. In the numerical comparison task, the noun phrases in the query refer to subsets of objects, while the relations (e.g., more) compare the cardinality of two sets of objects. Successfully assigning a truth value to the query thus involves reasoning over several objects (Bernardi and Pezzelle, 2021). However, a model can never have direct access to the underlying scene representation and must operate on visual or textual forms. Depending on the modality under evaluation, S may be presented in the form of an image or a textual description. In prior work such as VQA, S is presented as an image. In TRAVLR, S is represented bimodally as an <image, caption> pair.
Each example consists of an image, an accompanying caption, and a query. Images contain abstract objects arranged in a grid, where each object has two properties: colour and shape. In our experiments, we draw from 5 possible colours (red, blue, green, yellow, orange) and 7 possible shapes (square, circle, triangle, star, hexagon, octagon, pentagon), giving 35 unique objects in total. Each caption fully describes the image, giving the coordinates of each object (e.g., "There is a red circle at A 1, a blue square at B 2..."). A description of the coordinate system, e.g., "Columns, left to right, are ordered A to F. Rows, top to bottom, are ordered 1 to 6.", is prepended to the caption. The caption and query are separated by the [SEP] token when presented to the models. Removing the caption reduces our tasks to VQA-like tasks.
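A minimal sketch of this bimodal encoding is shown below; the colour and shape inventories, grid size, and caption preamble follow the description above, while the sampling details and helper names are assumptions.

```python
import random

COLOURS = ["red", "blue", "green", "yellow", "orange"]
SHAPES = ["square", "circle", "triangle", "star",
          "hexagon", "octagon", "pentagon"]
COLS, ROWS = list("ABCDEF"), list("123456")  # 6x6 grid
PREAMBLE = ("Columns, left to right, are ordered A to F. "
            "Rows, top to bottom, are ordered 1 to 6.")

def sample_scene(n_objects=3):
    """Sample objects as (colour, shape, position) tuples on distinct cells."""
    cells = random.sample([c + " " + r for c in COLS for r in ROWS], n_objects)
    return [(random.choice(COLOURS), random.choice(SHAPES), pos)
            for pos in cells]

def caption(scene):
    """Fully describe the scene, mirroring the quoted caption format."""
    parts = ", ".join(f"a {col} {shp} at {pos}" for col, shp, pos in scene)
    return f"{PREAMBLE} There is {parts}."

scene = sample_scene()
query = "The red circle is right of the blue triangle"
print(caption(scene) + " [SEP] " + query)
```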
Reasoning Tasks
When generating the examples for each task, we constrain the training distribution along a dimension relevant to the specific task. For instance, in generating the training and out-of-distribution (OOD) test sets for the spatial relationship task, we ensure that the positions of the queried objects do not overlap between the training and test sets along the relevant axis (e.g., the horizontal axis for the horizontal relations left/right). This differs from the approach adopted by SHAPEWORLD, which randomly generates images that are subsequently fed to a module responsible for generating query statements and assigning a truth value based on the corresponding scene. Consequently, the distribution of images in SHAPEWORLD cannot be directly constrained for a specific task, which may lead to statistical bias in the distribution of queries. Furthermore, SHAPEWORLD does not enforce task-specific train/test splits. We next explain how we construct the train/test splits.
Figure 2: An example of OOD test set construction. In a left/right relationship reasoning task, the relevant dimension is the column ID; specific ID pairs are held out to form the test distribution.
Spatiality. The spatiality task involves queries of the form "The [object1] is [relationship] the [object2]." (e.g., "The red circle is right of the blue triangle"), where the possible relationships are to the left of, to the right of, above, and below.
For horizontal relationships (left/right), the train and test sets are split based on the pair <column(object1), column(object2)> (Figure 2), while for vertical relationships (above/below), they are split based on the pair <row(object1), row(object2)>. This tests the model's ability to generalise its understanding of spatial relationships along the relevant dimension, as opposed to memorising fixed positions.

Cardinality. The cardinality task involves queries of the form "There are [number] [shape/colour] objects." (e.g., "There are 3 circle objects"). The train and test sets are split by the <number, shape/colour> pair occurring in the input image/caption. For instance, instances containing 2 circles and 3 triangles could occur in the training distribution, while instances containing 3 circles occur only in the OOD test distribution.
Quantifiers. This task involves queries of the form "[quantifier] the [attr1] objects are [attr2] objects.", where the quantifiers include all, some, only and their negated counterparts not all, none, and not only. The train-test split is performed based on the pair <a, b>, which varies based on the quantifier, as given in Table 1. For instance, for the quantifier not all, a is the number of objects which fulfil both [attr1] and [attr2], and b is the number of objects which fulfil [attr1] but not [attr2]. In the example in Figure 3, the pair is <2, 3>.
Numerical comparison. The numerical comparison task involves queries of the form "There are [more/fewer] [attr1] objects than [attr2] objects." (e.g., "There are more circles than squares."). The train and test sets are split by the pair <a, b>, where a is the number of [attr1] objects and b is the number of [attr2] objects. Instances for which |a − b| is smaller than some threshold are assigned to the training distribution, and the remaining pairs are assigned to the testing distribution. Success in this task is evidence of generalisation based on an implicit understanding of numeral scales and the transitivity of comparison, i.e., a > b and b > c implies a > c.
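The pair-based split for the numerical comparison task can be sketched as follows. The threshold of 3 and the maximum count of 9 are taken from the main experiment reported later; the exclusion of equal counts is an assumption.

```python
from itertools import product

MAX_COUNT = 9   # maximum value of a and b in the main experiment
THRESHOLD = 3   # |a - b| <= 3 goes to the training distribution

train_pairs, ood_pairs = [], []
for a, b in product(range(1, MAX_COUNT + 1), repeat=2):
    if a == b:
        continue  # assumed: queries compare sets of unequal size
    (train_pairs if abs(a - b) <= THRESHOLD else ood_pairs).append((a, b))

# A scene/query generator would then draw <a, b> from the relevant list,
# place a objects with attr1 and b objects with attr2, and add distractors.
print(len(train_pairs), len(ood_pairs))  # -> 42 30
```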
Cross-Modal Transfer
Humans can often reason about relationships between objects regardless of whether they are described in language or presented as an image. If pretrained V+L models have learnt a truly multimodal representation, they should similarly be able to learn a reasoning task with input from one modality and perform inference on input from the other modality with no extra training. We term this ability zero-shot cross-modal transfer; it may have significant implications for sample efficiency. Since annotated examples comprising diverse real-world images may be more difficult to collect than written descriptions, it may be desirable to train multimodal models on only textual input before using them to process visual input. Furthermore, it is hoped that transfer from the visual modality can improve spatial reasoning ability even when the scene is represented as text instead of an image.
We draw an analogy to the concept of zero-shot cross-lingual transfer in multilingual NLP, which is often used to evaluate a multilingual model's ability to generalise to languages unseen during fine-tuning (Conneau et al., 2018). Similar to cross-modal transfer, a model is first pretrained on multiple languages before being fine-tuned on task data from a single language. The model is then evaluated on examples from languages unseen during fine-tuning. Just as an ideal multilingual model is expected to perform well in this setting, we expect a perfectly multimodal model to perform just as well on the "unseen" modality.
Encoding the scene as both an image and a caption allows models to be trained and evaluated in a combination of three settings: (i) image-only input, (ii) caption-only input, and (iii) both image and caption inputs. We note that the query is presented as part of the text input in each setting. In the caption-only setting, a blank white image is presented to the models. TRAVLR is, to our knowledge, the first dataset that supports the evaluation of cross-modal transfer.
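These three settings can be realised with a small input-assembly helper, sketched below. The [SEP] join and the blank white image follow the description above, while the image size and field names are assumptions.

```python
from PIL import Image

def build_input(example, setting):
    """Assemble (image, text) for the three TRAVLR evaluation settings.

    The query is always part of the text input; in the caption-only
    setting a blank white image stands in for the scene.
    """
    if setting == "image":
        return example["image"], example["query"]
    text = example["caption"] + " [SEP] " + example["query"]
    if setting == "caption":
        return Image.new("RGB", (224, 224), "white"), text  # size assumed
    if setting == "image+caption":
        return example["image"], text
    raise ValueError(f"unknown setting: {setting}")

example = {"image": Image.new("RGB", (224, 224)),
           "caption": "There is a red circle at A 1.",
           "query": "The red circle is right of the blue triangle"}
img, txt = build_input(example, "caption")
```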
Generating TRAVLR
We generate the dataset for each task separately. To generate each example, we select objects and assign their attributes, with values sampled uniformly at random from the predefined distributions. The training and OOD test distributions are determined prior to the generation of both the input scene and queries, based on the pairs explained above. We thus ensure that the pairs relevant to each task do not overlap between the train and OOD test sets, and also that no query in the OOD test set can be found in the training set. Distractor objects irrelevant to the intended query are finally added to the scene.
For example, to generate queries for the spatial relationship task, we select two objects and their positions based on the training/testing distributions, before adding a distractor object to the scene. We then randomly select a relationship (e.g., either left or right for a horizontal relationship) for the query, which corresponds to either a true or false answer.
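For the spatiality task, truth-value assignment follows directly from the grid convention stated in the caption preamble (columns A to F run left to right; rows 1 to 6 run top to bottom). The sketch below encodes that convention; the function names are illustrative.

```python
def column(pos):
    """Column letter of a grid position, e.g. "C 4" -> "C"."""
    return pos.split()[0]

def row(pos):
    """Row digit of a grid position, e.g. "C 4" -> "4"."""
    return pos.split()[1]

# With single-character columns and rows, string comparison suffices.
RELATIONS = {
    "to the left of":  lambda p1, p2: column(p1) < column(p2),
    "to the right of": lambda p1, p2: column(p1) > column(p2),
    "above":           lambda p1, p2: row(p1) < row(p2),
    "below":           lambda p1, p2: row(p1) > row(p2),
}

def label(obj1_pos, obj2_pos, relation):
    """Truth value of "The [object1] is [relation] the [object2]"."""
    return RELATIONS[relation](obj1_pos, obj2_pos)

print(label("B 3", "E 3", "to the left of"))  # -> True
```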
We also generate metadata for each example, comprising abstract representations of the input scene, the caption, and the query, as well as crucial information about each example (e.g., the pairs). The spatiality task's training set comprises 32k examples, while the training sets of the other tasks comprise 8k examples each, owing to differences in the amount of data required for convergence.
In- and out-of-distribution test sets. Prior work on generalisation evaluation recommended the use of in- and out-of-distribution (henceforth InD and OOD, respectively) test sets (Csordás et al., 2021). Hence, validation and InD test sets are randomly sampled from the training distribution (10k examples each), in addition to the OOD test set described in section 3.1 (20k examples). We evaluate four pretrained V+L models: VisualBERT, UNITER, LXMERT, and ALBEF. We also use two text-only models, RoBERTa (Liu et al., 2019) and BERT (Devlin et al., 2019), as baselines in the caption-only setting.
Setting. We train models on each task for 80 epochs. Following Csordás et al. (2021)'s finding that early stopping may lead to underestimation of model performance, we do not use early stopping. Hyperparameters are fixed at a batch size of 256 and a learning rate of 2e-5 for ALBEF, based on the recommended parameters for fine-tuning on SNLI-VE (Xie et al., 2019), and a batch size of 32 and a learning rate of 5e-6 for VisualBERT, UNITER, and LXMERT. As the hyperparameters recommended for fine-tuning VisualBERT, UNITER, and LXMERT on VQA did not lead to convergence on some tasks, we adjusted learning rates downwards, which led to convergence or better performance on our dataset.
Within-Modality Results
We first discuss the results of within-modality testing, i.e., testing the model on the modality it was trained on (Table 3).
Spatiality. In the image-only setting, UNITER achieves the highest F1 score, followed by LXMERT, VisualBERT, and finally ALBEF. VisualBERT requires at least 32k examples to achieve above-random performance, while ALBEF completely fails to learn the task (Figure 4a). We note that 32k is a rather significant number of examples given the task's simplicity: there are only 36 possible positions for each object. For comparison, the full VQA dataset, which aims to cover all possible tasks, consists of only 443k training examples. A potential explanation for the superior performance of UNITER and LXMERT is that, unlike the other models, spatial coordinates from the bounding boxes are explicitly encoded as features in the input to their image encoders, which they can directly exploit. This option is unavailable to ALBEF, which takes the image as input directly instead of relying on a separate object detector. VisualBERT does not make use of these spatial coordinates, which may have impaired its ability to relate the positions of objects. Frank et al. (2021) posited this limitation of VisualBERT as the reason for its poor performance on tasks such as RefCOCO+ and Masked Region Classification, but the impact of this limitation on spatial reasoning has hitherto not been directly investigated.
Although LXMERT and UNITER achieve similar F1 scores, UNITER learns the task with substantially less data (≤4k examples) than all the other models, while LXMERT converges in fewer epochs. For instance, LXMERT requires only 4 epochs of training on the 32k dataset to exceed 99% accuracy on the validation set, while UNITER requires 39 epochs. A possible reason for the faster convergence of LXMERT on the spatiality task is that it was additionally pretrained on a VQA task, unlike all the other models. We can conclude that LXMERT is more efficient in terms of training steps, while UNITER is more sample-efficient. Johnson et al. (2017) previously found CNN and LSTM models to have trouble learning spatial relationships, often memorising absolute object positions. Our results indicate that Transformer-based models likely face similar issues.
In the caption-only setting, only UNITER and ALBEF manage to achieve non-random performance. Only ALBEF achieves performance close to that of RoBERTa, which achieves an F1 score of 99.46 on the OOD test set with 32k examples, but it requires 16k examples to achieve above-random performance (Figure 4b). BERT achieves an F1 score of 89.47 on the OOD test set, outperforming all models other than ALBEF. Nevertheless, BERT requires at least 8k examples to achieve above-random performance, corroborating findings by Lin and Su (2021) that BERT requires substantial data to learn spatial relationships.

While ALBEF achieves similar results in the caption-only and image+caption settings, UNITER's performance in the image+caption setting is significantly better than its performance in the caption-only setting (Figure 4c). This may indicate a benefit to training UNITER on both modalities for the spatiality task.

Cardinality. The cardinality task requires less data than the spatiality task, and all models are able to achieve non-random performance in the settings where they were trained with 8k examples. In the image-only setting, LXMERT is the best-performing model, followed by VisualBERT, UNITER, and finally ALBEF. Furthermore, performance on the OOD test set is poorer than on the InD test set for all models except ALBEF. Our results corroborate Parcalabescu et al. (2020)'s finding that current V+L models have difficulty counting objects in images.
All models are generally able to achieve close to a perfect F1 score in the caption-only and image+caption settings, with the exception of LXMERT. It is notable that VisualBERT is the best-performing model in the caption-only and image+caption settings, in contrast to its poor performance on the spatiality task. The performance of VisualBERT, UNITER, and ALBEF is comparable to that of RoBERTa (OOD: 99.82; InD: 99.93) and BERT (OOD: 98.93; InD: 98.98). These results corroborate findings by Wallace et al. (2019) that numeracy is encoded in the embeddings of language-only models. We hypothesise that the poor performance of LXMERT compared to the other models is a result of its not being initialised with BERT parameters prior to pretraining.
Quantifiers. All models perform well on the quantifiers task in most settings, with some exceptions. In the image-only setting, all models exceed an F1 score of 90, except for ALBEF, which achieves an F1 score of 60.45. Performance in the caption-only and image+caption settings is similar, with the exception of LXMERT, and the best-performing model is ALBEF, as in the numerical comparison task. Both RoBERTa and BERT achieve an F1 score of 100 on both the InD and OOD datasets. Good performance on the OOD dataset indicates that models are not memorising specific numbers of objects and instead use more general strategies for understanding quantifiers. This parallels psycholinguistic findings that comprehension of (non-exact) quantifiers does not correlate with counting skills in human children (Dolscheid et al., 2015).
Numerical comparison. Recall that the InD and OOD test sets for the comparison task are split based on the pair <a, b>, where a is the number of objects with the first attribute in the query and b is the number of objects with the second attribute. In the main experiment, the value of |a − b| in the InD test set is between 1 and 3, inclusive, and the maximum value of a and b is 9. In contrast to the simpler cardinality task, there is a significant difference between InD and OOD performance on the numerical comparison task across most settings, although the models still manage to achieve above-random performance on the OOD test set.
In the image-only setting, performance on the InD test set is above 80, with the exception of ALBEF, which does not achieve above-random performance. The performance of the other models on the OOD test set is significantly lower, between 55 and 65, indicating that all models have only a limited ability to generalise beyond the training distribution. In the caption-only setting, all models achieve close to an F1 score of 100 on the InD test set, but do not generalise well to the OOD test set. Only ALBEF maintains a close-to-perfect F1 score on the OOD test set, while VisualBERT (F1 = 89.55) and UNITER (F1 = 61.90) show a significant drop in performance, and LXMERT's performance is no better than random. Performance in the image+caption setting is similar to the caption-only setting, although OOD performance is poorer than in the caption-only setting for all models, with the exception of LXMERT. Notably, the performance of ALBEF resembles that of RoBERTa, which achieves similar results on the OOD and InD test sets (OOD: 99.94; InD: 100), while VisualBERT and UNITER are closer to BERT, which performs significantly more poorly on the OOD test set (OOD: 68.47; InD: 99.60).
Our results suggest that models are able to generalise to unseen number pairs by constructing an implicit numeral scale, but only to a limited extent. Furthermore, unlike the cardinality and quantifiers tasks, the numerical comparison task is able to differentiate the models' understanding of the numeral scale. ALBEF performs the best on the OOD test set, followed by VisualBERT, UNITER and finally, LXMERT. As explained earlier, a possible explanation for the poorer performance of LXMERT is that it was not initialised with BERT parameters prior to pretraining.
Adding/Dropping Modalities
We now discuss the effects of adding a modality to, or dropping one from, the input presented during testing. Understood together with the observation of a clear similarity between the results in the caption-only and image+caption settings across all models and reasoning tasks, these results reveal a bias towards the textual modality across all models. Overcoming this bias is a potential step towards modality-agnostic representations.
First, models trained in the image+caption setting at times exhibit minor drops in performance when tested in the caption-only setting. In contrast, models trained in the image+caption setting perform poorly in the image-only setting in most cases, with random or close-to-random performance. The only exception is UNITER on the spatiality task, which achieves slightly above-random performance when the caption is dropped during testing. This indicates a clear bias towards the textual input and a tendency to rely on the caption across all models.
Second, models trained only on captions perform similarly when tested in the image+caption setting. In contrast, testing a model trained only on images in the image+caption setting results in a significant performance drop. This is true even for the quantifiers task, which was shown to be the easiest for all models. In most cases, the F1 score is either close to or below random chance, although ALBEF and UNITER differ from VisualBERT and LXMERT in managing to maintain above-random performance when the caption is added to the input during testing.
Cross-Modal Transfer
Despite performing well in the within-modality settings, none of the models succeed at zero-shot cross-modal transfer to an unseen modality (i.e., from the image-only to the caption-only setting, and vice versa). Our results suggest that existing V+L representation learning methods have not succeeded in producing truly multimodal, or modality-agnostic, representations.
Discussion
Asymmetry between image and text modalities. Thus far, we have seen that performance in the caption-only setting resembles performance in the image+caption setting across all tasks. Models may be distracted by the caption to the extent that they perform more poorly in the image+caption setting than in the image-only setting. Testing a model fine-tuned on both modalities on only one modality reveals that models often rely heavily on the caption, ignoring the image completely, to the extent that they are unable to answer questions when the caption is removed. The overall finding is hence a bias towards the textual modality. This corroborates previous findings by Cao et al. (2020) that the textual modality plays a more important role than the image for both single- and dual-stream models. Furthermore, we find that V+L models perform worse than unimodal RoBERTa on various caption tasks, similar to Iki and Aizawa (2021), who show that V+L pretraining causes poorer performance on NLU tasks.
Comparing tasks. The spatiality task is the hardest, requiring at least 32k examples in some cases, as opposed to the 8k examples required for the other tasks. Focusing on the image-only setting, the easiest task is the quantifiers task (models achieve F1 scores above 90), followed by cardinality (models achieve F1 scores below 90), and finally numerical comparison (models achieve F1 scores below 70). In the caption-only and image+caption settings, all models apart from LXMERT achieve a close-to-perfect F1 score on the cardinality and quantifiers tasks, while all models except ALBEF suffer a performance degradation on the OOD dataset.
Our results thus suggest that while most models may succeed on the quantifiers task, they succeed at counting only to a limited extent. Furthermore, while success on the cardinality task indicates an understanding of the meaning of numbers in absolute terms, the numerical comparison task is able to more clearly differentiate the models in terms of their understanding of individual numbers' relative positions on a numeral scale.
Comparing models. In general, the performance of UNITER, VisualBERT, and ALBEF in the caption-only and image+caption settings is better than their performance in the image-only setting. In contrast, LXMERT appears to perform better in the image-only setting than in the caption-only setting. Although UNITER achieves slightly higher results than LXMERT on the spatiality and quantifiers tasks, LXMERT significantly outperforms UNITER on the other tasks, likely due to its having been pretrained on a VQA task.
Our findings corroborate prior findings that differences between models cannot be clearly attributed to differences in model architecture (i.e., whether they are single- or dual-stream). Since LXMERT and ALBEF are both dual-stream models, our results suggest that the pretraining method has a significant effect on a model's performance on a downstream task. The performance of ALBEF in image-only settings is the poorest among all models across all tasks. We hypothesise that the pretrained object detector used by the other models, but not ALBEF, confers an advantage in the image-only setting because the embeddings presented to the models already encode the objects directly. We further note that while ALBEF may succeed at aligning phrases in the text to a portion of the image, all our tasks involving numerical reasoning include noun phrases which refer to multiple, spatially non-contiguous objects in the image.
UNITER is the only model which succeeds on all tasks in all settings, and it seems to be less susceptible to performance degradation when modalities are added to or removed from the input during testing. These results suggest that some component of its architecture or pretraining procedure makes it less biased towards one modality.
Conclusion
While pretrained multilingual models have been shown to demonstrate zero-shot cross-lingual transfer abilities, it is unclear whether visio-linguistic models are similarly able to perform zero-shot cross-modal transfer of downstream task abilities to a modality unseen during training. We hence contribute a new dataset, TRAVLR, inspired by the word/picture sentence verification task from psycholinguistics. In contrast to existing V+L reasoning datasets that only encode the scene as an image, TRAVLR enables the evaluation of cross-modal transfer ability by encoding the scene in both the visual and textual modalities, allowing either to be dropped during training or testing.
TRAVLR allows us to evaluate specific visio-linguistic reasoning skills in isolation instead of at an aggregate level, enabling finer-grained diagnosis of a model's deficiencies. We found some models to learn better from one modality than the other, and some task-setting combinations to be more challenging across the board. Our results also provide useful estimates of the amount of data required for V+L models to acquire various reasoning skills, indicating that existing models may require unreasonably large amounts of data and training steps to learn certain types of visio-linguistic reasoning. Improving the sample efficiency and training time of V+L models in this regard is a potential direction for future research.
We further found all models to suffer from a bias towards the textual modality and to be unable to perform zero-shot cross-modal transfer of reasoning capabilities despite, in some cases, achieving close-to-perfect performance on a test set encoded in the same modality. Developing new visio-linguistic representations that are capable of zero-shot cross-modal transfer is another direction for future research, and we pose this as a new challenge for multimodal modelling.
"year": 2021,
"sha1": "bfd1752963697520ceb484a8b8c65b9dba99ca96",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bfd1752963697520ceb484a8b8c65b9dba99ca96",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Transcranial magnetic stimulation as a tool to understand genetic conditions associated with epilepsy
Abstract Advances in genetics may enable a deeper understanding of disease mechanisms and promote a shift to more personalised medicine in the epilepsies. At present, understanding of the consequences of genetic variants relies mainly on preclinical functional work; tools for acquiring similar data from the living human brain are needed. Transcranial magnetic stimulation (TMS), in particular paired-pulse TMS protocols, which depend on the function of cortical GABAergic interneuron networks, has the potential to become such a tool. For this report, we identified and reviewed 23 publications on TMS studies of cortical excitability and inhibition in 15 different genes or conditions relevant to epilepsy. Reduced short-interval intracortical inhibition (SICI) and reduced cortical silent period (CSP) duration were the most commonly reported findings, suggesting abnormal GABAAergic (SICI) or GABABergic (CSP) signalling. For several conditions, these findings are plausible based on established evidence of involvement of the GABAergic system; for some others, they may inform future research around such mechanisms. Challenges of TMS include an incomplete understanding of the neural underpinnings of the measures used: hypotheses and analyses should be based on existing clinical and preclinical data. Further pitfalls include gathering sufficient numbers of participants and the effect of confounding factors, especially medications. TMS-EEG is a unique perturbational technique for studying the intrinsic properties of the cortex with excellent temporal resolution; while it has the potential to provide further information of use in interpreting the effects of genetic variants, the links between its measures and neurophysiology are currently less established. Despite these challenges, TMS is a tool with potential for elucidating the system-level in vivo functional consequences of genetic variants in people carrying genetic changes of interest, providing unique insights.
| INTRODUCTION
The epilepsies are a heterogeneous group of conditions characterized by predisposition to recurrent seizures 1 and involve alterations in the excitation-inhibition balance of cerebral networks. 2 In over a third of patients, seizures persist despite appropriate medical treatment. 3 In this setting, a precision medicine approach, where a patient would be offered a treatment most likely to be effective in their particular condition, would be especially valuable. 4 The rapidly increasing knowledge of the genetics of epilepsies has contributed to the understanding of their pathophysiological mechanisms and improved diagnostic ability at the individual level. 5 To facilitate the translation of genetics to precision medicine, further study of the functional consequences of genetic variants and a better understanding of interindividual variation in phenotypes are required. 4 Whilst traditional preclinical models are powerful tools to reveal basic underlying mechanisms, they have inherent limitations in the context of in vivo whole-organism processes in humans, and there is a pressing need for tools for interpreting disease mechanisms at the level of the individual patient. 6 Transcranial magnetic stimulation (TMS) is a non-invasive means of studying cortical excitability employing electromagnetic induction (Figure 1). 7 Paired-pulse measures (Figure 2), which reflect the function of GABA-dependent cortical interneuron circuits, 7,8 could be especially relevant in the context of some genetic epilepsies, in which abnormal interneuron function is implicated. 9 A number of studies have attempted to identify TMS-based biomarkers for diagnosis or prediction of treatment response in the epilepsies. 10,11 In general, these studies have involved relatively heterogeneous patient groups, which may have contributed to the paucity of clinically adopted markers. 10,11 Studying TMS in genetically defined patient groups may decrease inter-individual variability and increase power for detecting clinically useful signatures of pathophysiological processes, yielding biomarkers for diagnosis and treatment of these conditions. This approach has already been employed for several genetic conditions of relevance to epilepsy, which we now review.
| TMS: PRINCIPLES AND PARAMETERS
Transcranial magnetic stimulation involves using a time-varying magnetic field generated over the head to induce an electric field reaching the cerebral cortex (Figure 1). When sufficiently strong, the electric field depolarises cortical neurons. TMS targets neural populations depending on their axonal orientation with respect to the direction of the induced current, and on stimulation intensity. The neural signal is propagated transsynaptically to anatomically connected areas. A number of factors affect cortical excitability as measured by TMS. In the motor system, which has been most extensively studied so far, GABAergic inhibitory interneurons are thought to be involved in the modulation of I-waves, which propagate down the corticospinal tract following stimulation of the motor cortex. 12 Thus, the function of GABAergic neurons themselves has an impact on responses to TMS, and in some conditions, GABAergic neuronal function is compromised (see below).

Key Points
• To translate epilepsy genetics to precision medicine, tools for interpreting disease mechanisms at the level of the individual patient are needed.
• TMS-EMG paired-pulse measures, which reflect the function of GABAergic cortical interneuron circuits, may be particularly relevant for some genetic epilepsies.
• For several reviewed conditions, TMS-EMG provides evidence of altered inhibition, in keeping with hypotheses of the effects of the conditions on the GABA system.
• More research is required to expand and corroborate these findings, including the role of TMS-EEG.

Figure 1: Setup for TMS-EMG. The magnetic pulse is delivered through the figure-of-eight coil (A), which is held over the primary motor cortex. EMG is recorded from contralateral intrinsic hand muscles using surface electrodes (B). A neuronavigation system consisting of an infrared camera (C) and coil and subject trackers (D) is used to visualise the position of the coil with respect to the brain (E). The evoked EMG trace (F) is also displayed on a computer screen.
When coupled with electromyography (EMG), the TMS pulse is applied over the primary motor cortex, typically the hand area (Figure 1). As a basic measure of cortical excitability, the resting motor threshold (rMT) is defined as the minimum intensity required to produce motor evoked potentials (MEPs) in at least five out of ten trials. 7 Stimulus intensities in other paradigms are often determined with respect to the rMT (sub- or suprathreshold).
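Procedurally, this relative-frequency definition can be expressed as follows. The MEP amplitude criterion (50 µV peak-to-peak is a common convention) and the simple descending search are assumptions here rather than details given in the text.

```python
import random

def is_mep(amplitude_uv, criterion_uv=50.0):
    """Count a trial as a response if the MEP exceeds the criterion
    (50 uV peak-to-peak is a common convention, assumed here)."""
    return amplitude_uv >= criterion_uv

def resting_motor_threshold(stimulate, start=60, step=1, n_trials=10):
    """Estimate the rMT as the lowest intensity (% maximum stimulator
    output) evoking MEPs in at least 5 of 10 trials, by descending from
    a suprathreshold starting intensity. `stimulate(intensity)` is a
    placeholder returning one MEP amplitude in uV.
    """
    intensity = start  # assumed to start above threshold
    while intensity > 0:
        hits = sum(is_mep(stimulate(intensity)) for _ in range(n_trials))
        if hits < 5:
            return intensity + step  # last intensity meeting the criterion
        intensity -= step
    return step

# Toy response model standing in for a real stimulator/EMG chain:
rmt = resting_motor_threshold(lambda i: random.gauss((i - 45) * 20, 30))
print(rmt)
```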
In paired-pulse TMS (Figure 2), a conditioning stimulus (CS) is used to modulate the MEP evoked by the subsequent test stimulus (TS). 7 The effect of the CS on the MEP evoked by the test stimulus is influenced by the inter-stimulus interval (ISI). 7 Other factors may also have an influence. The effects are generally expressed as the ratio between conditioned and unconditioned MEP amplitudes. 7 Short-interval intracortical inhibition (SICI) is elicited with a subthreshold CS and ISIs of 1-5 ms 13 ; it is considered to reflect GABA(A)ergic inhibition. 8 Intracortical facilitation (ICF; ISI of 10-15 ms) is thought to reflect a net effect of increased facilitation over GABA(A)-mediated inhibition. 8,13 In contrast, short-interval intracortical facilitation (SICF) is elicited when a near-threshold CS follows the TS at intervals of around 1.5 ms, corresponding to those observed between I-waves. 14 Long-interval intracortical inhibition (LICI) is elicited when a suprathreshold CS precedes the TS by 60-200 ms and is associated with GABA(B)ergic inhibition. 8,15 The duration of the cortical silent period (CSP), elicited when a single suprathreshold pulse is targeted at the cortical hotspot of a contracted muscle, may also be used as a measure of GABA(B)ergic inhibition. 7 These paradigms are discussed in more detail in Material S1.
EEG may be used to measure oscillatory activity of the cortex, which reflects excitatory and inhibitory postsynaptic potentials generated by populations of cortical neurons. 16 A TMS pulse leads to a non-physiological perturbation (simultaneous depolarisation) affecting a large population of cortical neurons, resetting their endogenous oscillatory activity; EEG may be used to study the evoked oscillatory response (Figure 3). 16 The frequency and waveform of this TMS-EEG response are reproducible and characteristic of the stimulated area. 17 However, it is important to note that particularly the later components of the TEP (>100 ms after the TMS pulse) may be contaminated by brain activity resulting from the audible noise and scalp sensation that TMS produces, although these can be reduced with appropriate adjustments to technique. 18,19 Studying the propagation of the TEP may provide an additional measure of cortical function, specifically the effective connectivity within cortico-cortical and cortico-subcortical networks. 16 See Material S1 for more details.
| Drug effects on TMS parameters
A summary of the effects of major classes of antiepileptic drugs in healthy controls is presented in Table S1. In people with epilepsy, introduction of lamotrigine, a sodium channel blocker, was likewise shown to increase rMT, whereas no effect was seen on CSP. 20
| LITERATURE REVIEW
A literature search was conducted using PubMed for "transcranial magnetic stimulation" AND ("genetics" OR "genes" OR "syndrome" OR "gene") NOT rTMS. See Material S1 for more details. The experimental details and findings are summarised in Table 1. The conditions are grouped below by presumed main mechanisms implicated, where possible.
| Synaptic neurotransmission
Conditions directly affecting the release of GABA may be hypothesized to affect SICI, LICI or CSP; conditions affecting the function of postsynaptic GABA(A) receptors would be expected to alter SICI. Conditions affecting the release of glutamate may be hypothesized to alter ICF or SICF.
| GABRG2
The GABRG2 gene encodes the gamma-2 subunit of the GABA(A) receptor. GABRG2 mutations have been identified in epilepsy syndromes of varying severity. 21 The R43Q mutation is associated with childhood absence epilepsy and febrile seizures. 22 A mouse model showed reduced expression of gamma-2 subunits and decreased inhibitory postsynaptic potentials in cortical neurons. 23 Fourteen people with the mutation, half with a previous history of epilepsy or febrile seizures, were studied. 24 Compared to controls, SICI was reduced, in keeping with the hypothesis of altered GABA(A)ergic inhibition, 24 suggesting that genetic impairment of GABA(A)ergic neurotransmission may be detected using TMS. 24 ICF was also increased, which may reflect the interrelatedness of the two measures as a balance between net inhibition and excitation.
| NMDARs – GRIN1, GRIN2B
NMDARs are ionotropic glutamate receptors. They consist of two glycine/D-serine-binding GluN1 subunits, encoded by GRIN1, and two glutamate-binding regulatory subunits. The four types of glutamate-binding subunits, GluN2A-2D, are encoded by the genes GRIN2A-D. 25 Mutations in GRIN1, GRIN2A and GRIN2B are implicated in epilepsy and other neurological and neurodevelopmental conditions. 25 Mori and others studied cortical excitability in 77 healthy participants with specific single nucleotide polymorphisms (SNPs) in the GRIN1 and GRIN2B genes. 26 The SNPs were rs4880213 and rs6293 for GRIN1 and rs7301328, rs3764028, and rs1805247 for GRIN2B. No association with epilepsy has been reported for any of these SNPs; rs1805247 was reported to be enriched in individuals with bipolar disorder. 27 Participants underwent paired-pulse TMS and intermittent theta burst stimulation (iTBS) to probe long-term potentiation (LTP)-like plasticity. For rs4880213, compared to participants homozygous for the C variant or heterozygotes, participants homozygous for the T variant (allelic frequency 11.5%) had less SICI. 26 This could imply a skewed balance between GABA(A)ergic inhibition and glutamatergic facilitation. 26 For GRIN2B rs1805247, compared to participants homozygous for the A variant, heterozygotes (AG; allelic frequency 9.7%) had significantly greater ICF at 15 ms, in keeping with enhanced NMDAR function. 26 Furthermore, iTBS led to greater MEP amplitudes in those with genotype AG, suggesting enhanced NMDAR-dependent plasticity. 26 Interpretation of findings for both SNPs is limited in the absence of functional evidence for pathogenicity.

FIGURE 3 Butterfly plot of a TMS-evoked potential (TEP) from stimulation of left premotor cortex. Channels are referenced to average. Channel FCz is highlighted in blue. Components are designated by their polarity and latency. Compared to earlier components, N100 and P180 are less well defined. The scalp maps show the potential distribution and power for the latencies associated with these components.
| STX1B
STX1B encodes syntaxin-1B, a part of the SNARE complex, which mediates synaptic vesicle release from the presynaptic membrane. 28 Studies on neurons from a Stx1b null mutant mouse suggested a crucial role for syntaxin-1B in spontaneous and evoked fast synaptic vesicle exocytosis in glutamatergic and GABAergic synapses. 29 Loss-of-function mutations in STX1B have been identified in patients with fever-associated epilepsies of variable severity. 28 Nine carriers of pathogenic STX1B mutations showed no differences from controls in any paired-pulse measures. 30 The authors concluded that the results support normal GABA(A)ergic and glutamatergic excitability in asymptomatic carriers of STX1B mutations, perhaps influenced by compensatory mechanisms during maturation. 30
| TRPV1
Transient receptor potential vanilloid 1 (TRPV1) channels are non-selective cation channels which regulate the release of glutamate, and are implicated in hippocampal LTP and long-term depression (LTD). 31 Increased expression of TRPV1 has been demonstrated in glutamatergic and GABAergic neurons in hippocampal and temporal cortex specimens from patients with medial temporal lobe epilepsy, and it has been proposed as a potential therapeutic target in epilepsy. 32 Mori and others studied cortical excitability in participants with one of two particular SNPs in the TRPV1 gene: rs222749, which is not linked to substantial changes in the expression or properties of the TRPV1 channel, and rs222747, which is associated with enhanced functionality of the channel. 31 No differences in measures emerged between participants according to rs222749 genotype. 31 For rs222747, SICF at 1.5 ms and 2.7 ms was significantly greater in the GG group compared to wild-type participants or heterozygotes. 31 The results were interpreted as evidence that TRPV1 regulates glutamatergic synaptic transmission in humans. 31
| Neuronal membrane excitability
Conditions affecting the membrane excitability of cortical excitatory interneurons and corticospinal neurons might be expected to influence cortical excitability as measured by rMT, for example. However, conditions affecting other neuronal populations may have different effects.
| ATP1A3
Alternating hemiplegia of childhood (AHC) is a rare neurodevelopmental condition characterized by paroxysmal episodes of hemiplegia and other neurological features, as well as fixed clinical features. Over half of patients have epilepsy; other common features include intellectual disability, ataxia and movement disorders. 33 Eighty-five percent of patients with AHC have a mutation in the ATP1A3 gene, 34 which encodes the α3 subunit of the Na+/K+-ATPase, the main subunit expressed in neurons. 34 Altered function of the Na+/K+-ATPase may lead to changes in membrane excitability and affect other integral processes dependent on ionic gradients. 34 In seven patients with AHC tested between attacks, rMT was significantly lower compared to both healthy controls and people with epilepsy. 36 Patients showed unusual intra-session variability in MEP amplitudes; during attacks, no response was seen. 36 The results were thought to suggest increased cortical excitability between attacks and reduced cortical excitability during attacks, 36 in keeping with findings from a mouse model. 35
| SCN1A
SCN1A encodes the alpha subunit of the type 1 voltage-gated sodium channel (NaV1.1). 37 NaV1.1 is highly expressed in GABAergic interneurons, where loss of function reduces excitability and impairs phasic GABA release. 38 Variants in SCN1A are associated with a wide range of epilepsy phenotypes. 37 On the severe end of the spectrum, Dravet syndrome (DS) is an epileptic encephalopathy with onset in the first year of life. 39 Subsequently, developmental delay becomes evident and multiple seizure types, including myoclonus, occur, often with a refractory course. 39 Most cases are associated with de novo loss-of-function mutations. 40 In a TMS study of five adults with DS, SICI was absent; the difference compared to controls was significant. 41 Lack of SICI in DS patients was thought to reflect low sensitivity of inhibitory networks to subthreshold stimuli, 41 in keeping with a mouse model of DS, in which the threshold for action potential generation in inhibitory interneurons was increased. 42 An SCN1A splice site polymorphism (rs3812718, G>A) is associated with febrile seizures. 43 Functionally, this variant has been shown to lead to relative overexpression of the "neonatal" copy of exon 5 in the gene (5N), compared to the "adult" exon (5A). 43 Preclinical work suggests that the relative expression of 5N vs 5A may affect neurophysiological properties of NaV1.1, with the 5N type displaying higher sensitivity to changing temperatures 44 and faster recovery from inactivation. 44,45 TMS was studied in 49 healthy people homozygous for the A allele at rs3812718, and compared with 43 people with genotype GG. 46 At baseline, there were no differences in any measures between the groups. Participants were randomised to receive a single dose of either carbamazepine (CBZ) 400 mg or placebo, with TMS performed 5 hours afterwards. Compared to genotype AA, genotype GG was associated with a greater increase in CSP duration following intake of CBZ. The authors concluded that effects on GABAergic interneuron excitability may explain the differing response to CBZ with the AA genotype compared to wild type. 47 Taken together, reduction of GABA(A)ergic inhibition was demonstrated in Dravet syndrome, in keeping with reduced NaV1.1 function in GABAergic interneurons in this condition. 41 In contrast, baseline measurements did not show any differences in asymptomatic carriers of an SCN1A variant linked with risk of febrile seizures. 46
| Repeat expansions
Repeat expansions are an increasingly recognised cause of disease in genetic neurological disorders. The effects on cortical excitability are unlikely to be uniform, but rather may reflect different pathological mechanisms in different conditions.
| CSTB
Unverricht-Lundborg disease (EPM1) is an autosomal recessive progressive myoclonic epilepsy caused by mutations in CSTB, which encodes cystatin B, a cysteine protease inhibitor. 48,49 The most common pathogenic mutation involves an unstable expansion of a 12-nucleotide repeat in the promoter region of the gene, leading to reduced gene expression. 49 Clinical features include stimulus-sensitive myoclonus and generalised tonic-clonic seizures, with onset between six and 16 years of age. 48 Disease progression generally occurs within the next 5-10 years, with increased myoclonus and development of ataxia and mild cognitive decline. 48 On neuroimaging, cortical thinning, also involving the motor cortex, has been reported. 50 There is evidence for changes in GABAergic signalling in EPM1. Using vesicular GABA transporter (VGAT) immunohistochemistry in knock-out mice, reductions in the density of cortical GABA terminals and in cortical thickness were demonstrated. 51 Reduced paired-pulse depression also suggested impaired GABA(A)ergic and GABA(B)ergic inhibition. 51 Together, these findings were thought to be in keeping with loss of GABAergic interneurons. 51 In a series of TMS studies of patients with biallelic CSTB expansion mutations, rMT was significantly higher in patients than in controls, thought to be influenced by patients' AED polytherapy and/or increased scalp-to-cortex distance. 52,53 CSP was prolonged in patients compared to controls, 54 implying enhanced GABA(B)ergic inhibition; this was interpreted as possibly reflecting a compensatory increase in inhibition to counteract hyperexcitability. 54 In a multivariate model, CSP duration independently predicted severity of myoclonus. 53 The size of the longer CSTB expansion correlated with both rMT and CSP. 54 In a separate study of paired-pulse TMS in ten patients with EPM1, SICI was reduced compared to controls. 55 TMS-EEG over the motor area was performed in seven individuals with biallelic CSTB expansion mutations. 56 Compared to healthy controls, despite higher rMT, patients had a higher P30 amplitude. 56 The amplitudes of N100 and P180 were lower in patients compared to controls. 56 Compared to controls, 25-100 ms following TMS, patients showed lower power in the alpha and beta bands over both M1s and lower gamma power over the vertex; inter-trial coherence in the alpha and beta bands was also reduced. 56 TMS-EMG results were also reported for five compound heterozygous patients, all of whom had a monoallelic repeat expansion in CSTB, with the other allele affected by a common point mutation (c.202C>T). 57 Thresholds appeared elevated and CSP prolonged. 57 Despite the limitations of small sample size and limited statistical testing, the findings were thought to fit the more severe phenotype of the compound heterozygous form. 57 In summary, SICI was reduced, in keeping with impaired GABA(A)ergic inhibition. 55 Prolonged CSP implies enhanced GABA(B) inhibition, and the degree of prolongation may correlate with genotypic and phenotypic severity. However, lower amplitudes of N100 and P180 in EPM1 56 may point towards the opposite, decreased GABA(B) inhibition (Table 2). The authors suggested that in contrast to impaired baseline cortico-cortical inhibition captured by TEPs, prolonged CSP could reflect excessive cortico-spinal inhibition. 56 The difference might also relate to the fact that CSP is elicited during volitional activity whereas the TEP is elicited at rest.
| Familial cortical myoclonic tremor with epilepsy
Familial cortical myoclonic tremor with epilepsy (FCMTE), also known as familial adult myoclonic epilepsy (FAME), autosomal dominant cortical myoclonus and epilepsy (ADCME), and benign adult familial myoclonic epilepsy (BAFME), is characterized by myoclonus affecting the limbs distally and generalized tonic-clonic seizures, with onset in the second or third decade. 58 Additional features may include mild cerebellar signs and cognitive decline. 58 The inheritance pattern is autosomal dominant and, recently, inserted pentamer repeats, presumed to lead to RNA toxicity, have been identified in STARD7, SAMD12, TNRC6A and RAPGEF2, 59-61 in keeping with previously implicated loci. 58 Compared to controls, six members of a Dutch FCMTE pedigree had reduced SICI at ISIs of 2 and 3 ms. 62 Compared to controls, four members of an Italian family affected with FCMTE were reported to have a significantly lower rMT, reduced duration of CSP, and reduced SICI. 63 Findings of reduced SICI in both studies suggest impaired GABA(A)ergic inhibition, also seen in other myoclonic epilepsies. It would be interesting to explore possible changes in GABA(B)ergic inhibition, implied by the reduced duration of CSP, using LICI.
| FMR1
Fragile X-related disorders are associated with excess repeats of the CGG trinucleotide in the promoter region of the FMR1 gene. The product of this gene is the fragile X mental retardation protein 1 (FMRP), an RNA-binding protein with high brain expression levels. A CGG repeat number of over 200 is associated with epigenetic silencing of the FMR1 gene. 64 This is termed the full mutation and is associated with Fragile X syndrome (FXS), characterized by language delay, hyperactivity, intellectual disability, anxiety and certain physical features. 64 Seizures occur in 3%-16%. 64 The Fragile X premutation involves 50-200 excess CGG repeats and is associated with RNA gain-of-function toxicity. 65 Individuals may present with varying phenotypes including affective disorders, ADHD and primary ovarian insufficiency. A subset may develop fragile X-associated tremor/ataxia syndrome (FXTAS) in later life, characterized by intention tremor, cerebellar ataxia, cognitive decline and Parkinsonism. 65 Neuroimaging features include cerebellar and brain stem atrophy and white matter changes, some of which may be present also in asymptomatic carriers. 65,66 Excessive glutamate-mediated signalling via metabotropic group I receptors (mGluRI) is among the presumed disease mechanisms. 64 Altered neuroplasticity and reduced expression of GABA(A) receptor subunits have been demonstrated in animal models of the full mutation 64 ; tonic, but not phasic, inhibition was found to be altered. 67 The picture may be further complicated by changes in expression of proteins involved in GABA metabolism. 68 In the premutation, findings regarding GABA have been less consistent. Increased GABA(A) inhibition was shown in the cerebellum 68 ; another study demonstrated that an abnormal firing pattern in hippocampal neurons was rescued by the GABA(A) agonist allopregnanolone, implying an underlying impairment in GABA(A)ergic inhibition. 69 Thirteen asymptomatic women harboring the Fragile X premutation had significantly less SICI at 2 ms compared to controls. 70 The authors concluded that the differences between mutation carriers and controls were in keeping with GABA(A)ergic dysfunction with relatively preserved GABA(B) function. 70 In a study of 18 individuals with Fragile X, compared to controls, patients had reduced SICI and increased ICF and LICI. 71 The findings were interpreted as evidence for reduced GABA(A) inhibition, along with excessive glutamatergic signalling, as a disease mechanism. 71 Increased LICI was thought to imply preserved postsynaptic GABA(B) inhibition. 71
| EPM2B
Lafora body disease (EPM2) is another progressive myoclonic epilepsy, with autosomal recessive inheritance. 72 It typically presents in adolescence with stimulus-sensitive myoclonus and follows a rapidly progressive and fatal course. 72 Most cases of EPM2 are caused by mutations in the EPM2A or EPM2B genes, which lead to abnormalities in the regulation of glycogen metabolism and to polyglucosan accumulations (Lafora bodies). 72 In a mouse model of EPM2A, Lafora bodies first appeared in GABAergic neurons, the number of which was reduced compared to wild type prior to the development of the inclusions. 73 Canafoglia and coworkers studied paired-pulse TMS in ten patients with EPM1 and five patients with EPM2. 55 At ISIs of 1-5 ms, SICI was significantly reduced in patients compared to controls, with no difference between EPM1 and EPM2. 55 Compared to controls, EPM2 patients showed inhibition instead of facilitation at 10 ms; they also had significantly less LICI at 80 ms and 100 ms. 55 In summary, the findings implied impaired GABA(A)- and GABA(B)-mediated inhibition in EPM2B. The finding of reduced ICF was suggested to reflect a compensatory phenomenon to counteract epileptic activity. 55
| NEU1
Sialidosis is an autosomal recessive lysosomal storage disorder caused by mutations in the NEU1 gene, which encodes a lysosomal neuraminidase. 74 Type I sialidosis is a progressive myoclonic epilepsy with onset in the second or third decade; characteristics include macular changes known as 'cherry-red spots' and ataxia. 74 Compared to controls, 12 individuals with sialidosis type I had greater MEP amplitudes, while SICI and CSP duration were significantly reduced. 75 The findings were thought to be in keeping with increased excitability and reduced GABA(A)ergic, and possibly also GABA(B)ergic, inhibition. 75
| NF1
Neurofibromatosis I is an autosomal dominant disorder caused by loss-of-function mutations in the NF1 gene encoding neurofibromin 1, an inhibitor of the RAS pathway. 76 Characteristic features include neurofibromata, multiple café-au-lait macules, skinfold freckling, Lisch nodules, and certain skeletal abnormalities and malignancies. 76 Epilepsy occurs in 14%. 77 A knock-out mouse model of NF1 showed increased RAS-dependent GABA release and impairment of LTP and learning. 78,79 In a study of ten people with clinically diagnosed NF1, SICI was significantly increased compared to controls. 80 Paired-associative stimulation (PAS) suggested impaired cortical LTP-like plasticity. 80 Patients were randomized to a four-day course of either lovastatin, an inhibitor of the RAS pathway, or placebo. Compared to those receiving placebo, patients who received lovastatin showed normalisation of SICI; lovastatin but not placebo was also associated with PAS-associated MEP facilitation. 80 The findings were thought to imply that reduced LTP in NF1 may be mediated by increased (GABA(A)ergic) cortical inhibition. 80
| Prader-Willi syndrome
Prader-Willi syndrome (PWS) is characterized by infantile hypotonia, developmental delay, excessive eating, behavioural problems, hypogonadism, short stature, and characteristic facial features; seizures are present in 10%-20%. 81 PWS is caused by lack of expression of the paternally-derived copies of genes in the region 15q11.2-q13, which normally constitute, due to genomic imprinting, the active copies of the genes. 81 This is most often due to either deletions involving the paternally-inherited chromosome, or maternal uniparental disomy (UPD). 81 Civardi and others studied 21 patients with PWS, of whom 13 had a deletion and 8 had UPD. 82 Compared to controls, patients had significantly higher rMT and reduced ICF. 82 SICI was significantly decreased in patients with a deletion compared to those with UPD. 82 The 15q11-q13 region contains 50-100 genes, including genes encoding subunits of the GABA(A) receptor; reduced expression of GABRA5 and GABRB3 was demonstrated in lymphoblastoid cells from individuals with PWS. 83 On the other hand, increased expression of GABRA4 and of GABRG2, which encodes the gamma-2 subunit common in postsynaptic GABA(A) receptors, was also shown. 83 The possible effects of these changes on synaptic GABA(A)ergic transmission are unclear.
Decreased SICI in patients with deletions compared to patients with UPD implies lower GABA(A)ergic inhibition in the deletion subgroup. It was noted that the incidence of seizures in PWS patients with deletions is higher than in those with UPD. 82 However, in a previous study of gene expression in PWS patients, no differences in expression of GABA-related genes were found between patients with deletions and those with UPD. 83 Reduced ICF was contrary to the authors' hypothesis. 82 It was postulated that the actual observations might reflect an altered inhibition/excitation balance due to overstimulation of GABA(A) receptors not affected by the mutation; 82 this needs further elucidation.
| Rett syndrome
Rett syndrome is a disorder characterized by developmental regression, including early loss of language and hand skills, gait abnormalities, and stereotypical hand movements; epilepsy is also common. 84 Most patients are female and the majority of typical Rett cases are associated with mutations in the X-linked MECP2 (methyl-CpG-binding protein-2) gene. The protein product, MeCP2, regulates gene expression and is especially abundant in neurons; its function appears integral for normal brain function throughout life. 84 Reduced expression of GABRB3 and of UBE3A, which encodes a ubiquitin protein ligase involved in proteostasis in the brain, has been demonstrated. 85 In a mouse model, lack of expression of Mecp2 in GABAergic neurons was sufficient for reproducing many characteristics of Rett syndrome, including electrographic seizures. 86 In a single-pulse TMS study of seventeen patients with Rett syndrome, motor thresholds were significantly elevated compared to controls, interpreted to reflect reduced brain volume and abnormal cortico-cortical connections. 87 CSP duration was reduced compared to controls, suggesting reduced GABA(B)ergic inhibition 87 ; it would be interesting to also study SICI and LICI to explore this further.
| SSADH deficiency
Succinic semialdehyde dehydrogenase (SSADH) deficiency is an autosomal recessive disorder caused by mutations in the ALDH5A1 gene, leading to impaired gamma-aminobutyric acid (GABA) degradation and, in turn, to accumulation of gamma-hydroxybutyric acid (GHB) and GABA. 88 Other metabolic changes, including oxidative stress and dysregulation of autophagy, may have a role in the pathogenesis of SSADH deficiency. 88,89 Characteristic features of SSADH deficiency include developmental delay with emphasis on language skills, hypotonia, ataxia, seizures, and behavioural problems. 88 Eight patients with SSADH deficiency were studied and results compared to those of parents (obligate heterozygotes) and healthy controls. 90 CSP was significantly shorter in patients compared to all control groups; LICI was also nearly absent in patients but present in all control groups. 90 It was postulated that downregulation of postsynaptic GABA(B) receptors, which has been demonstrated in an animal model of SSADH deficiency, could lead to increased binding of GABA to presynaptic GABA(B) receptors and reduced activity-dependent secretion of GABA. 90 Supporting this theory, previous PET imaging studies had shown reduced benzodiazepine binding in SSADH deficiency. In the fetal brain, GABA exerts a depolarising effect due to reversed directionality of chloride transport associated with GABA(A)-receptor activation. 89 The authors postulated that exposure to high levels of GABA during this time could trigger compensatory mechanisms, with unpredictable effects on later GABAergic balance. 90
| DISCUSSION
We have presented several examples of epilepsy-related genetic conditions with evidence of altered inhibition or facilitation as measured by TMS. For many, findings are congruent with hypotheses of the effects of the conditions/variants on the GABA system, particularly synaptic GABA(A)ergic transmission (Figure 4). This is most evident with reduced SICI in individuals with loss-of-function variants in GABRG2, 24 and in individuals with SCN1A-related Dravet syndrome. 41
| Decreased GABA(A)ergic inhibition in myoclonic epilepsies
Decreased GABA(A)ergic SICI was demonstrated in both EPM1 and EPM2, 55 consistent with existing evidence of loss of GABAergic neurons. 51 Reduced LICI in EPM2 suggests impaired GABA(B) inhibition. 55 In EPM1, prolonged CSP implies enhanced GABA(B) inhibition, but lower amplitudes of TEP components N100 and P180 might suggest the opposite. 56 What these different parameters tell us about alteration of GABA(B)ergic inhibition in EPM1 should be studied further.
In EPM1, the changes in TMS-EEG power spectra and the altered inter-trial coherence were thought to reflect impaired function of cortico-cortical/cortico-subcortical motor circuits. 56 In healthy people, the effects of GABA(A)ergic drugs on TEP components were sometimes seen only on the contralateral hemisphere, suggesting that changes in GABA(A)ergic inhibition may be associated with altered thalamocortical connectivity. 91 From this perspective, it would be interesting to study further, with TMS-EEG, the connectivity patterns in EPM1, and correlate these with the other GABAergic parameters.
EPM1 has emerged as a particularly informative condition due to the genotype-TMS correlation observed: the size of the expansion mutation correlated with rMT and CSP duration. 53 Further, CSP correlated with phenotypic severity as measured by the degree of myoclonus. 53 These findings show promise for use of TMS biomarkers in research and clinical practice.
GABAergic inhibition was also impaired in familial cortical myoclonic tremor with epilepsy, 62,63 and another progressive myoclonic epilepsy, sialidosis type 1. 75 Within the literature on TMS in epilepsies without confirmed (monogenic) genetic etiology, juvenile myoclonic epilepsy and benign myoclonic epilepsy are associated with impaired SICI. 92,93 Indeed, TMS may be particularly suited for studying myoclonic epilepsies.
There is evidence for the role of GABA(A)ergic mechanisms in myoclonus of etiologies other than EPM1: in rats, intraventricular injection of a GABA(A) antagonist precipitated myoclonus. 94 The efficacy of GABA(A)ergic drugs such as benzodiazepines in the treatment of myoclonus of various etiologies 95 may further support the role of GABA(A) impairment in the pathogenesis. In a single report of a patient with focal epilepsy of unknown etiology, whose seizures included negative myoclonus involving the right upper limb, [123I]iomazenil single photon emission computed tomography suggested reduced GABA(A) function in the left medial frontal area. 96 As such, the findings of impaired GABA(A)ergic transmission in epilepsies with myoclonus are plausible.
| Sources of ambiguity
For some conditions and findings, synthesising TMS results with pathophysiology was not straightforward. In SSADH deficiency, contrary to the authors' hypothesis in a condition with elevated levels of GABA, TMS findings suggested reduced GABA(B) activity. 90 The authors postulated that this could arise through a negative feedback loop, or through some kind of compensatory mechanism. There is a risk of circularity if, among a multitude of possible mechanisms, the one that fits the results is chosen without any attempt to substantiate the interpretation. In reviewing reports of conditions with TMS evidence for altered GABA(A) activity, it is useful to consider whether reported data on subunit expression are compatible with synaptic or extrasynaptic changes. The GABAergic system is complex and remains a focus of research; used optimally, TMS could contribute to elucidating GABAergic changes in disease. Overall, with expanding use of these measures, and given previously voiced concerns, 97 it would be prudent to establish community standards for protocols and quality control.
Although the dependence of SICI on GABA(A)ergic inhibition is well established, SICI may, particularly with higher CS intensities, be contaminated by SICF. Therefore, findings of reduced SICI could be influenced by increased glutamatergic facilitation captured by SICF. 98 However, in the case of minor allele homozygotes for the GRIN1 SNP rs4880213, reduced SICI did not appear to be explained by increased SICF. 26 As the optimal ISIs for inducing SICF may differ between individuals, experimental confirmation of abnormal SICI would ideally include selecting ISIs based on individual SICF and applying multiple CS intensities. 98 This may be challenging in some patient groups, but would be particularly valuable where unexpectedly impaired SICI is observed.
In contrast to studies in patients or mutation carriers, in two studies TMS parameters were correlated with genotype at polymorphic sites. For TRPV1, homozygotes for a SNP linked with enhanced channel functionality showed increased SICF, whereas for another SNP not known to influence channel function, no differences according to genotype were seen. 31 This could represent a functionally relevant TMS signature. In contrast, the results of the study on GRIN1 and GRIN2B SNPs 26 warrant some caution in interpretation: all genotypes were relatively common, without a clear disease or functional association. These observations emphasize that studies should be carefully designed with both biologically plausible hypotheses and clinically relevant phenotypes, whether disease- or trait-related.

FIGURE 4 Summary of the impact of some conditions reviewed in this paper on synaptic GABA(A)ergic signalling. These may be grouped into: 1. Impaired excitability of presynaptic GABAergic interneurons; 2. Altered release of GABA from presynaptic terminals; 3. Altered function of postsynaptic GABA(A) receptors. 4. For several conditions, there is evidence for altered GABA activity/levels, but how these impact on synaptic function is unclear.
| Challenges and limitations
A recognised problem in studies of brain stimulation is inter-trial variability. 99 Achieving a large study size is limited by the rarity of these conditions, as well as by practical challenges in conducting experiments in individuals affected by severe epilepsies. These same issues pertain to ensuring sufficient power to detect possible changes. Examples from this review 41 support the notion that robust changes may be detected even with small sample sizes; it would nonetheless be important to confirm findings in sufficient numbers of patients.
Especially for studies of SNPs with relatively high allelic frequencies, care in experimental design should be reflected in the choice of participants. In the study of GRIN1 and GRIN2B SNPs, the sample included no minor allele homozygotes, which is likely to reduce power. 26 In both of the SNP studies, the same participants were genotyped for all SNPs, and all participants were included in comparisons for a particular SNP regardless of their genotypes at the other SNPs. If all SNPs were assumed to potentially exert effects of similar magnitude on NMDAR-mediated neurotransmission, this approach may have undermined the ability to detect significant effects: more sophisticated models may be needed.
Even with sound hypotheses, not all reviewed studies identified TMS changes. STX1B is an emerging epilepsy gene; mutations are expected to affect synaptic function. However, carriers of mutations in this gene, most of whom had a history of seizures providing some evidence of pathogenicity, failed to show evidence of altered GABAergic or glutamatergic neurotransmission as measured by TMS-EMG. 30 This could reflect age-dependency of the disease process, but could also be due to a number of limitations of the technique.
An obvious limitation of TMS-EMG is that it only samples the motor cortex. Although germline variants are present in all DNA of an individual, there are regional differences in gene expression in the brain. Normal patterns, for example of GABA inhibition, demonstrated by TMS-EMG do not rule out abnormalities in GABAergic activity within other networks or regions. In these respects, TMS-EEG may offer advantages. Further, this technique may allow use of intensities lower than those required for TMS-EMG, 16 which may permit extension of studies to more individuals in conditions where stimulator output was insufficient to evoke motor responses in all participants. 36,57 Compared to TMS-EMG, the neurophysiological underpinnings of TMS-EEG measures are less well defined and the methodology less standardised. The influence of peripheral evoked potentials on TEPs remains an important concern, 18,19 and there is consensus on the need to better understand and mitigate such confounding effects. 100,101 Addressing these issues will be important for possible future biomarker use.
Some genetic epilepsies, such as EPM1, 53 are associated with cortical atrophy. A linear correlation between scalp-to-cortex distance and rMT has been reported 102 ; as conditioning stimulus intensities are generally chosen as a percentage of rMT, one would not expect the relative stimulus intensities of conditioning stimuli to differ between individuals with different scalp-to-cortex distances. However, modelling has shown that, e.g., sulcal widening may lead to alterations in the current density distributions, which may change the populations of neurons targeted, 103 and possible effects of anatomical changes must be considered.
The possible confounding effect of medications, particularly AEDs, is a further limitation and must be considered in the interpretation of findings (Table 1, Table S1). One would not expect the GABA(A)ergic effects of benzodiazepines to lead to erroneous findings of impaired SICI, but the mechanisms of action of many AEDs are incompletely characterised. Although studying unaffected carriers or patients in remission could be advantageous in this regard, such cases may not be fully representative of the condition of interest, as exemplified by the normal findings in asymptomatic carriers of STX1B. 30 The issue of confounding effects of AEDs can be addressed to some extent by recruiting controls with epilepsy taking similar medications to the group of interest, although medications may be difficult to match completely. 41 Participants may also be stratified by drug status 71 ; small sample size is a caveat. In one study, potential medication effects were countered by withdrawing antiepileptic medication for a minimum of 24 hours. 75 Besides posing a risk to patients, such brief withdrawal may be insufficient to offset possible medication effects on the measures of interest.
| TMS – one tool among others
It is important that studies are hypothesis-driven, and avoid circularity of arguments. Among the conditions discussed in this review, similar changes in TMS parameters are described for very different conditions. The notion that pathophysiological specificity of TMS may be overestimated is exemplified by findings of not only decreased SICI, but also increased ICF in people with the GABRG2 variant R43Q. Indeed, any use of TMS for diagnostic purposes would require integration with other investigations. As such, it could be valuable in certain scenarios. When patients are found to have a previously unreported variant in a disease-implicated gene, it may not be clear whether the variant is pathogenic or not. For some genes, both loss of function and gain of function variants may be pathogenic, and the directionality of the change in function may not be apparent based on phenotype alone. Such a situation could arise, for example, in the context of a patient with severe epilepsy found to harbor an SCN1A mutation not previously reported, with a phenotype not fully concordant with Dravet syndrome. The ability to use an in vivo functional assessment of an entire system, using TMS, could be valuable in this situation, and have implications for therapeutic choices, such as avoiding treatment with sodium channel blockers. 38 Further, correlations between TMS findings and genotypic or phenotypic severity, as seen in EPM1, suggest a potential role for TMS in predicting disease trajectory and/or treatment response, which should be explored further.
| CONCLUSIONS
Genes associated with epilepsy are being uncovered at a rapid pace. As a result, more tools are required to study the in vivo effects of variants. Despite limitations, TMS could be useful for exploring the consequences of variants on the function of interneuron circuits. TMS-EMG has already been successfully applied to this end in several conditions; more research is required to expand and corroborate these findings. TMS-EEG may offer a further tool to do so, but the neurophysiological underpinnings do need to be better characterised. | 2020-08-13T10:05:23.565Z | 2020-08-12T00:00:00.000 | {
"year": 2020,
"sha1": "a39d99ef2427f76f04622ba00a82d9d9b99de4bf",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/epi.16634",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6dc9a64a61020a42fc29a76ff10233c40b0f50ec",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
31425362 | pes2o/s2orc | v3-fos-license | Spin waves in paramagnetic BCC iron: spin dynamics simulations
Large scale computer simulations are used to elucidate a longstanding controversy regarding the existence, or otherwise, of spin waves in paramagnetic BCC iron. Spin dynamics simulations of the dynamic structure factor of a Heisenberg model of Fe with first principles interactions reveal that well defined peaks persist far above Curie temperature T_c. At large wave vectors these peaks can be ascribed to propagating spin waves, at small wave vectors the peaks correspond to over-damped spin waves. Paradoxically, spin wave excitations exist despite only limited magnetic short-range order at and above T_c.
For over three decades, the nature of magnetic excitations in ferromagnetic materials above the Curie temperature T_c has been a matter of controversy amongst experimentalists and theorists alike. Early neutron scattering experiments on iron suggested that spin waves were renormalized to zero at T_c [1]; however, in 1975, using unpolarized neutron scattering techniques, Lynn at Oak Ridge (ORNL) reported [2] that spin waves in iron persisted as excitations up to the highest temperature measured (1.4 T_c), and no further renormalization of the dispersion relation was observed above T_c.
Experimentally, this finding was challenged primarily by Shirane and collaborators at Brookhaven (BNL) [3]. Using polarized neutrons, they reported that spin wave modes were not present above T_c and suggested that the ORNL group needed polarized neutrons to subtract the background scattering properly. Utilizing full polarization analysis techniques, the ORNL group subsequently confirmed their earlier work; in addition, they analyzed data from both groups and concluded that their resolution was more than an order of magnitude better than that employed by the BNL researchers [4]. Moreover, angle-resolved photoemission studies [5,6] suggested the existence of magnetic short-range order (SRO) in paramagnetic iron and that this could give rise to propagating modes. Theoretically, SRO of rather long length scales (25 Å) was postulated to exist far above T_c [7,8], and a more subtle kind was proposed later [9]. By contrast, it was also suggested that above T_c all thermal excitations are dissipative [10,11]. To further complicate matters, analytical calculations for a Heisenberg model of iron, with exchange interactions extending to fifth-nearest neighbors and a three-pole approximation [12], did not reproduce the line shape measured by either experimental group mentioned above. In addition, Shastry [13] performed spin dynamics (SD) simulations of a nearest-neighbor Heisenberg model of paramagnetic iron with 8192 spins; some of his plots of the dynamic structure factor S(q, ω) showed a shoulder at nonzero ω for some q, which was attributed to statistical errors rather than propagating modes.
With new algorithmic and computational capabilities, qualitatively more accurate SD simulations can now be performed; in particular, many more spins can be followed for much longer integration times. We use these techniques, together with a model designed specifically to emulate BCC iron, and have been able to unequivocally identify propagating spin wave modes in the paramagnetic state, lending substantial support to Lynn's [2] experimental findings. Interestingly, spin waves are found despite only limited magnetic SRO.
To describe the high-temperature dynamics we use a classical Heisenberg model, H = −(1/2) Σ_{r≠r′} J_{r,r′} S_r · S_{r′}, for which the exchange interactions, J_{r,r′}, are obtained from first-principles electronic structure calculations. For Fe this is a reasonable approximation, since the size of the magnetic moments associated with individual Fe sites is only weakly dependent on the magnetic state [14], and by including interactions up to fourth-nearest neighbors it is possible to obtain a reasonably good T_c.
Large-scale computer simulations using SD techniques to study the dynamic properties of Heisenberg ferromagnets [15] and antiferromagnets [16] have been quite effective, and the direct comparison of RbMnF_3 SD simulations with experiments was especially satisfying [16]. We have adopted these techniques and used L × L × L BCC lattices with periodic boundary conditions and L = 32 and 40. At each lattice site there is a three-dimensional classical spin of unit length (we absorb spin moments into the definition of the interaction parameters), and each spin has a total of 50 interacting neighbors. We use interaction parameters, J_i, for the T = 0 ferromagnetic state of BCC Fe, calculated using the standard formulation [17] and the layer-KKR method [18]. The calculated values are J_1 = 36.3386 meV, J_2 = 20.6520 meV, J_3 = −1.625962 meV, and J_4 = −2.39650 meV.
In our simulations, a hybrid Monte Carlo method was used to study the static properties and to generate equilibrium configurations as initial states for integrating the coupled equations of motion of SD [19]. At T_c and for L = 32, the measured nonlinear relaxation time in the equilibrating process and the linear relaxation time between equilibrated states, for both the total energy and the magnetization [20], are smaller than 500 hybrid steps per spin. We discarded 5000 hybrid steps (for equilibration) and used every 5000th hybrid step's state as an initial state for the SD simulations. For the J_i's used here, T_c = 919(1) K, which is slightly smaller than the experimental value T_c^exp = 1043 K.

The SD equations of motion are

dS_r/dt = H_eff × S_r,    (1)

where H_eff ≡ −Σ_{r′} J_{r,r′} S_{r′} is an effective field at site r due to its interacting neighbors. The integration of the equations determines the time dependence of each spin and was carried out using an algorithm based on second-order Suzuki-Trotter decompositions of exponential operators, as described in [21]. The algorithm views each spin as undergoing Larmor precession around its effective field H_eff, which is itself changing with time. To deal with the fact that we are considering four shells of interacting neighbors, the BCC lattice is decomposed into sixteen sublattices. This algorithm allows time steps as large as δt = 0.05 (in units of t_0 = 1/J_1). Typically, the integration was carried out to t_max = 20000 δt = 1000 t_0.
The space- and time-displaced spin-spin correlation function C^k(r − r′, t) and the related dynamical structure factor, S^k(q, ω), are fundamental in the study of spin dynamics [22] and are defined as

C^k(r − r′, t) = ⟨S_r^k(t) S_{r′}^k(0)⟩ − ⟨S_r^k(t)⟩⟨S_{r′}^k(0)⟩,    (2)

S^k(q, ω) = Σ_{r,r′} exp[iq · (r − r′)] (1/√(2π)) ∫_{−∞}^{+∞} exp(−iωt) C^k(r − r′, t) dt,    (3)

where k = x, y or z, the angle brackets ⟨· · ·⟩ denote the ensemble average, and q and ω are momentum and energy (E ∝ ω) transfer, respectively. It is S^k(q, ω) that was probed in the neutron scattering experiments discussed earlier.
By calculating partial spin sums 'on the fly' [15], it is possible to calculate S^k(q, ω) without storing a huge amount of data for each spin configuration. Because L is finite, only a finite set of q values is accessible: q = 2πn_q/(La), with n_q = ±1, ±2, …, ±L for the (q, 0, 0) and (q, q, q) directions and n_q = ±1, ±2, …, ±L/2 for the (q, q, 0) direction (a is the lattice constant). For T ≥ T_c, the ensemble average in Eq. 2 was performed using at least 2000 starting configurations. We average S^k(q, ω) over equivalent directions, and this averaged structure factor is denoted as S(q, ω).
In Fig. 1 we show the frequency dependence of S(q, ω) obtained for four different temperatures around T_c. These so-called constant-q scans are for q = (π/a)(1, 0, 0) (|q| = 1.09 Å^−1), which is halfway to the Brillouin zone boundary. At 0.95 T_c, S(q, ω) already has a 3-peak structure: one weak central peak at zero energy and two symmetric spin wave peaks (we only show data for ω ≥ 0 since the structure factor is symmetric about ω = 0). Note that the spin wave peaks are already quite wide. As T increases to T_c and above, the central peak becomes more pronounced. In addition, the spin wave peaks shift to lower energies, broaden further and become less obvious; however, they still persist. This 3-peak structure at high temperatures is in contrast to the 2-peak spin wave structure found at low temperatures. In the neutron scattering experiments on ^54Fe(12%Si) [4], Mook and Lynn also noticed a central peak, but could not decide whether it was intrinsic to pure iron or a result of the alloying with silicon.
In general, constant-q scans are isotropic in the (q, 0, 0), (q, q, 0), and (q, q, q) directions. For very small |q|, there is only a central peak in the scans (as is expected), and the 3-peak structure only develops for larger |q|. We fit the 3 peaks in S(q, ω) using different fitting functions and found the best results with either a Gaussian central peak plus two Lorentzian peaks at ±ω_0,

S(q, ω) = G + L_+ + L_−,    (4)

or a Gaussian central peak plus two additional Gaussian peaks at ±ω_0,

S(q, ω) = G + G_+ + G_−,    (5)

where G = I_c exp(−ω²/ω_c²), L_± = I_0 ω_1²/((ω ∓ ω_0)² + ω_1²), and G_± = I_0 exp(−(ω ∓ ω_0)²/ω_1²). For moderate |q| the results are fit best with Eq. 4, while Eq. 5 works better at larger |q|. In Fig. 2 we show constant-q scans at |q| = 0.48 Å^−1 and 1.16 Å^−1 in the (q, q, 0) direction. The |q| = 0.48 Å^−1 result fits well to Eq. 4 and has ω_1/ω_0 < 1, i.e., the excitation lifetime is longer than its period and thus it can be regarded as a spin wave excitation. It should be noted that this |q| value is very close to that (0.47 Å^−1) for which Lynn found propagating modes in contradiction to the findings of the BNL group. At |q| = 1.16 Å^−1, the structure factor has much weaker intensity and fits best to Eq. 5, with a ratio ω_1/ω_0 that is even smaller than at |q| = 0.48 Å^−1. This is illustrative of the general conclusion that the propagating nature of the excitation modes is most pronounced at large |q|. Figure 3 shows the dispersion relations obtained by plotting the peak positions, ω_0, determined from the fits to S(q, ω) along the (q, q, 0) direction. Calculated dispersion curves are shown at several temperatures in the ferromagnetic and paramagnetic phases together with the experimental results of Lynn [2]. To estimate errors, we fitted each constant-q scan several times, cutting off the tail at slightly different ω_max to obtain an average ω_0; these error bars are found to be no larger than the symbols. In this figure, filled symbols indicate modes that are clearly propagating (ω_1/ω_0 < 1), while open symbols indicate that, even though there are peaks at ω_0 ≠ 0, the peaks have widths ω_1 > ω_0. The calculated result for T = 0.3 T_c is very close to that from the experiments, and propagating modes exist for very small |q|. For T ≥ T_c, our curves lie below the experimental ones and soften with increasing temperature, a property not seen in the experiments. One possibility deserving of further study is that our use of temperature- and configuration-independent exchange interactions, in particular those appropriate to the T = 0 ferromagnetic state, breaks down at high temperatures when the spin moments are highly non-collinear. In our simulations we have equal access to constant-q scans and constant-E scans; however, this is not the case in neutron scattering experiments. Because the dispersion curves of Fe are generally very steep, experimentalists usually perform constant-E scans. In Fig. 4 we show constant-E scans for several E values at T = 1.1 T_c based on simulations. Clearly, the constant-E scans have two peaks (symmetric about |q| = 0) that become smaller and wider and shift to higher |q| as E increases. Peaks in constant-E scans strongly suggest that SRO persists above T_c [7].
The degree of magnetic SRO can be obtained directly from the behavior of the static correlation function C^k(r − r′, 0) (i.e., Eq. 2 with t = 0), which can be calculated from the Monte Carlo configurations alone. For T = 1.1 T_c we find a correlation length of approximately 2a (∼6 neighbor shells), indicative of only limited SRO. Thus, in general, extensive SRO is not required to support spin waves. Moreover, inspection of Fig. 3 for T = 1.1 T_c shows that the point q ≈ 0.77 Å^−1, at which these peaks first correspond to propagating modes, is where their wavelength (λ ∼ 2a) first becomes of the order of the static correlation length.
In summary, our SD simulations clearly point to the existence of spin waves in the paramagnetic state of BCC Fe and support the original conclusions of Lynn. Their signature is seen as spin wave peaks in the dynamical structure factor in constant-q and constant-E scans. Detailed analysis of the constant-q scans shows that the propagating nature of these excitations is clearest at large |q|, in agreement with experiment. This is also consistent with the requirement that their wavelength be of the order of, or shorter than, the static correlation length. While the inclusion of four shells of first-principles-determined interactions in the Heisenberg model makes our results specifically relevant to BCC Fe, we have also found spin waves in a Heisenberg model containing only nearest-neighbor interactions. In addition to elucidating the longstanding controversy regarding the existence of spin waves above T_c, these simulations also point to the important role that inelastic neutron scattering studies of the paramagnetic state can play in understanding the nature of magnetic excitations, particularly when coupled with state-of-the-art SD simulations. | 2018-04-03T01:55:01.138Z | 2005-01-28T00:00:00.000 | {
"year": 2005,
"sha1": "35d2b9c5bdc97c4be2f06ab9a2a0f82962c62baf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0501713",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "35d2b9c5bdc97c4be2f06ab9a2a0f82962c62baf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
15700932 | pes2o/s2orc | v3-fos-license | A feasible roadmap for developing volumetric probability atlas of localized prostate cancer
A statistical volumetric model, showing the probability map of localized prostate cancer within the host anatomical structure, has been developed from 90 optically-imaged surgical specimens. This master model permits an accurate characterization of prostate cancer distribution patterns and an atlas-informed biopsy sampling strategy. The model is constructed by mapping individual prostate models onto a site model, together with localized tumors. An accurate multi-object non-rigid warping scheme is developed based on a mixture of principal-axis registrations. We report our evaluation and pilot studies on the effectiveness of the method and its application to optimizing needle biopsy strategies.
Introduction
Prostate cancer is the most prevalent male malignancy and the second leading cause of cancer death in men. Though most prostate cancers are slow growing, there are cases of aggressive prostate cancer [1]. Important tools for improving the clinical outcome of patients with prostate cancer include early detection and personalized diagnosis [2]. At present, diagnosis of prostate cancer relies heavily on the pathological examination of stained tissue samples acquired by multiple, largely random biopsies. Experimental results show that one out of five cancers will be missed by existing needle biopsy protocols. As a consequence of this unsatisfactory clinical diagnosis, the best treatment time window may be missed because of undetected cancer, while many patients with dormant tumors are over-treated.
Here we report a feasible roadmap for developing a volumetric probability atlas of localized prostate cancer using optically-imaged surgical specimens [3] and image/graphics processing methods [4]. This master model contains a precise probabilistic map of localized prostate tumor distribution and the corresponding anatomic structure of a prostate site model. Based on the developed statistical atlas and visualization techniques, we can better understand the spatial distribution of prostate cancer of various grades, uncover the mechanisms responsible for tumor behavior, and propose an atlas-informed biopsy sampling strategy.
The construction of a volumetric probability atlas of localized prostate cancer includes the following major steps: (1) Raw data collection and pre-processing; (2) Individual model reconstruction; (3) Non-rigid registration; (4) Site model construction; (5) Probabilistic atlas development.
We construct the master model from 90 surgical specimens. We propose an enhanced self-organizing scheme to decompose a set of object contours, representing multi-foci tumors, into localized tumor elements. We apply a mixture of Principal-Axis Registration (mPAR) scheme to align individual prostate models into the site model. Based on the accurately mapped tumor distribution, a standard finite normal mixture (SFNM) is used to model the volumetric cancer probability density, whose parameters are estimated using K-means and/or Expectation-Maximization (EM) algorithms and the Minimum Description Length (MDL) criterion. We report our evaluation and pilot studies on the effectiveness of the method and its application to optimizing needle biopsy strategies.
Method
In this section, we describe the major methodological principles and development effort. Three-dimensional digital and optical imaging transforms serial slices of a surgical specimen into a computer-synthesized display that facilitates visualization of underlying spatial relationships. Given this digital information, the development of improved computer graphics and visualization has made it possible to study organs and disease patterns in locations that have previously been difficult to evaluate quantitatively.
Raw data collection and pre-processing
All raw data sets were supplied by experienced pathologists, who used computer-aided methods to digitize the cross-sectional sequences of real prostatectomy specimens removed due to prostate cancer. We have digitized the cross-sectional sequences of 200 whole-mount prostatectomy specimens removed due to prostate cancer, provided by the AFIP. In each case the areas of localized tumor were delineated by an experienced pathologist using computer-aided methods. All raw data sets need to be pre-processed, including data format conversion, splitting a group of tumor contours into tumor elements, etc.
In this project, all the raw data representing prostate structures and tumors are given in the format of object contours outlined by the experienced pathologists using optically-imaged surgical specimens and computer-aided methods. The contours of prostate structures have been classified into anatomic objects by the data provider already. Specifically, the contours of prostate capsule are given as class 1, the contours of seminal vesicles are given as class 3, the contours of urethra are given as class 6, and other irrelevant objects are not given a class number. All the tumor contours are given together, as class 5, without any additional information on their multi-foci nature.
Individual model reconstruction
For individual model reconstruction and cancer analysis, there is an urgent need in our research to decompose class 5, a group of contours representing multi-foci tumors, into localized tumor elements by a self-organizing method. The decomposition is based on the following assumptions: a tumor contour at level K can only be linked to contours at the adjacent level K+1 or K-1. Elementary matching is a 1-to-n (n ≥ 1 an integer) matching in which a relatively bigger contour at level K can be linked with n smaller contours at level K+1 or K-1; there may thus be a tree-type matching across two adjacent levels. If n = 1, the matched tumor element is a column; if n ≠ 1, the matched tumor element is a tree or mesh. For a given contour, we use three criteria to search for the most suitable contours at the adjacent levels: maximum area overlap, minimum center distance, and shape similarity (e.g., correlation coefficient).
We denote the source contour by Csource, a candidate contour by Ccandidate, a matched contour by Cmatched, and the separating parameter by dseparating, which depends on the tumor shape trend and the properties of the tissue. The shape trend can be estimated from the points of Cmatched, selected from the side of Csource nearest to the Ccandidate center. The selected points form a curve that can be extended to the level of Ccandidate in many ways (e.g., polynomial curve fitting). Based on the contour center distance, we can decide whether a Ccandidate is a Cmatched. Moreover, when the area overlap between Ccandidate and Csource is significant (e.g., >55 %), the Ccandidate is a Cmatched. Furthermore, due to the continuity of organ growth, shape similarity can be used to decide whether a Ccandidate is a Cmatched.
When n = 1, we first use area overlap to select candidates and then use shape similarity to find Cmatched among them. If there is no candidate, we use center distance to select Ccandidate. If no Cmatched is found, then Csource is the terminal of an element. When n ≠ 1, we first use the contour center distance to eliminate contours that cannot be a Ccandidate; we then apply shape similarity to each remaining Ccandidate and select the one with the best shape fit.
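As a concrete illustration, the three matching criteria above can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: the shapely dependency, the radial-signature shape descriptor, and the dist_thresh/sim_thresh values are our assumptions; only the 55 % overlap threshold comes from the text.

```python
import numpy as np
from shapely.geometry import Polygon  # assumed available for polygon overlap

def radial_signature(contour, n=64):
    """Normalized centroid-to-boundary distances resampled to n points,
    one simple stand-in for the shape-similarity criterion."""
    c = np.asarray(contour, dtype=float)
    r = np.linalg.norm(c - c.mean(axis=0), axis=1)
    idx = np.linspace(0, len(r) - 1, n).astype(int)
    return r[idx] / r.max()

def match_scores(source, candidate):
    """The three criteria from the text: area overlap, center distance,
    and shape similarity (correlation coefficient)."""
    ps, pc = Polygon(source), Polygon(candidate)
    overlap = ps.intersection(pc).area / min(ps.area, pc.area)
    center_dist = np.linalg.norm(
        np.asarray(source).mean(axis=0) - np.asarray(candidate).mean(axis=0))
    shape_sim = np.corrcoef(radial_signature(source),
                            radial_signature(candidate))[0, 1]
    return overlap, center_dist, shape_sim

def is_matched(source, candidate, overlap_thresh=0.55,
               dist_thresh=5.0, sim_thresh=0.8):
    """Mirror of the decision rule: a significant overlap alone suffices;
    otherwise fall back on center distance and shape similarity.
    Thresholds other than 0.55 are illustrative placeholders."""
    overlap, center_dist, shape_sim = match_scores(source, candidate)
    if overlap > overlap_thresh:
        return True
    return center_dist < dist_thresh and shape_sim > sim_thresh
```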
From 200 surgical specimens, each case consists of 10-14 slices with 4 µm sections at 2.5 mm intervals, digitized at a resolution of 1500 dots per inch. Contour extraction was performed by a pathologist, followed by a semi-automatic contour refining algorithm using a snake model. The regions of interest (ROI) include the prostate capsule, urethra, seminal vesicles, ejaculatory ducts, surgical margin, prostate carcinoma, and areas of prostatic intraepithelial neoplasia. For accurate object reconstruction [5], contour interpolation was performed to fill the gaps between a start contour and a goal contour. To compute an intermediate contour, instead of using linear or shape-based interpolation, we developed a 3-D elastic contour model that computes a 3-D force field between adjacent slices, enabling a "pulling and pushing" metaphor that moves the starting contour gradually to the final contour under the bilateral force field vector DS. The nonlinear characteristics of the elastic contour model permit a meaningful interpolation result, yielding a high-quality representation of the smooth nature of the object surface. Reconstruction of an object requires the formation of 3-D surfaces between the contours of successive 2-D slices. Instead of connecting the contours by planar triangle elements, where the reconstructed surfaces are usually coarse and static, we developed a physics-based deformable surface model involving two major operations: (1) triangulated patches were tiled between adjacent contours with the criterion of minimizing surface area, and (2) the tiled triangulated patches were refined using a deformable surface-spine model. Let v(s,r) be the parameterized surface; the associated energy is Ɛ(v) = ∫∫ ( w10|∂v/∂s|² + w01|∂v/∂r|² + w20|∂²v/∂s²|² + 2w11|∂²v/∂s∂r|² + w02|∂²v/∂r²|² ) ds dr + P(v), where P(v) is the potential of the external forces, and the internal forces are controlled by the coefficients of elasticity (w10, w01), rigidity (w20, w02), and twist resistance (w11). The surface formation is governed by a second-order partial differential equation and is accomplished when the energy of the deformable surface model reaches its minimum. The nonlinear property of the deformable surface model greatly improves the consistency of the reconstructed complex surface. Using advanced object-oriented 3-D graphics toolkits, interactive visualization is achieved and applied to computerized biopsy simulation. Through efficient picking and surface-rendering capabilities, the system allows a user to manipulate simulated needles in the rendered surfaces; the position, orientation, and depth of the simulated needle can be specified and recorded.
Non-rigid registration
The estimation of transformational geometry from two point sets is an essential step in medical imaging and computer vision [4]. The task is to recover a matrix representation from a set of correspondence matches between features in the two coordinate systems. Assume two point sets {p_iA} and {p_iB} related by p_iB = R p_iA + T + N_i, where R is a rotation matrix, T is a translation vector, and N_i is a noise vector. Given {p_iA} and {p_iB}, Arun et al. present an algorithm for finding the least-squares solution of R and T, based on the decoupling of translation and rotation and the singular value decomposition of a 3×3 cross-covariance matrix [4].
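For concreteness, this least-squares solution can be sketched as follows (standard centroid decoupling plus SVD in the spirit of Arun et al.; variable names are ours):

```python
import numpy as np

def arun_rigid_fit(A, B):
    """Least-squares rigid transform (R, T) with B_i ~= R @ A_i + T,
    via centroid decoupling and SVD of the 3x3 cross-covariance matrix."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cb - R @ ca
    return R, T
```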
The major limitation of the present method is twofold: 1) while feature-matching methods can give quite accurate solutions, obtaining correct correspondences of features is a hard problem, especially for images acquired using different modalities or taken over a period of time; and 2) a rigidity assumption is heuristically imposed, leading to an inability to handle situations with non-rigid deformations. One popular method that does not require correspondences is principal axes registration (PAR), which is based on the relatively stable geometric properties of image features; the geometric information contained in these stable image features is often sufficient to determine the transformation between images.
We first discuss the optimality of PAR in a maximum likelihood (ML) sense. The novel feature is to align two point sets without needing to establish explicit point correspondences. We then propose a somewhat different approach for recovering the transformational geometry of non-rigid deformations. That is, rather than using a single transformation matrix, which gives rise to a large registration error, we use a mixture of principal axes registrations (mPAR), whose parameters are estimated by minimizing the relative entropy between the two point distributions using the expectation-maximization algorithm. We demonstrate the principle of the method for both rigid and non-rigid image registration cases.
As suggested by information theory, the control point sets in two images can be considered two separate realizations of the same random source. Therefore, we do not need to establish point correspondences to extract the transformation matrix. In other words, if we denote the two control point sets by {p_iA} and {p_iB}, with p_iB = R p_iA + T + v, where v is the noise component (caused by misalignment), then the probability distributions can be computed independently on each image without any need to establish feature correspondences; given the two distributions of the control point sets in the two images, we can recover the transformation matrix in a simple fashion, as we now describe. From the observed distributions, we can estimate R and T by minimizing D(p_A, p_B), where D denotes the relative entropy measure. We have previously shown that the negative log joint likelihood decomposes into the relative entropy plus an entropy term, where H denotes the entropy measure; thus, minimizing the relative entropy maximizes the joint likelihood. Writing C(·) for the auto-covariance matrix of a point set, take the eigendecompositions C_A = U_A Λ_A U_A^t and C_B = U_B Λ_B U_B^t, where the superscript t denotes matrix transposition, U_A and U_B are 3×3 orthonormal matrices consisting of the orthonormal eigenvectors of the corresponding covariance matrices, and Λ_A and Λ_B are 3×3 diagonal matrices with nonnegative eigenvalues λ_m, m = 1, 2, 3. We then assign R = U_B K U_A^t, where K is a 3×3 diagonal matrix with elements k_m = λ_mB / λ_mA. Among all 3×3 orthonormal matrices, this R, which also includes the scaling matrix K, maximizes the joint log likelihood. So far, we have verified the optimality of the PAR technique. However, because of its global linearity, the application of PAR is necessarily somewhat limited. An alternative paradigm is to model a multimodal control point set with a collection of local linear models. The method is a two-stage procedure: a soft partitioning of the data set followed by estimation of the principal axes within each partition. Recently there has been considerable success in using the standard finite normal mixture (SFNM) to model the distribution of a multimodal data set, and the association of an SFNM distribution with PAR offers the possibility of registering two images through a mixture of probabilistic principal-axes transformations [4]. Assume the point distribution f(p) = Σ_k α_k g(p; µ_k, C_k), where g is the Gaussian kernel with mean vector µ_k and auto-covariance matrix C_k, and α_k is the mixing factor, proportional to the number of control points in cluster k. For each of the control point sets {p_iA} and {p_iB}, the mixture is fit using the expectation-maximization (EM) algorithm [6]. The E-step involves assigning to the linear models contributions from the control points; the M-step involves re-estimating the parameters of the linear models in the light of this assignment.
E-Step
In the E-step, the posterior probability that each control point belongs to cluster k is computed from the current estimates of the mixture parameters (α_k, µ_k, C_k).
M-Step
In the M-step, the mixing factors, mean vectors, and covariance matrices are re-estimated from these posterior memberships. A local PAR is then performed between the cluster pairs whose parameters µ_kA and µ_kB we have estimated in the previous step using the EM algorithm. Note that now we do need the correspondences between the two control-point clusters for each k. These correspondences may be found, after a global PAR is initially performed, by using a site-model approach or a dual-step EM algorithm to unify the tasks of estimating transformation geometry and identifying cluster-correspondence matches. This philosophy for recovering the transformational geometry of non-rigid deformations is similar in spirit to the modular networks in neural computation, under which the relative entropy between the two point sets reaches its minimum both globally and locally [4].
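A per-cluster PAR step consistent with the derivation above can be sketched as follows; this is our sketch, and the handling of eigenvector sign/order ambiguity is simplified away:

```python
import numpy as np

def par_align(A, B):
    """Correspondence-free PAR for one cluster pair: eigendecompose the two
    auto-covariance matrices C = U diag(lam) U^t and set R = U_B K U_A^t
    with K diagonal, k_m = lam_mB / lam_mA, per the derivation above.
    (Another common convention scales by sqrt(lam_mB / lam_mA); eigenvector
    sign/order ambiguities would also need resolving in practice.)"""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    mA, mB = A.mean(axis=0), B.mean(axis=0)
    lamA, UA = np.linalg.eigh(np.cov((A - mA).T))  # ascending eigenvalues
    lamB, UB = np.linalg.eigh(np.cov((B - mB).T))
    K = np.diag(lamB / lamA)
    R = UB @ K @ UA.T
    T = mB - R @ mA
    return R, T  # maps cluster A points onto cluster B: p_B ~ R p_A + T
```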
Site model construction
The development of statistical modeling and information visualization of localized prostate cancer requires an accurate graphical matching of individual surgical specimens and volumetric visualization of the probability mixture distribution. Based on 200 surgical specimens of the prostate, we have developed a surface reconstruction technique to interactively visualize the clinically significant objects of interest, such as the prostate capsule, urethra, seminal vesicles, ejaculatory ducts, and the different carcinomas, for each of these cases. Site model construction is performed by a coupled dynamic deformation system [7]. The axis of the surface from new contours connects all the surface patches to the spine through expansion/compression forces radiating from the spine, while the spine itself is also confined to the surfaces. The dynamics are governed by second-order partial differential equations from Lagrangian mechanics, so that the final shapes and relationships of the surface and spine are achieved with a minimum-energy dynamic deformation. Let the strain energies of the surface (Ɛsurface) and spine (Ɛspine) be the sums of controlled stretching and bending energies, where Ɛsurface is the thin-plate-under-tension variational spline and Ɛspine is a weighted sum of the tension along the spine (stretching energy) and the controlled rigidity (bending energy). The non-rigid motion x(t) in response to an extrinsic force f(x) follows the continuum mechanical equation [7] µ ∂²x/∂t² + γ ∂x/∂t + δƐ/δx = f(x), where µ is the mass density function, γ is the viscosity function, and δƐ/δx is the variational derivative of Ɛ representing the internal elastic force. After principal-axes-based initial registration, external forces are introduced to both the surfaces and spines to be reconstructed or matched: fa, a function of the difference between the spine and the axis of the surface; a radial force fb; and an inflation or deflation force fc, where a coefficient c controls the strength of the expansion or contraction. The surface inflates where c > 0 and deflates where c < 0. Summing the above coupling forces in the motion equations associated with the surface and spine, we obtain the dynamic system describing the deformable surface-spine model, with the external force given by this sum in our implementation. For the surface registration problem, we are interested in matching two surfaces by computing the deformation between them. We define the external forces to reflect the distance between the two surfaces under consideration: at each surface point, the force magnitude is the Euclidean distance from that point to the nearest point on the second surface. The final x_surface and x_spine are obtained when the energy of the deformable surface-spine reaches its minimum. To solve such a dynamic system, we have developed several force-balance strategies to perform 3-D model-to-model warping. We have also extended the surface-spine model to a deformable coupled-surface model, in which the "spine" is replaced by a coupled "surface" to generate a blended generic model [7].
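As an illustration of how such a damped second-order system can be driven toward its force-balance state, consider the following explicit time-stepping sketch. It is our simplification: the internal elastic force is linearized to a constant stiffness matrix, which the actual surface-spine model does not assume, and step size and damping are illustrative.

```python
import numpy as np

def evolve(x, stiffness, external_force, mu=1.0, gamma=0.5,
           dt=1e-3, steps=5000):
    """Explicit time-stepping of mu*x'' + gamma*x' + K x = f(x), with the
    internal elastic force linearized as a stiffness matrix K acting on
    the stacked node coordinates x; external_force is a callable f(x)."""
    v = np.zeros_like(x)
    for _ in range(steps):
        a = (external_force(x) - gamma * v - stiffness @ x) / mu
        v = v + dt * a
        x = x + dt * v
    return x  # deformation has (approximately) reached force balance
```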
Probabilistic atlas development
Based on an accurate multi-object, non-rigid registration of the tumor distribution, a standard finite normal mixture is applied to model the statistics of the cancer volumetric distribution, whose parameters are estimated using the K-means algorithm to initialize the sites and the expectation-maximization algorithm, under information-theoretic criteria, to finalize the sites. The development of the biopsy site selection consists of two steps: (1) Using the K-means algorithm to initialize the cluster centers of the cancer volumetric distribution model.
(2) Using the EM algorithm, through a probabilistic self-organizing map, to achieve a maximum likelihood of cancer detection.
Step 1: The K-means algorithm addresses one of the important issues in pattern classification: finding a set of representative vectors for clouds of multimodal data sets. Pattern vectors of n dimensions may be considered as points within an n-dimensional Euclidean space. One of the most obvious means by which we may establish a measure of similarity among such pattern vectors is their proximity to one another. The K-means algorithm is one of many clustering techniques that share the notion of clustering by minimum distance. For our purposes, we used the K-means algorithm to initialize the cluster centers of the prostate cancer distribution. Given the number of cluster centers of interest, the K-means algorithm determines the initial locations of the cluster centers of the probability map of cancer distribution.
This parallel method initially sets the number of cluster centers equal to the final required number of clusters. In this step, the initial cluster centers are chosen such that they are mutually farthest apart. Next, the method examines each component in the data set and assigns it to one of the clusters based on minimum distance. The centroid's position is recalculated every time a component is added to the cluster, and this continues until all components have converged into the final required number of clusters.
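A minimal sketch of this initialization step, including the mutually-farthest-apart seeding described above, might look as follows (our sketch, not the implementation used in the study; it assumes no cluster empties out during iteration):

```python
import numpy as np

def farthest_point_init(points, k):
    """Pick k seeds that are mutually far apart, echoing the text's
    'mutually farthest apart' initialization."""
    centers = [points[0]]
    for _ in range(1, k):
        d = np.min([((points - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(points[int(np.argmax(d))])
    return np.array(centers)

def kmeans(points, k, iters=100):
    """Minimal K-means, used here only to initialize the cluster centers
    of the cancer distribution (Step 1)."""
    centers = farthest_point_init(points, k)
    for _ in range(iters):
        labels = ((points[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        new = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```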
Step 2: The expectation-maximization (EM) algorithm provides an iterative approach to computing maximum likelihood estimates in situations where the given observations are either incomplete or can be viewed as incomplete [6]. The EM algorithm derives its name from the fact that each iteration comprises two steps: the expectation step and the maximization step. The expectation step uses the observed data of an incomplete-data problem and the current value of the parameter vector to manufacture data, postulating an augmented, so-called complete data set. The maximization step derives a new estimate of the parameter vector by maximizing the log likelihood function of the complete data manufactured in the E-step. Thus, starting from a suitable value for the parameter vector, the E-step and M-step are repeated alternately until convergence.
After acquiring the initial cluster centers from the K-means algorithm, we applied the EM algorithm to estimate the posterior Bayesian probabilistic class memberships of each data point with respect to each of the local classes. The EM algorithm can accurately and effectively classify the data points into the correct classes. Combining these two procedures helps us optimize prostate needle biopsy site selection through a probabilistic self-organizing map, thus achieving a maximum likelihood of cancer detection.
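A minimal sketch of this EM refinement of the normal mixture, seeded by the K-means centers, is given below. The MDL-based choice of the number of components used elsewhere in the paper is not shown, and scipy is assumed available:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, means, iters=50):
    """EM refinement of a standard finite normal mixture (SFNM),
    starting from K-means centers; X is an n x d data matrix."""
    n, d = X.shape
    k = len(means)
    covs = [np.cov(X.T) for _ in range(k)]
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior membership of each point in each component
        resp = np.column_stack([
            w[j] * multivariate_normal.pdf(X, means[j], covs[j])
            for j in range(k)])
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing factors, means, and covariances
        Nk = resp.sum(axis=0)
        w = Nk / n
        means = (resp.T @ X) / Nk[:, None]
        covs = [(resp[:, j, None] * (X - means[j])).T @ (X - means[j]) / Nk[j]
                + 1e-6 * np.eye(d)  # small ridge for numerical stability
                for j in range(k)]
    return w, means, covs
```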
In order to quantitatively investigate the tumor distribution, volume, and multi-foci patterns in space, a statistical master model of localized prostate cancer is required to relate individual graphical models to a global probability distribution. Based on 200 reconstructed and registered computer models representing the prostate capsule and internal structures, each of these cases has been automatically aligned. By labeling the voxels of localized prostate cancer with "1" and the voxels of other internal structures with "0", we generated a 3-D binary map of the prostate that is simply a mutually exclusive random sampling of the underlying spatial probability distribution of cancer occurrence. We summed all these binary maps and normalized the result to obtain a 3-D histogram of the cancer distribution that can be modeled by an SFNM, f(x, y, z) = Σ_{k=1}^{K} α_k g(x, y, z; µ_k, Σ_k), where µ_k and Σ_k are the mean vector and covariance matrix of the kth component and α_k its mixing factor. Fig. 1 shows the major steps of the algorithm pipeline, with transparent graphical models reconstructed by the proposed computer algorithm. The regions of interest (ROI) include the prostate capsule, urethra, seminal vesicles, ejaculatory ducts, surgical margin, prostate carcinoma, and areas of prostatic intraepithelial neoplasia. The results are very consistent with the pathologists' visual inspection and judgment. Fig. 2 shows the graphical user interface for deformation-based object surface reconstruction, with typical examples. Fig. 3 shows mPAR-based non-rigid registration: the result of initial registration using mPAR is shown as the left-side contour sets, the two models before registration are shown as the middle contour sets, and it can be seen that the principal axes of the two multi-foci objects have been aligned fairly accurately (right-side contour sets). Fig. 4 shows the improved model fusion by thin-plate spline and dynamic deformation in site model construction; applying the dynamic deformation system to the initial state, the active object converges to the targeted object. Fig. 5 shows the mapping of multi-foci tumors of individual models into the site model. Fig. 6 shows the statistical atlas of localized prostate tumors reconstructed from 90 surgical specimens. As discussed before, the ultimate goal of case collection and 3-D matching is to create a master model of localized prostate cancer representing a spatial probability distribution. Using the proposed method, we have for the first time estimated this distribution, which may reveal important disease patterns of prostate cancer. It should be noticed that the multi-centricity pattern is clearly shown in the model and that the spatial distribution is not uniformly random. | 2014-09-14T19:03:52.000Z | 2014-09-14T00:00:00.000 | {
"year": 2014,
"sha1": "83bd48c7bc3bd3cc135c9a19199b94b077c68435",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "83bd48c7bc3bd3cc135c9a19199b94b077c68435",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Mathematics"
]
} |
57761838 | pes2o/s2orc | v3-fos-license | Recalcitrant anal and genital pruritus treated with dupilumab.
Chronic anogenital pruritus can significantly impair affected patients' quality of life by disrupting their sleep, mood, sexual function, and personal relationships. Although a significant portion of these patients can be managed with hygiene measures, topical therapy, oral anti-pruritics, and allergen avoidance after patch testing, guidelines to treat patients who do not respond to standard therapy have yet to be established. We describe the therapeutic response of a case of anogenital pruritus recalcitrant to multiple topical and systemic therapies. Treatment of this patient with dupilumab, an interleukin-4 receptor alpha blocker, resulted in clinical remission at 1 year from the initiation of the therapy, without significant adverse effects.
Introduction
Anogenital dermatoses can cause significant itching, burning, and stinging that can greatly impair patients' quality of life (Cather et al., 2017). Patients who are affected by anogenital skin disease experience even greater disruptions in sleep, mood, sexual function, and personal relationships than patients with dermatologic conditions elsewhere on their body (Cather et al., 2017;Malakouti et al., 2017;Ryan et al., 2015). Despite this, a large portion of these patients remain undertreated (Meeuwis et al., 2012). Many patients with anogenital symptoms respond to management with improved hygiene and topical treatments (Guerrero and Venkatesan, 2015), but guidelines for systemic treatment of patients who are resistant to these first-line measures have yet to be established.
We describe the therapeutic response of a case of anogenital itch recalcitrant to multiple topical and systemic therapies. The patient was eventually treated with dupilumab, a biologic agent that binds to the alpha subunit of the interleukin-4 (IL-4) receptor to modulate the signal of IL-4 and IL-13 (primary interleukins involved in type 2 helper T-cell response), which resulted in the clinical remission of the disease. The patient has maintained adequate clinical response at 1 year after the initiation of dupilumab without significant adverse effects.
Case
A 62-year-old Caucasian man with a history of asthma and allergic rhinitis presented with a 2-month history of itching in the anal area. The patient described constant itching throughout the day that would also wake him from sleep. The patient had previously applied topical hydrocortisone 2.5% cream and over-the-counter antifungal cream to the affected area, which slightly reduced the itch but did not prevent recurrence or resolve the itch. During a physical examination, pink erythema of the perianal skin was observed, but no scale or papules were noted.
The patient's medical history was also significant for attention deficit hyperactivity disorder and depression, which were well controlled on stable doses of bupropion and methylphenidate. His asthma was also well controlled on a stable regimen of albuterol, budesonide/formoterol, and montelukast. The patient also showed mild grade 1 anterolisthesis of L5 on S1 on pelvic magnetic resonance imaging that was related to bilateral L5 spondylolyses.
Over the next 2 years, the patient was treated with various topical and systemic therapies in combination, without significant resolution of his symptoms. Topical therapies applied included corticosteroid treatments (triamcinolone 1% cream, desonide 0.1% cream, fluocinonide 0.05% ointment, and hydrocortisone 2.5% cream), tacrolimus ointment, propylene-glycol free lidocaine 5% ointment, doxepin 5% cream, capsaicin 0.006% cream, and antifungal treatments (clotrimazole cream, miconazole, nystatin powder, and econazole cream). Systemic therapies used by the patient included anti-histamine (cetirizine, hydroxyzine, and doxepin), antibiotic (cephalexin, fluconazole, and rifampin), intramuscular triamcinolone, budesonide, and gabapentin treatments. Although many of these therapies provided transient relief of the patient's symptoms, he experienced breakthrough itching despite strict adherence to the treatment regimens.
During this time, the patient's itch worsened and spread to the genital area. Repeat physical examinations revealed pink papules on the base of the patient's penis, shaft, glans, and scrotum and demonstrated intense perianal erythema with overlying edematous papules ( Fig. 1A-C). The patient underwent thorough diagnostic testing to identify the etiology of the itch. Biopsy tests of the perianal area and head of the penis both revealed subacute spongiotic dermatitis, which ruled out lichen planus.
Food allergy testing revealed allergies to legumes, soy, peas, lentils, chickpeas, cruciferous vegetables, lettuce, avocado, onion, egg, and aged foods. The patient also followed a fragrance-and propylene glycol-free diet and used products only on the Contact Allergen Management Program list with strict avoidance of known allergens for several months, but with minimal improvement in his symptoms.
After 2 years of persistent itching in the anogenital area, the patient initiated mycophenolate mofetil 2 g daily for suspected atopic dermatitis around the anal area, given his history of mild atopy. After 4 months, the dose was increased to 3 g daily, and the patient reported some improvement in itching with mild breakthrough pruritus. However, the patient experienced an increased number of upper respiratory infections while on the medication and was concerned about its immunosuppressive effects.
The patient was switched to dupilumab with a 600-mg loading dose and 300 mg every 2 weeks for his symptoms. During reevaluation 1 month later, the patient reported a 95% resolution of the itching, and a physical examination revealed a complete resolution of his perianal dermatitis (Fig. 2). At the time of follow-up 12 months after initiating dupilumab therapy, the patient continued to report that the itching was well controlled, with occasional breakthrough itching managed by propylene glycol-free topical lidocaine cream up to two times per month. The patient denied experiencing any significant adverse effects since initiating dupilumab treatment.
Discussion
A wide range of conditions affect the genital skin, including lichen sclerosus, lichen simplex chronicus, atopic dermatitis, psoriasis, allergic contact dermatitis, and irritant contact dermatitis, but diagnosis can be difficult (Chan and Zimarowski, 2015). These common skin conditions often have a different morphology in the genital area compared with other areas of the body due to friction, heat, and occlusion in the genital area (Drummond, 2011). Thus, dermatologic conditions of the genitalia may present similarly both clinically and histologically and often require extensive testing and empiric treatment (Chan and Zimarowski, 2015).
Most cases of anogenital pruritus arise from fecal contamination of the perianal area and are exacerbated by trauma such as repeated scratching (Markell and Billingham, 2010;Siddiqi et al., 2008). However, a close physical examination may reveal secondary causes of anogenital pruritus, such as hemorrhoids and fissures, inflammatory skin disease, or infection.
Patients who are suspected to have allergic triggers for anogenital pruritus or itch that is recalcitrant to topical therapies should undergo patch testing to evaluate allergic contact dermatitis. Approximately half of patients with anogenital pruritus have at least one positive reaction with patch testing, with 20% of these clinical reactions clinically relevant for their itch (Bauer et al., 2000;Warshaw et al., 2008). Patients with allergic contact dermatitis of the anogenital area can often be managed with a withdrawal from allergic triggers and symptomatic treatment (Siddiqi et al., 2008). However, these patients often have other concomitant anogenital diseases that may require further intervention (Trivedi et al., 2018).
One often overlooked cause of idiopathic anogenital pruritus, particularly in older patients, is lumbosacral radiculopathy (Berger et al., 2013;Cohen et al., 2005). Spinal trauma or degenerative disc disease can cause severe itching in the genital area in the absence of a primary rash. The papular rash that was observed in the genital area of our patient suggested that his anogenital itch was not solely due to a neuropathic cause but more likely was caused by an inflammatory disease process.
Genital dermatoses are often difficult to treat due to the unique nature of this environment. Genital skin is thin, sensitive, and often occluded and may have increased absorption of topical treatments (Farage and Maibach, 2004). Additionally, genital skin demonstrates an exaggerated response to irritants compared with other areas of the body (Britz and Maibach, 1979). Localized atopic dermatitis is commonly managed with topical corticosteroid treatments, but application of higher-potency steroid ointments to the genital area may result in treatment-related adverse effects, such as skin atrophy and increased risk of a secondary infection (Johnson et al., 2012).
Systemic therapy should be considered for atopic dermatitis recalcitrant to topical therapy or with significant quality of life impairment (Simpson et al., 2017). Systemic immunosuppressant treatments, such as azathioprine, cyclosporine, methotrexate, and mycophenolate mofetil, have traditionally been used off-label to treat these patients, but they often require frequent laboratory monitoring for potentially serious side effects. Due to the immunosuppressive nature of these systemic medications, patients such as the man described in this case are at an increased risk of infection.
Dupilumab is a human monoclonal antibody that targets the IL-4 receptor and is approved for the treatment of moderate-to-severe atopic dermatitis in adults. To our knowledge, dupilumab has not previously been described in the medical literature as an effective treatment for anogenital pruritus. This biologic treatment inhibits IL-4- and IL-13-mediated inflammatory responses and has been shown in phase 3 trials to significantly decrease disease severity and risk of skin infections in patients affected with atopic dermatitis when given subcutaneously at 300 mg every 2 weeks (Fleming and Drucker, 2018; Simpson et al., 2016). No laboratory monitoring is required for this medication.
Side effects reported with dupilumab to date are minor, including slightly elevated rates of conjunctivitis (4%-5%) and injection-site reactions (8%-14%) over placebo (Simpson et al., 2016). Dupilumab is a relatively safe alternative for patients who experience significant pruritus that is thought to be inflammatory in origin. Because of our patient's history of atopic disease and visible skin inflammation despite allergen avoidance, his anogenital pruritus likely had a component of underlying atopic dermatitis, which was treated with dupilumab.
Conclusions
Anogenital itching is a burdensome condition that significantly impairs affected patients' sleep, sexual function, personal relationships, and overall quality of life. However, guidelines for the treatment of these conditions are lacking. Pruritus in the anogenital area requires testing for systemic, gynecologic, neurologic, and dermatologic causes and can often be difficult to treat. Patients with recalcitrant anogenital itching or significant quality of life impairment should be considered for systemic therapy. This case demonstrates the efficacy of dupilumab in a patient with recalcitrant anogenital pruritus. | 2019-01-22T22:30:21.146Z | 2018-10-04T00:00:00.000 | {
"year": 2018,
"sha1": "c68df1cacb05691b9834ee97969a38d7ecf359ad",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijwd.2018.08.010",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c68df1cacb05691b9834ee97969a38d7ecf359ad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
142138544 | pes2o/s2orc | v3-fos-license | Investigating Ilami EFL Performance in Observing Within-Word Rules Based on Their Gender
Since in EFL contexts both teachers and students are less exposed to authentic language with native pronunciation, presenting rules for standard American pronunciation appears to be of overarching importance. Accordingly, providing both learners and teachers with an effective phonological profile is indispensable for achieving a standard American pronunciation. Among the hindrances to achieving a standard pronunciation, the lack of such a profile seems to be of critical importance. Considering the lack of research on this obstacle in the Iranian context, the present study aimed at presenting within-word rules of standard American pronunciation for Ilami EFL teachers in segmental phonology. The study first extracted the within-word rules of standard American pronunciation from authentic sources; the resulting profile includes 23 within-word rules. Additionally, the present study investigated the role that Ilami EFL teachers' gender might play in their observance of the within-word rules developed in the study. Finally, the study determined whether observance of the within-word phonological rules of speech was affected by the gender of Ilami EFL teachers. To this end, from the accessible population (all male and female EFL teachers in Ilam province), 40 teachers (20 male and 20 female) were randomly selected and interviewed on their observance of the 23 rules by reading a topically unrelated corpus. Chi-square tests were conducted to analyze the data, using the Statistical Package for the Social Sciences (SPSS). The results showed no significant relationship between the gender of Ilami EFL teachers and their observance of the within-word rules.
However, considering the overarching role played by pronunciation in speech intelligibility, the need for practical training in teaching pronunciation is evident.
In today's interconnected world, knowing English has become an inevitable part of our lives. The way we speak gives people clues about us, and learners with good English pronunciation are more likely to be understood. The ability to communicate fluently, accurately, and intelligibly is the ultimate goal of learning a language. In English-as-a-foreign-language contexts, in which learners are not exposed to authentic language, access to a standard phonological profile is indispensable. To put it another way, a set of phonological rules is a helpful resource through which both teachers and learners can improve their communicative competence. Pronunciation involves attention both to the particular sounds of a language (segments) and to aspects of speech beyond the level of individual sounds (suprasegmentals), such as intonation, stress, phrasing, timing, and rhythm, which are best learned as an integral part of language.
2. Literature Review
Traditional approaches to pronunciation have often focused on segmental phonemes, as they relate in some way to letters in writing. Many adult learners find pronunciation one of the most difficult aspects of English to acquire and need explicit help from the teacher (Fraser, 2000). Surveys of student needs consistently show that learners feel the need for pronunciation work in class (Willing, 1989). According to Goodwin (in Celce-Murcia, 2001), lack of intelligibility can be ascribed to both segmental and suprasegmental features.
3. Statement of the problem
If the English language learner is not equipped with good pronunciation, effective oral communication is hindered. While a few studies have examined segmental and suprasegmental features of speech, no study has specifically developed phonological rules for EFL teachers, and no research has examined the relationship between the variable of gender and observance of the within-word rules developed in the present study. Therefore, conducting research projects that extend this issue and investigate this relationship is of critical importance. In order to bring a new insight into the literature, the current study attempts not only to provide twenty-three rules for within-word pronunciation but also to establish the existence or non-existence of a relationship between the above-mentioned variable and observance of the within-word rules of segmental phonemes.
"Pronunciation is an integral part of foreign language learning since it directly affects learner's communicative competence as well as performance." (Gilakjani, 2012).
Both segmental and suprasegmental phonology are important aspects of every language. Therefore, identifying significant pronunciation features (segmental aspects of speech) that cause problems for learners should be taken into account in EFL teaching.
Speech communication is the result of active cooperation between segmentals, namely phonemes and allophones, and suprasegmental aspects of speech: stress, intonation, and rhythm. Suprasegmental aspects of speech are necessary for speech communication and play a vital role in speech intelligibility.
By developing segmental rules, the current study helps to describe the interaction between the speech sounds of a language. The rules for segmental phonemes developed in the study include twenty-three rules within the boundary of a simple word, and they provide the basis for analyzing the data collected in the study. A well-organized speech is the result of following certain rules, but these rules should not be so complex that the learner finds language acquisition difficult (Celce-Murcia & Goodwin, 1991).
Definition of Technical Terms
Profile: Webster's dictionary (2012) defines the word profile as "a representation of something in outline."
Segmental phonemes
The vowels and consonants of a language are known as its segmental phonemes (Richards et al., 1992).
EFL Teachers
According to Celce-Murcia (2001), teachers of English as a foreign language are non-native speakers of English and are expected to serve as the major model and source of input in English for their students, whose oral communication needs mandate a high level of intelligibility and therefore require special assistance with pronunciation.
Design
The methodology of choice in the current study is a survey. Survey methods are among the core methods for collecting and analyzing data in sociology (de Vaus, 2013). The researcher relies upon questionnaires, interviews, mail, or telephone to obtain primary data and to communicate measurement (Gebremedhin and Tweeten, 1994). Thus, employing a survey method, the current study aimed first to determine to what extent the participants observe the rules developed in the study and, additionally, to generalize the findings to similar populations in different contexts.
Participants
The accessible population for the current study included male and female English teachers in Ilam city. Forty Ilami EFL teachers (twenty male and twenty female) were randomly selected. The participants who attended the interviews were high school English teachers. Table 1 depicts the number and distribution of the participants based on their gender and their observance of segmental rules within words.
Instrument
For the purpose of the survey, a corpus containing 23 topically unrelated sentences was utilized; each sentence contained a latent rule behind it. To avoid the guessing that multiple-choice questions allow, oral reading of the prepared data was preferred. Each participant was asked to read the sentences and words so that it could be determined whether s/he observed each rule. In the case of hesitation while reading items demanding connected speech, the participants were asked to re-read. The rules in the study were all extracted from valid sources.
Validity and Reliability
The reliability of the instrument (a corpus made up of topically unrelated sentences) was estimated by Cronbach's alpha and was established at α = .82. The content validity of the instrument was determined by experts.
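For reference, Cronbach's alpha for such a participants-by-items matrix of yes/no marks can be computed from the standard formula as sketched below (a generic sketch, not the SPSS routine used in the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: participants x items matrix (here it would be 40 x 23 of
    0/1 marks). alpha = k/(k-1) * (1 - sum(item variances) / var(total))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)
```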
Data Analysis
The rules were extracted from valid sources. For each participant, a mark was recorded by ticking yes or no for each item read. To analyze the data, the software SPSS (Statistical Package for the Social Sciences) was used.
Statistical Analysis
As mentioned earlier, SPSS was used to carry out an in-depth analysis of the accumulated data. Based on the research question of the study, the main statistical procedure employed was the Chi-square test.
Research Question: Is there any relationship between the gender of Ilami EFL teachers and observing the phonological rules within words?
Research Hypothesis: There is no relationship between the gender of Ilami EFL teachers and observing the phonological rules within words.
The cross-tabulation results of the participants' gender and their observance of the within-word phonological rules are illustrated in Table 2. Note: G-WWR = Gender-Within-Word Rules. The numbers of female participants who did and did not observe the within-word rules were recorded as 209 and 251, respectively; for the male participants, the corresponding numbers were 225 and 235. A Chi-square test was used to check the relationship between the gender of Ilami EFL teachers and their observance of the phonological rules within words; Table 3 shows the results. As can be seen, there was no statistically significant relationship between the gender of Ilami EFL teachers and the phonological rules within words, with a p-value of .322 at the .05 level. Therefore, the null hypothesis is supported. In other words, the female and male participants were found to perform the same regarding the phonological rules within words. Accordingly, it can be concluded that gender is not an effective factor.
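The reported test can be reproduced from the counts above; a minimal sketch using scipy (our code, not the SPSS output) is:

```python
from scipy.stats import chi2_contingency

# 2x2 table from the reported counts: rows = (female, male),
# columns = (observed rules, did not observe rules)
table = [[209, 251],
         [225, 235]]

chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction is
print(round(chi2, 3), round(p, 3), dof)           # applied by default for 2x2
# p comes out near the reported .322, i.e., non-significant at the .05 level
```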
Conclusions
The prime goal of the current study was to provide Ilami EFL teachers with within-word phonological rules in the segmental area of pronunciation. The present study also aimed at gaining new insights into the variable of gender that might influence the English pronunciation of Ilami EFL teachers, and at establishing the existence or non-existence of any relationship between gender and observance of the within-word phonological rules in the segmental area of speech. Data were collected through an interview (mainly asking the participants to read sentences or words) whose reliability and validity were established through Cronbach's alpha and expert judgment, respectively. The researcher adopted Chi-square tests to determine the relationships. The following presents the discussion of the findings.
Discussion
The research question sought to find any significant relationship between the gender of Ilami EFL teachers and their observance of the phonological rules within words. The results of the Chi-square test showed that observance of the phonological rules within words is not affected by the gender of Ilami EFL teachers, and the null hypothesis was supported.
The support for the null hypothesis regarding the segmental aspects of speech, and the absence of any relationship between gender and observance of the within-word phonological rules, may be a good indicator that pronunciation (especially its segmental aspects) is being neglected as one of the important aspects of language learning. This in itself shows that in an EFL context like Ilam in Iran, teachers seek only ways to make students translate and memorize content vocabulary, without any concern for correct and standard pronunciation or even for using the words in sentences and connected speech. In an EFL context like the one in which this project was conducted, working on vocabulary and reading comprehension is the center of attention rather than speaking and pronunciation proficiency. This, in turn, is because textbook material designers and developers have overlooked pronunciation; consequently, teachers have ignored this important aspect of language learning, as it is only slightly dealt with in textbooks. In fact, the reason for marginalizing pronunciation is that in Iran, as an EFL context, the English language is considered a pass to obtain higher degrees and/or promotions.
The literature on pronunciation suggests a need for developing textbook materials to be taught as pre-service courses for teachers. Thus, the need for authentic sources that are easily understood and that equip both teachers and learners with a standard pronunciation is obvious. | 2019-05-02T13:06:53.418Z | 2015-06-24T00:00:00.000 | {
"year": 2015,
"sha1": "4139552c915c2dddcfa37452304643df94015096",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.sbspro.2015.06.022",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "75cbb9e2122af768ae8681274f518b65df5a0559",
"s2fieldsofstudy": [
"Linguistics",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
18857458 | pes2o/s2orc | v3-fos-license | Effect of 17α-methyltestosterone (MT) on oxidation stress in the liver of juvenile GIFT tilapia, Oreochromis niloticus
The normal dose of 17α-methyltestosterone (MT) used in fish farming is 60 mg/L, and residual androgens can now be detected in waste water obtained from the Beijing area at levels ranging from 4.1 to 7.0 ng/L. For the purpose of aquatic early warning, the present study clearly demonstrated that chronic exposure to MT at concentrations higher than environmentally relevant ones could trigger an oxidative stress response in juvenile tilapia by modulating hepatic antioxidant enzyme activities and gene transcription. Some antioxidative parameters (T-GSH, GSH/GSSG and MDA) were significantly decreased under 0.5 mg/L MT exposure at 7 and 14 days. Some antioxidant enzymes (SOD, CAT and GST) and transcript levels (sod and cat) showed significant decreases in the MT-treated groups at 7 days. Total antioxidant capacity was significantly increased only in the 5 mg/L MT exposure groups, but GR activities were not affected throughout the whole exposure period. Almost all of the antioxidant enzymatic genes detected in the present study showed significant increases under MT exposure at both 14 and 21 days, and the genotoxicity profile of the antioxidant enzymatic genes revealed a dose-dependent manner. This study presents evidence that MT can elicit an oxidative stress response in the early stages of GIFT tilapia. Electronic supplementary material The online version of this article (doi:10.1186/s40064-016-1946-6) contains supplementary material, which is available to authorized users.
Background
17α-Methyltestosterone (MT), an artificial androgenic compound, is often used to induce masculinization of both secondary sex characteristics and gonads in aquatic field studies (Homklin et al. 2009; Golan and Levavi-Sivan 2014; Shen et al. 2015). For example, male tilapia and yellow catfish grow faster than females, and MT immersion of sexually immature hatched larvae has been used to produce mono-male groups. MT also induces organic impairment (Seki et al. 2004) associated with detoxification and antioxidant defense systems in the form of whole cytochrome P450 (CYP) biotransformation (Kim et al. 2014). Some artificial compounds target the oxidative defense system via hormone biosynthesis and the catalytic mechanism of CYP (Mimeault et al. 2006; Ibrahim and Harabawy 2014), and androgenic compounds can directly induce antioxidant enzymatic activities and genotoxicity (Larsson et al. 2002).
ROS-induced oxidative stress has been considered to contribute to abnormal development during embryogenesis, and growing evidence shows that oxidative stress could be an important pathogenic mechanism of neurological and developmental deficits in both animals and humans.
Taking the effects on the fish antioxidant defense system following exposure to a common freshwater pesticide (atrazine) as an example, the detected endpoints usually comprise antioxidant enzymatic activities and genotoxicity, which have been examined in zebrafish Danio rerio (Jin et al. 2010), common carp Cyprinus carpio (Chen et al. 2015), and so on. The liver is one of the main target organs (Salaberria et al. 2009; Jin et al. 2012; Kroon et al. 2014), but hepatic transcripts are not always affected by androgenic compounds in the face of oxidative stress (Albertsson et al. 2010). Nile tilapia, Oreochromis niloticus, is sensitive to the oxidative stress caused by pollutants and can be treated as an ideal material for toxicity experiments (Meng et al. 2014a, b). Our previous study showed that hepatic SOD, CAT and GPx activities and their transcripts were increased in Nile tilapia under methomyl exposure (Meng et al. 2014a, b). The GIFT strain of Nile tilapia ("GIFT tilapia" for short), a tropical species, is suitable for culture in warm waters and very sensitive to aquatic environmental factors (Gabriel et al. 2015). The normal dose of MT used in fish farming is 60 mg/L (Rivero-Wendt et al. 2013), and residual MT can now be detected in waste water obtained from the Beijing area of China (4.1-7.0 ng/L; Sun et al. 2010). A decrease in the egg-laying rate of female Japanese quails (Coturnix coturnix japonica) and in the fertility of male Japanese quails was observed when they were exposed to 50-110 mg/L of MT for 3 weeks (Homklin et al. 2012). A minor deficit of the former studies is that they only used antioxidative enzymatic activities (Meng et al. 2014a, b; Ma et al. 2015; Gabriel et al. 2015), whereas fish genotoxicity may be more sensitive to pollution (Chakravarthy et al. 2014). Knowing that MT has the potential to induce oxidative stress, the main purpose of the present study was to investigate the hepatic genotoxicity (transcriptional) and antioxidant enzymatic signature (post-transcriptional) of freshwater GIFT tilapia O. niloticus juveniles responding to 0.5 and 5 mg/L MT exposure. The present study also measured other antioxidant parameters to further verify that the hepatic antioxidant defense system is impaired following MT exposure.
Experimental design
Fertilized eggs of GIFT tilapia, O. niloticus, were obtained from the Freshwater Fisheries Research Center of the Chinese Academy of Fishery Sciences, Yixing. One-month-old O. niloticus juveniles were used in the experiment and were acclimatized in the aquarium facility in dechlorinated tap water at 25 ± 1 °C, with a 14 h:10 h light/dark cycle. The experimental fish were offered feed once a day; the feed was purchased from Jiangsu Zhe Ya Food Co. Ltd, China. Fish (4.04 to 4.97 g) were randomly selected for the exposure experiments. Throughout the experimental period, water samples were taken before and after each water change, and the experimental conditions were as follows: pH, 7.1 ± 0.5; dissolved oxygen (measured with a YSI 556MPS, USA), 7.16 ± 0.16 mg/L; total phosphate, 2.16 ± 0.17 mg/L; total nitrogen and ammonia nitrogen (by Nessler's reagent spectrophotometry), 0.52 ± 0.15 and 0.44 ± 0.06 mg/L, respectively; total water hardness (ICP-OES, Optima 7000, PerkinElmer, USA), 194.3 ± 13.0 mg/L CaCO3.
The GIFT tilapia juveniles (n = 360) were assigned to nine groups (n = 40 per aquarium). MT was purchased from Sigma-Aldrich (St Louis, MO, USA). Three groups of fish were exposed to 0.5 mg/L MT, three groups to 5 mg/L MT, and the remaining three were reared in water without MT; each treatment was thus run in triplicate. Fish were exposed to the test solutions for 21 days, and all exposure solutions were replaced every 48 h with fresh solutions of the same concentration during the exposure experiment. The control group was likewise kept for 21 days, with the MT-free water changed every 48 h. There were no statistically significant differences in body weight or length in the exposure experiment. During the experiment, no fish mortality was observed. From the initial exposure day, water was sampled every 2 days in both the control and experimental groups, in triplicate.
Fish sampling
All fish liver samples of the exposure and control groups were collected once a week. In each group at each sampling point, livers were sampled for gene expression (n = 6) and biochemical analysis (n = 6), respectively. Samples for gene expression studies were homogenized in Trizol reagent (Invitrogen, USA), frozen in liquid nitrogen, and stored at −80 °C immediately until use.
Determination of oxidative stress
For biochemical analyses, at every sampling point six fish per group were killed by a sharp blow to the head; the liver of each was washed thoroughly with ice-cold physiological saline (0.86 % NaCl), blotted dry with absorbent paper, and weighed. Whole liver samples were homogenized on ice in cold 0.86 % physiological saline (1:9, w/v) and then centrifuged at 2,500 r/min at 4 °C for 10 min. The supernatant was used for the activity assays of CAT, GPx, GR, GST and SOD and for the determination of MDA, GSH, total protein and total antioxidant capacity using commercial kits purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, China), in triplicate. The assays were quantified spectrophotometrically with a PowerWave XS2 (BioTek Instruments Inc, Vermont, USA).
The total protein content (recorded at 595 nm) was determined using Coomassie Brilliant Blue G-250 staining (Bradford 1976). MDA content was measured by assaying the decomposition product of polyunsaturated fatty acid hydroperoxides was determined by the TBA reaction as described by Luo et al. (2006). According to the directions, the mixture was heated at constant temperature at 95 °C for 40 min, cooled by running water and centrifuged at 3500 r/min for 10 min. The absorbance of the supernatant was recorded at 532 nm. GSH (the total glutathione) content was quantified using the method of reacting with 5,5-dithiobis-2-nitrobenzoic acid (DTNB) (Beutler and Kelly 1963) and the generated yellow compound's absorbance was recorded at 420 nm.
SOD activity was measured through the method of WST-1 by inhibiting of nitroblue tetrazolium reduction at 450 nm (Huang et al. 2007). The final concentration consisted of 50 mM sodium phosphate buffer, 0.1 mM EDTA, 0.01 mM cytochrome c, 0.05 mM xanthine, and 0.005 mM xanthine oxidase. The reaction was initiated when xanthine oxidase was added to the enzyme extract at 25 °C. One unit of SOD activity is defined as the amount of enzyme required to inhibit the oxidation reaction by 50 % and is expressed as U/mg protein. CAT activity was determined by measuring hydrogen peroxide based on the production of its stable complex with ammonium molybdate at 405 nm (Góth 1991). The reaction system consisted of 50 mM sodium phosphate buffer (pH 7.0) and 19 mM hydrogen peroxide. The reaction was quantified at 25 °C by measuring the disappearance of H 2 O 2 . GPx activity was assayed with the spectrophotometer by surveying the decrement of the glutathione's enzymatic reaction at 412 nm. One unit (U) of CAT and GPx activity is defined as the amount of enzyme consuming 1 μmol of substrate or generating 1 μmol of product per minute and refereed per milligram soluble protein (U/mg protein). GR activity was determined by monitoring the glutathione-dependent oxidation of NADPH at 340 nm (Schaedle 1977). GST activity was measured using 1-chloro-2,4-dinitrobenzene (CDNB) as a substrate (Zhang et al. 2004), and the enzyme activity was determined by monitoring changes in absorbance at 412 nm. The assay contains 100 mM sodium phosphate buffer (pH 6.5), 60 mM glutathione (GSH), and 60 mM CDNB (dissolved in ethanol). One unit of GST activity was calculated as the amount of enzyme catalysing the conjugation of 1 μmol of CDNB with GSH per minute at 25 °C.
RNA extraction, reverse transcription (RT) and qRT-PCR
Total RNA was extracted from the livers of GIFT tilapia juveniles from the MT exposure and control groups with Trizol reagent (Invitrogen, USA) and further treated with RNase-free DNase I (Fermentas, Canada). To check for genomic DNA contamination and verify total RNA quality, the RNA was run on a 1 % agarose gel stained with EtBr (Sigma-Aldrich, USA) to confirm the integrity of the 18S/28S ribosomal RNAs, together with spectrophotometric quantification (NanoDrop 1000, Thermo Scientific, USA). After RNA quality was confirmed, cDNAs were synthesized from 3 μg of DNase I-treated total RNA using the M-MLV First Strand cDNA Kit (Invitrogen, USA) with oligo(dT)12-18 primers in a 20 μL final volume according to the instruction manual. The cDNAs were used for gene cloning and, after normalization, for gene expression analysis.
qRT-PCR was performed on a CFX96 Real-Time PCR System (Bio-Rad, USA) with SYBR reagent (TaKaRa, Japan). After normalization of the cDNA samples, the qRT-PCR reactions were carried out with 1× SYBR Premix Ex Taq™, 0.4 μM of each primer, and 2.5 μL of RT reaction solution in a final volume of 25 μL, in triplicate. The reaction was initially denatured at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 5 s and annealing at 60 °C for 30 s. A melt curve analysis was performed at the end of each PCR thermal profile to assess the specificity of amplification.
β-Actin was the most stable reference gene under MT exposure in our study, selected with the method described in Zheng et al. (2014) (details not shown). The qRT-PCR primers for β-actin, sod, cat, gpx1, gr and gst are listed in Additional file 1: Table S1. Each transcript was analyzed in six individuals per sampling point per group. The changes in the expression levels of these antioxidant genes after MT exposure were calculated by the 2^(−ΔΔCt) method, F = 2^(−ΔΔCt), where ΔΔCt = (Ct,target gene − Ct,β-actin)MT − (Ct,target gene − Ct,β-actin)control (Livak and Schmittgen 2001).
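For readers who want to trace the fold-change arithmetic, a minimal sketch of the 2^(−ΔΔCt) calculation is given below; the Ct values are hypothetical and serve only to illustrate the formula above.

```python
# Minimal sketch of the 2^(-ddCt) relative-expression formula above
# (Livak and Schmittgen 2001). All Ct values here are hypothetical.
def fold_change(ct_target_mt: float, ct_actin_mt: float,
                ct_target_ctrl: float, ct_actin_ctrl: float) -> float:
    # ddCt = (Ct,target - Ct,beta-actin)_MT - (Ct,target - Ct,beta-actin)_control
    ddct = (ct_target_mt - ct_actin_mt) - (ct_target_ctrl - ct_actin_ctrl)
    return 2 ** -ddct

# Example: a target transcript in an MT-exposed fish vs. the control group.
print(fold_change(24.1, 18.0, 25.5, 18.3))  # ~2.14-fold up-regulation
```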
Statistical analysis
All experimental data are shown as the mean ± standard deviation (SD). Data were tested for normality of distribution (Shapiro-Wilk test) and homogeneity of variance (Levene's test) prior to analysis. The data were analyzed by one-way ANOVA followed by the LSD test (Ahmad et al. 2006) in SPSS Statistics 18.0 (SPSS Inc., Chicago, IL, USA), with P < 0.05 indicating a significant difference. Data that did not conform to the assumptions of normality and homoscedasticity were log-transformed (lg) and then analyzed as described above.
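As an illustration, the sketch below runs the same sequence of checks (Shapiro-Wilk, Levene, lg transform, one-way ANOVA) with SciPy; the arrays are hypothetical, and the LSD post-hoc comparisons performed in SPSS are not reproduced here.

```python
# Sketch of the statistical pipeline described above; data are hypothetical.
import numpy as np
from scipy import stats

control = np.array([1.02, 0.95, 1.10, 0.99, 1.05, 0.97])
mt_low  = np.array([0.81, 0.78, 0.90, 0.85, 0.80, 0.88])
mt_high = np.array([0.64, 0.70, 0.61, 0.66, 0.72, 0.59])
groups = [control, mt_low, mt_high]

normal = all(stats.shapiro(g)[1] > 0.05 for g in groups)  # Shapiro-Wilk p-values
homosc = stats.levene(*groups)[1] > 0.05                   # Levene's test p-value
if not (normal and homosc):
    groups = [np.log10(g) for g in groups]                 # the lg transform used above

f_stat, p_value = stats.f_oneway(*groups)                  # one-way ANOVA
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")              # P < 0.05 -> significant
```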
Antioxidant parameters following MT exposure
The antioxidant parameters (T-GSH, GSH/GSSG, T-AOC and MDA) are shown in Fig. 1. T-GSH, GSH/GSSG and MDA were significantly decreased in the 0.5 mg/L MT exposure groups at 7 and 14 days. At 21 days, T-GSH and MDA were significantly decreased and increased in the 0.5 and 5 mg/L MT exposure groups, respectively (Fig. 1A, D), while GSH/GSSG showed a significant increment only in the 5 mg/L MT exposure groups (Fig. 1B). T-AOC revealed significant increments in the 5 mg/L MT exposure groups throughout the exposure time (Fig. 1C).
Antioxidant enzymes
The activities of the antioxidant enzymes (SOD, CAT, GPx, GR and GST) are shown in Fig. 2. SOD, CAT and GST activities decreased significantly under MT exposure at 7 days, while GPx activities increased significantly (by 115 and 131 % in the 0.5 and 5 mg/L MT exposure groups, respectively; Fig. 2C). GPx activities also increased significantly at 14 days (0.5 mg/L) and 21 days (0.5 and 5 mg/L), while SOD activities decreased significantly in the 5 mg/L MT exposure groups (Fig. 2A). CAT activities decreased significantly at 14 days (0.5 mg/L) and 21 days (0.5 and 5 mg/L), and increased significantly (by 131 %) only for 5 mg/L MT at 14 days (Fig. 2B), while GST activities increased significantly only for 0.5 mg/L MT at 14 and 21 days (Fig. 2E). GR activities were not affected throughout the whole exposure period (Fig. 2D).
Genotoxicology
The gene expression profiles of the antioxidant enzymes (sod, cat, gr, gpx1 and gst) are shown in Fig. 3. sod and cat were significantly down-regulated under MT exposure at 7 days (Fig. 3A, B), while gst was significantly up-regulated (Fig. 3E); gr and gpx1 transcripts were not affected at 7 days (Fig. 3C, D). Except for the sod and gpx1 transcripts at 14 days (Fig. 3A, C), all of the antioxidant enzyme genes detected in the present study were significantly up-regulated under MT exposure at both 14 and 21 days, and the genotoxicity profile of the antioxidant enzyme genes revealed a dose-dependent pattern.
Discussion
SOD and CAT comprise the first-line defense against oxygen toxicity and serve as early indicators of exposure to pollutants that trigger oxidative stress. GSH prevents free radical damage and aids detoxification by conjugating with chemicals, and GST is an important phase II detoxification enzyme that conjugates substrates with glutathione to produce less toxic, more water-soluble compounds. Androgenic compounds directly induce antioxidant enzymatic activities and transcriptional genotoxicity (Larsson et al. 2002). Previous studies have used Nile tilapia as a chemical test model to further verify the mode of action in the metabolic mechanisms of xenobiotic pollutants (Meng et al. 2014a, b). The present study identified sensitive hepatic biomarkers of juvenile GIFT tilapia following MT exposure in the form of antioxidant enzyme activities and transcripts. The antioxidative indices (Fig. 1) in the 5 mg/L MT exposure groups were significantly higher than those in the 0.5 mg/L groups, while some antioxidative enzymatic activities (SOD, GPx, GST) in the 5 mg/L groups were significantly lower than those in the 0.5 mg/L groups at 14 days. The genotoxicity data showed that the detected parameters presented a dose-dependent pattern at 14 days (except for sod and gpx1) and at 21 days. In conclusion, genotoxicity was more sensitive than the enzymatic parameters based on the data observed in the current study (Chakravarthy et al. 2014).
SOD, CAT and GST activities showed almost the same tendency in the present study, which is inconsistent with the differing responses demonstrated under chlorpyrifos exposure (Jin et al. 2015). CAT and GPx can act cooperatively as scavengers of H2O2 and other hydroperoxides. Vieira et al. (2012) reported that decreased CAT activity was concomitant with stimulated SOD and GPx activity in goldfish under acute manganese exposure. The present study showed the reverse tendency between SOD and CAT activities on the one hand and GPx activity on the other in the treated groups at 7 days, and especially at 14 days, which suggests that the decrease in CAT activity could be compensated by the marked increase in GPx activity (Atli and Canli 2011). GST activity was also significantly induced in the 5 mg/L MT groups, in agreement with observations in Leuciscus cephalus exposed to heavy metals (Hermenean et al. 2015). The decreased total GSH observed in the present study matches findings in methylmercury-exposed rainbow trout (Mozhdeganloo et al. 2015), while it differs from waterborne lead exposure in tilapia (Kaya and Akbulut 2015). The alteration of GSH content and metabolism in different studies suggests that GSH has a key role in the oxidative toxicity caused by MT. The reduction in MDA levels in the present study is not in agreement with the study performed by Mozhdeganloo et al. (2015). The decrease in the GSH/GSSG ratio in the 0.5 mg/L MT exposure groups implies oxidation of GSH to GSSG, a signal of increased detoxification of ROS (Guzmán-Guillén et al. 2013), as also observed in tilapia liver under spinosad exposure (Piner and Üner 2013).
Transcriptional changes in these genes in the liver can be good biomarkers of stress levels, as in O. javanicus exposed to iprobenfos (Woo et al. 2009), and this study indicates that transcripts of the detected antioxidant enzymes were significantly increased except for sod and cat. The down-regulation of sod and cat matches observations in juvenile Jian carp challenged with dietary choline (Wu et al. 2014), whereas up-regulated sod, cat and gr mRNA levels have been suggested to reflect an adaptive mechanism against stress in Jian carp. The transcriptional and functional responses of the antioxidant enzymes were inversely correlated in GIFT tilapia exposed to MT, as demonstrated in common carp exposed to organochlorine pesticides (Karaca et al. 2014). The dose-dependent pattern revealed in the present study matches studies of benzo[a]pyrene in marine medaka Oryzias melastigma (Kim et al. 2014), atrazine exposure in zebrafish (Jin et al. 2010) and methomyl in Nile tilapia (Meng et al. 2014a, b). This study indicates that although MT stimulates adaptive increases in the expression of some antioxidant enzyme genes, it also induces oxidation and the depletion of most antioxidant enzyme activities (Fig. 2a, b, except for 5 mg/L MT at 14 days; Fig. 2e, except for 0.5 mg/L MT at 14 and 21 days) and of GSH content (Fig. 1a) owing to increased ROS production (Meng et al. 2014a, b; Mukhopadhyay et al. 2015). In conclusion, genotoxicity endpoints are reliable environmental biomarkers of MT-induced oxidative stress in tilapia juveniles, while SOD, CAT, GPx and GST activities and T-AOC (5 mg/L MT) are also sensitive to MT exposure. They are therefore useful biological indicators of environmental MT contamination in the aquatic ecosystem (Kayode et al. 2014).
Conclusion
Antioxidative parameters (T-GSH, GSH/GSSG and MDA) and antioxidant enzymes (SOD, CAT and GST) decreased significantly under MT exposure, and the genotoxicity profile of the antioxidant enzyme genes revealed a dose-dependent pattern. The current study presented evidence that MT can elicit an oxidative stress response in the early life stages of GIFT tilapia. | 2016-05-04T20:20:58.661Z | 2016-03-15T00:00:00.000 | {
"year": 2016,
"sha1": "469110ff15061063f87360339b5452931303c4f1",
"oa_license": "CCBY",
"oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/s40064-016-1946-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "469110ff15061063f87360339b5452931303c4f1",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
30559229 | pes2o/s2orc | v3-fos-license | The prospective study of cardiac MRI in the diagnosis and treatment of cardiac sarcoidosis associated with atrioventricular block
Background
Sarcoidosis is a systemic granulomatous disease of unknown etiology. Most patients present with pulmonary involvement, but associated cardiac sarcoidosis (CS) is a critical factor determining prognosis. Recently, cardiac magnetic resonance imaging (CMR) and 18F-fluoro-2-deoxyglucose positron emission tomography (18F-FDG PET) have made it possible to detect cardiac lesions in asymptomatic sarcoidosis patients. Conduction abnormalities and ventricular tachycardia are the most common arrhythmias in CS. Our experience with CS detected incidentally at an early stage during a PET medical examination led us to recognize the usefulness of CMR, and we therefore conducted a study to determine the real-world prevalence of CS among advanced atrioventricular block (AVB) cases and to assess the usefulness of "aggressive" diagnosis of CS by cardiac MRI.
Methods
Between June 2009 and May 2013, one thousand four hundred eighty-seven patients (nine hundred fifty-two males) underwent CMR. Over these five years, we performed CMR whenever possible on advanced AVB cases that had an indication for pacemaker implantation. CMR was performed using a 1.5-T MR system (Intera Achieva, Philips Medical Systems) with 5-element cardiac coils. In cases with temporary pacing, the CMR scan was performed immediately after extraction of the temporary pacing lead, with confirmation of a spontaneous beat, and was followed by pacemaker implantation. We excluded cases over 85 years old or in severe circulatory failure with AVB.
Results
Of all fourteen hundred eighty-seven CMR cases over the five years, sixty-six were AVB cases. In the same period, a pacemaker was newly implanted for AVB in one hundred fifty-four cases, of which one hundred twenty-five were under 85 years old. Among these, CMR scans were performed in fifty (40% of one hundred twenty-five) cases. Thirteen cases (20% of sixty-six) showed the pattern of CS with late Gd enhancement (LGE). Nine of the thirteen were diagnosed as CS according to the criteria and FDG-PET scan. Five cases received steroid therapy and three showed improvement of AVB; in particular, pacemaker implantation could be avoided in one case. Unfortunately, one case did not receive steroid therapy at an early phase and developed congestive heart failure. In addition, FDG-PET was more useful than Ga scintigraphy for assessing the activity of CS.
Conclusions
CMR in advanced AVB has been difficult to perform because of the bradycardia. Even though only a few of all newly pacemaker-implanted cases turn out to be CS, the impact on the therapeutic strategy is enormous. In the diagnosis of CS, positive LGE and positive PET findings have important implications. CMR is useful as a screening tool for CS and should be performed as far as possible in advanced AVB cases.
Figure 1. The effect of PSL administration in cardiac sarcoidosis, which prevented pacemaker implantation. | 2018-05-08T18:30:47.675Z | 2015-02-03T00:00:00.000 | {
"year": 2015,
"sha1": "39b2d9999bc87fca777bd20a0a153f11eff0c4a0",
"oa_license": "CCBY",
"oa_url": "https://jcmr-online.biomedcentral.com/track/pdf/10.1186/1532-429X-17-S1-P373",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "952bfdd1dd4cb778ca4ae0204ec553503e4cdca3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
135031065 | pes2o/s2orc | v3-fos-license | Erosive Potential in Sub-basins of the Lower Itapecuru River in the State of Maranhão, Northeastern Brazil
Erosive potential of the sub-basins of the lower course of the Itapecuru River, State of Maranhão, Northeastern Brazil. ABSTRACT: The objective of this research was to estimate soil loss due to sheet erosion in ten sub-basins (SWDs) located in the lower course of the Itapecuru River, based on different land use and coverage scenarios in the years 2005, 2010 and 2015. For this purpose, the Universal Soil Loss Equation (USLE) was used, which integrates the following factors: rainfall erosivity (R), soil erodibility (K), the topographic factor (LS) and the land use and conservation factor (CP). The results showed that annual R is equivalent to 11,314.5 MJ mm ha⁻¹ year⁻¹, with the greatest effects in March and April. The K values are related to the soil typologies of the sub-basins, in which Plinthosols and Argisols predominate, with estimated contributions of 0.0429 and 0.030 t ha MJ⁻¹ mm⁻¹, respectively. The LS factor revealed a predominance of flat and gentle relief, with slopes ranging between 0° and 5° and extending over 93% of the study area. The CP pattern indicated that green areas were predominant in 2005, 2010 and 2015 and that, over the ten years, the main changes occurred in sub-basins 1, 2 and 10. As for the USLE, the "Very Low" class was the most representative throughout the time series. However, owing to changes in the CP factor, mainly in sub-basins 1 and 2, an expansion of the areas susceptible to sheet erosion was evidenced through the growth of the "Moderate" and "Moderate to strong" classes, triggered by changes in landscape patterns. The results obtained are significant for environmental management and the prioritization of environmental conservation actions in the sub-basins. Keywords: erosivity, erodibility, relief, soil conservation, USLE.
Introduction
The management and conservation of soil and water in a river basin require an understanding of the dynamics of the erosive processes to which this planning unit is subjected. Erosion is "the process of the detachment and accelerated dragging of soil particles caused by the action of water and wind" (Bertoni and Lombardi Neto, 2012, p. 68) and is considered one of the greatest problems of land degradation throughout the world (Devatha et al., 2015). Erosion constitutes a huge risk to the extensive territory of Brazil (Guerra et al., 2014). According to Vitte and Mello (2007, p. 130-131), "erosion problems in Brazil are the result of the combination of a rapid process of land occupation and technification, fragile soils and a climatic regimen that is propitious to the intense occurrence of this phenomenon." Erosive processes have diverse impacts on environmental components, with negative effects on soil fertility (Prasannakumar et al., 2012), the silting of rivers (Demarchi and Zimback, 2014), floods and gullies (Vieira, 2008), overflows (Zhou et al., 2008), changes in the landscape pattern (Shi et al., 2013), changes in water quality (Santos and Hernandez, 2013) and socioeconomic problems (Guerra et al., 2014).
Erosive potential can be measured with different methods, such as the Universal Soil Loss Equation (USLE) proposed by Wischmeier and Smith (1978). This equation is the most widely used model throughout the world and provides useful information for the adequate planning of soil and water conservation. The application of the USLE at the scale of a river basin is facilitated by the use of a geographic information system (Oliveira et al., 2012). The equation has been employed in the international literature addressing erosive potential in river basins in order to suggest strategies for soil management and conservation (Irvem et al., 2007; Shi et al., 2012), environmental recovery (Stipp et al., 2011) and the management of natural resources (Beskow et al., 2009; Bezerra and Silva, 2014). According to Bezerra and Silva (2014, p. 195), "the study of the risk of soil loss constitutes one of the elements that can serve as the basis for the planning of river basins and defining goals, objectives and actions to be developed in studies and environmental plans addressing water resources."
The information generated in the modeling of soil loss due to sheet erosion is fundamental to the environmental management of a river basin and assists in the understanding of the interactions triggered by erosive processes. The results allow the mapping of the areas that are more susceptible to erosion and that should be prioritized in control and conservation measures aimed at ensuring the sustainability of a given unit of analysis. However, there are no publications with information on the estimation of soil loss due to sheet erosion in the Itapecuru River basin in the state of Maranhão, Brazil.
The area selected for the present study consists of ten sub-basins located in the municipalities of Bacabeira, Rosário and Santa Rita, which form part of the industrial and port expansion zone of the state capital, São Luís. Increasing pressure on natural resources is projected for the upcoming years, especially if environmental planning strategies are not drafted and implemented with the aim of maintaining the landscape as well as conserving vegetation, soil and water.
The purpose of the present study was to estimate soil loss due to sheet erosion in ten sub-basins located in the lower course of the Itapecuru River, based on different land use and coverage scenarios in the years 2005, 2010 and 2015, in order to contribute information to the process of environmental planning in the region.
Study area: Itapecuru River basin
The Itapecuru River basin covers 53,216.84 km², which corresponds to 16% of the area of the state of Maranhão (NUGEO, 2011). The basin is bounded to the south and east by the Parnaíba River basin over the Itapecuru Hills, Azeitão Mesa and other small rises, to the west and southwest by the Mearim River basin, and to the northeast by the Munim River basin (IBGE, 1998). According to Alcântara (2004), the different altitudes allow the classification of the Itapecuru River into upper, middle and lower courses (Figure 1).
The present study focused on ten sub-basins (SWDs) located in the lower course of the Itapecuru River, covering an area of 421.6 km² and situated in the municipalities of Rosário, Bacabeira and Santa Rita in the state of Maranhão, Brazil. The estimated population of the three municipalities is 93,227 inhabitants: 16,553 in Bacabeira, 41,694 in Rosário and 35,980 in Santa Rita (IBGE, 2015). Geographically, the sub-basins are located in the microregion denominated Itapecuru Mirim in the northern portion of the state, approximately 50 km from the state capital São Luís, and are limited by the following coordinates: UTM 598658/574822 East and 9678715/9653145 North (Figure 2). The main accesses are the BR-135 and BR-402 roadways, which link the municipalities to the state capital. Production in the region is mainly related to industrial agriculture, civil engineering, the mechanical metal industry and the service industry (FSADU, 2013). Among these enterprises, only some agricultural, livestock and extraction activities are historically linked to the economic base of the municipalities of Bacabeira, Rosário and Santa Rita (IMESC, 2014). With the implantation of the work site for a large petrochemical enterprise and the creation of the industrial district of Bacabeira in 2008, the basis of the economy was altered and new economic activities emerged in the region.
According to the Thornthwaite moisture index, the climate is humid (LABGEO, 2002). Mean annual rainfall (1975-2015) in the region is 1998.8 mm. Two well-defined seasons occur: the rainy season spans from January to July and the dry season from August to December (INMET, 2015).
The soils in the area under the influence of the sub-basins are dystrophic argilluvic Plinthosol, concretionary petric Plinthosol, dystrophic red-yellow Argisol and dystrophic yellow Latosol (IBGE, 2007). The topography is characterized as flat to mildly undulating, corresponding to a dry flat surface on which plateaus, low hills with somewhat convex tops (sometimes nearly mesas molded in sedimentary rock) and shallow valleys are located (FSADU, 2008).
The calculation of the USLE was based on a database developed for the present study in a geographic information system. The modeling of such a database consists mainly of the definition of information planes, also denominated levels or layers; information planes vary in number, format and theme in accordance with the needs of each task or study (Câmara et al., 2001). For annual soil loss (A), information planes were generated for each variable in the equation. Rainfall erosivity (R) expresses the capacity of rainfall in a given location to cause erosion in an unprotected area. This factor was determined as the sum of the monthly erosivity indices, calculated following the recommendations proposed by Lombardi Neto and Moldenhauer (1992), to give the annual R value.
For the calculation of R, mean total monthly and annual precipitation data from a 40-year historical series (1975 to 2015) were used. These data were acquired from the meteorological station of the Brazilian National Meteorology Institute (INMET, 2015) located in the city of São Luís.
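A sketch of the monthly erosivity summation is shown below. The power-law form EI = a(r²/P)^b follows the index attributed to Lombardi Neto and Moldenhauer (1992); the coefficients used here are the values commonly cited for that equation and are an assumption to be verified against the original reference, and the monthly rainfall figures are hypothetical.

```python
# Sketch of the annual R computation from mean monthly rainfall (r) and mean
# annual rainfall (P). The coefficients a and b are commonly cited values for
# the Lombardi Neto and Moldenhauer (1992) index -- an assumption to check
# against the original reference. Monthly rainfall values are hypothetical.
monthly_mm = [320, 280, 410, 390, 230, 120, 90, 25, 10, 8, 30, 86]
P = sum(monthly_mm)  # mean annual precipitation, mm

def monthly_ei(r: float, P: float, a: float = 68.730, b: float = 0.841) -> float:
    return a * (r ** 2 / P) ** b  # monthly erosivity index

R = sum(monthly_ei(r, P) for r in monthly_mm)  # annual R, MJ mm ha^-1 year^-1
print(round(R, 1))
```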
Soil erodibility (K) is the property of the soil that represents its susceptibility to erosion and is defined as the amount of material removed per unit of area when the other determinant factors are constant. K is a quantitative value determined experimentally on plots of land and is expressed as soil loss per unit of rainfall erosivity index (Bertoni and Lombardi Neto, 2012). The information plane for K was created based on a pedological map of the state of Maranhão (IBGE, 2007), on which the soil classes were correlated with erodibility values found in the scientific literature (Table 1).
The recommendation proposed by Bertoni and Lombardi Neto (2012) was used for C and P (management and conservation practices), combining the two factors on a single information plane denominated CP. The CP values were the only information planes of the USLE to vary among the years 2005, 2010 and 2015, enabling the evaluation of erosive processes in the sub-basins of the Itapecuru River over a ten-year period. For this purpose, images from the Landsat-5 Thematic Mapper satellite for 2005, 2010 and 2015 were acquired from the Brazilian National Space Research Institute (INPE, 2015) and analyzed. The following land use/coverage classes were used for the mapping: human occupation (high, medium and low), vegetation (high, medium and low), exposed soil and agriculture. The CP values were attributed based on adaptations of the studies by Stein et al. (1987), Brito et al. (1998), Tomazoni et al. (2005) and Ribeiro and Alves (2007) (Table 2). The information planes of slope length (L) and steepness (S) were obtained separately but subsequently combined to facilitate the application in the USLE, thereby composing a topographic information plane (LS) using the equation proposed by Bertoni and Lombardi Neto (2012): LS = 0.00984 L^0.63 S^1.18, in which LS is the topographic factor, L is the slope length in meters and S is the steepness in degrees.
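Transcribed directly, the expression above can be evaluated as in the sketch below; note that the study itself ultimately adopted the class-mean LS values of Kok et al. (1995) listed in Table 3 rather than per-pixel evaluations of this formula.

```python
# Direct transcription of LS = 0.00984 * L**0.63 * S**1.18 as defined above
# (L in meters, S in degrees, following the text's own definitions).
def ls_factor(slope_length_m: float, steepness_deg: float) -> float:
    return 0.00984 * slope_length_m ** 0.63 * steepness_deg ** 1.18

print(round(ls_factor(100.0, 5.0), 2))  # ~1.20 for a 100 m slope at 5 degrees
```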
LS is an important factor in the equation, as greater slope length and steepness lead to soil particles being carried at greater velocity and with greater force (Baptista, 1997). The percentages of the steepness classes were obtained through the creation of a clinographic map of the sub-basins. The LS values used for the different steepness classes were attributed based on the study by Kok et al. (1995) (Table 3).
Table 3. Mean LS values per steepness class.
Steepness class | LS factor
0-5° | 0.5
5-15° | 3.5
15-30° | 9
>30° | 16
Source: Kok et al. (1995)
After the determination of the information planes for rainfall erosivity (R), soil erodibility (K), management and conservation practices (CP) and the topographic factor (LS), all components of the USLE were converted into raster files, with each pixel described in cells dimensioned as quadrants measuring 20 x 20 meters. For each quadrant, soil loss (A) was estimated by multiplying the factors that compose the equation using the Raster Calculator tool of the ArcGIS program, version 10, of the Environmental Systems Research Institute.
The results of the USLE were categorized into different levels of susceptibility to erosive processes, following the recommendation proposed by Ribeiro and Alves (2007) (Table 4). The results of the mapping were converted into percentage values to facilitate the identification of the changes that occurred in the sub-basins of the lower Itapecuru River between 2005 and 2015.
Results and Discussion
Rainfall erosivity (R): R values ranged from 0.0057708 MJ mm ha⁻¹ year⁻¹ in October to 3161.046 MJ mm ha⁻¹ year⁻¹ in April. Mean annual R was 11,314.5 MJ mm ha⁻¹ year⁻¹, which was the value incorporated into the raster for the estimation of the USLE. As the R potential is related to the incidence of precipitation, this component of the erosive process exerts greater influence in rainier months and its effects are abruptly attenuated in dry months (Figure 3: monthly erosivity of the lower Itapecuru River, 1975-2015).
According to Oliveira et al. (2012), R values in Brazil range from 1672 to 22,452 MJ mm ha⁻¹ year⁻¹. In the mapping conducted by these authors, the zone corresponding to the lower Itapecuru River basin had R values between 10,000 and 12,000 MJ mm ha⁻¹ year⁻¹, which is in agreement with the findings of the present investigation. Soares (2010) reports a similar trend in the Bacanga River basin, located 40 km from the study area, in which the annual R value was 10,714.05 MJ mm ha⁻¹ year⁻¹, with the highest potential in April (3,613.59 MJ mm ha⁻¹ year⁻¹) and the lowest in October (0.007335 MJ mm ha⁻¹ year⁻¹).
According to Machado et al. (2014), high annual precipitation does not necessarily lead to greater erosivity, as the capacity of water to cause erosion is linked to the concentration of rains in a given period of the year. Rain occurs in a concentrated manner at specific times of the year in tropical regions (Guerra, 2012), which aggravates erosive processes in these periods. The rainfall data for the lower Itapecuru River basin indicate that greater concentrations of rain occur in March, April and May (Figure 3), and the potential for erosivity to result in higher A values is therefore greater in this period.
According to Hoyos, Waylen and Jaramillo (2005), high R values are expected in the tropics due to the kinetic energy and intensity of convective rains. Thus, it is important to understand precipitation dynamics, since rainfall is the driving force of erosion (Oliveira et al., 2012). In the lower Itapecuru River basin, the Intertropical Convergence Zone is the main climatic factor responsible for the incidence of rainfall; understanding the behavior of this meteorological system in detail is therefore fundamental to the precise modeling of erosive processes in the region.
Soil erodibility (K)
The spatialization of soil erodibility (K) is directly related to the pedological mapping of the lower Itapecuru River basin, since this variable is an intrinsic property of each soil class. According to the Brazilian Institute of Geography and Statistics (IBGE, 2007), the main groups of soils in the sub-basins are Plinthosols, Ultisols, Latosols and Gleysols (Figure 4).
The predominant soil types are Plinthosols and Ultisols, which jointly cover an area of 330.48 km², totaling 79% of the surface of the sub-basins of the lower Itapecuru River. Latosols and Neosols represent 14% and 7% of the mapped areas, respectively (Table 5). Following the pedological survey, K was calculated by crossing the soil types of the sub-basins with K values extracted from the literature. Figure 5 displays the information plane for erodibility.
As different soil types have different degrees of proneness to erosive processes, understanding the K factor is fundamental. According to Bertoni and Lombardi Neto (2012, p. 61), "erosion is not the same in all soils. Physical properties, especially structure, texture, permeability and density, as well as chemical and biological characteristics, exert different influences on erosion."
Plinthosol has the greatest erodibility and covers the largest portion of the sub-basins of the lower Itapecuru River (57%); hence, such areas suffer the greatest effects of erosive processes under adverse conditions (Table 5). SWD 1 and SWD 2 have greater natural resistance to erosion due to the predominance of Latosols, which have a low K value in the study area. Another important aspect is the need to calibrate K factors for the lower Itapecuru River basin: although the use of estimates in the composition of the USLE is valid and widely described in the literature, the identification of K values on a local scale enables a more detailed understanding of erosive processes. Topographic factor (LS): According to Minella et al. (2010), with the techniques available in geographic information systems and the ease of obtaining digital elevation models, it is possible to estimate the LS factor in a less labor-intensive manner, taking into account geomorphological features, which are determinants of hydrological processes. The determination of the LS factor in the present study was based on Kok et al. (1995), who report mean steepness values per class as a function of established relationships between watershed length and gradient.
A flat, smooth relief predominates, with slopes ranging from 0 to 5° extending over 93% of the sub-basins (329.3 km²). However, SWD 6, SWD 7, SWD 9 and SWD 10 have higher percentages of slopes between 5 and 15° (Table 6 and Figure 6) and therefore have the potential to exhibit a greater influence of the relief on the intensity of erosion. According to Coutinho et al. (2014, p. 6), "areas with steeper slopes can generate a greater outflow velocity, thereby reducing the volume of water stored in the soil and subjecting the basin to degradation processes due to erosion." Thus, areas with steeper slopes should be prioritized in soil conservation actions aimed at preserving vegetation. Figure 7 displays the information plane for the topographic factor of the USLE.
Figure 5 displays the information plane of soil erodibility (t ha MJ⁻¹ mm⁻¹) for the calculation of the USLE in the sub-basins of the lower Itapecuru River. CP values were attributed to each class mapped in the sub-basins based on the land use/coverage patterns in the area investigated (Figure 8). These values served as the basis for the information plane of the CP factor (Figure 9). Tables 7, 8 and 9 display the percentages of land use/coverage classes for 2005, 2010 and 2015, respectively. The high and medium vegetation classes were predominant, covering 77.76% of the surfaces of the sub-basins in 2005, increasing to 80.5% in 2010 and dropping to 78.6% in 2015. Low vegetation accounted for 4.85% of the land coverage in 2005, increasing to 5.81% in 2010 and 7.62% in 2015. The largest changes in vegetation occurred in SWD 1 and SWD 2.
Exposed soil accounted for 13.14% of the study area in 2005; the largest areas were found in SWD 6, SWD 2 and SWD 10. In 2010, the total area of exposed soil diminished to 9.33%, but the inverse occurred in SWD 2, where the area increased from 9.27% to 29.33%. In 2015, the main changes occurred in SWD 1 and SWD 4, with an increase in exposed soil accompanied by a reduction in areas of vegetation.
Low occupation percentages were found in the three reference years, totaling 2.20% of the surface of the sub-basins in 2005, 2.71% in 2010 and 2.94% in 2015. The largest proportions in all three years were found in SWD 1 and SWD 10 due to their connections with the main expansion areas of the municipalities of Bacabeira and Rosário.
The percentages for agriculture were the lowest among the land use/coverage classes, corresponding to 0.41% in 2005, 0.48% in 2010 and 0.29% in 2015. The pattern evidenced for agriculture was characterized by small polygons associated with areas of high and medium vegetation and distributed randomly throughout the surfaces of the sub-basins, suggesting that the activity was developed merely as a form of subsistence.
As the land use/coverage pattern, which represents the CP factor, is one of the main agents that can lead to a short-term increase or reduction in erosive processes, it is evident that, from the standpoint of the conservation of soil and water resources, the maintenance of native vegetation constitutes a priority action in the environmental management of the sub-basins of the lower Itapecuru River.
According to Mohammad and Adam (2010), studies under different environmental conditions have demonstrated the positive effect of vegetal coverage on the reduction of erosion, as forests diminish the risk of surface runoff and soil loss, whereas land cultivation and deforestation create conditions favorable to erosion. Pacheco et al. (2014) reached the same conclusion after conducting an experimental study in a small river basin located in northern Portugal, where the adequate use of land reduced soil loss by as much as 86% in comparison to areas with inadequate land use.
Soil loss due to sheet erosion (A)
Tables 10, 11 and 12 display the percentages of the susceptibility classes regarding soil loss due to sheet erosion in 2005, 2010 and 2015. Figure 10 shows the final maps of the erosive potential of the sub-basins in 2005, 2010 and 2015, created from the interpolation of the information planes for rainfall erosivity, soil erodibility, the topographic factor and soil use/conservation.
The very low category was the predominant class based on the USLE, with soil loss of up to 1 t ha⁻¹ year⁻¹ due to sheet erosion. This class accounted for 80.73% (339.02 km²) of the surface of the sub-basins in 2005, 82.39% (347.39 km²) in 2010 and 81.29% (342.76 km²) in 2015. These results may be attributed to the predominance of areas with gentle slopes and the significant presence of arboreal vegetation in the sub-basins. The low (1-10 t ha⁻¹ year⁻¹) and moderate to strong (100-500 t ha⁻¹ year⁻¹) classes were also dominant. The low class accounted for 4.69% of the surface in 2005, 6.33% in 2010 and 7.43% in 2015. Areas with moderate to strong soil loss totaled 10.24% in 2005, 6.0% in 2010 and 7.53% in 2015. The moderate class (50-100 t ha⁻¹ year⁻¹) accounted for 2.75% of the total study area in 2005, 3.81% in 2010 and 2.49% in 2015. The strong and very strong classes corresponded to the smallest areas of the sub-basins.
In the study by Soares (2010), in which USLE values were estimated for two sub-basins located in the rural zone of the city of São Luís, 35 km from the lower Itapecuru River, the main soil loss classes were also very low and low.
However, that author found an increase in the erosive potential between 1976 and 2008, attributing the main changes to disorderly land occupation and a reduction in areas with vegetal coverage, which led to silting and changes in the quality of bodies of water. Lopes et al. (2011) found a similar pattern in the sub-basin of Varjota Creek in the state of Ceará, Brazil (70.73 km²), in which 74% of the area had soil loss of less than 11 t ha⁻¹ year⁻¹, corresponding to the flatter and/or more vegetated areas. The authors found that the areas with greater erosive potential were along the drainage lines of the creeks and in degraded regions.
The main changes between 2005 and 2015 occurred in SWD 1 and SWD 2 due to the increase in areas of greater erosive potential. Zones with soil loss of less than 1 t ha⁻¹ year⁻¹ diminished from 65.31% to 56.18% in SWD 1 and from 72.58% to 59% in SWD 2, representing losses of 9.14 km² and 13.57 km², respectively, of areas with lower erosive potential. The reduction in green areas in these sub-basins, accompanied by the increase in areas of exposed soil in the rainy season, contributed to the increase in soil loss due to sheet erosion.
Another aspect that merits attention is the increase in areas of the moderate class in SWD 1 and of the moderate to strong class in SWD 2. In SWD 1, areas with soil loss between 50 and 100 t ha⁻¹ year⁻¹ increased from 12.12% in 2005 to 17.23% in 2015. In SWD 2, the area affected by more intense erosive processes went from 11.43% in 2005 to 26.96% in 2015. These increases corresponded to 2.0 km² and 8.33 km² in SWD 1 and SWD 2, respectively. In contrast, no substantial changes in erosion patterns were found in the other sub-basins within the study period.
The changes in the CP factor were related to changes in the landscape pattern of the sub-basins, especially the excavation activities in SWD 1 and SWD 2 for the implantation of a petrochemical enterprise. The effects of environmental tensors on the erosive potential of soils are reaffirmed by Ruthes et al. (2014, p. 1100): "Erosion is the result of the action of diverse phenomena that alter the normal conditions of a river basin. The uncontrolled artificialization of the environment is the major factor that accelerates this process, as the removal of vegetal coverage for the establishment of croplands, road construction, excavation activities, waterway constructions, etc. contributes decisively to the greater disaggregation and, consequently, greater transport of solid particles." A reduction in the water quality of the creeks is another consequence of the potentiation of soil loss in SWD 1 and SWD 2. According to the FSADU (2014), the excavation activities of the petrochemical enterprise caused changes in the turbidity, total suspended solids, dissolved solids, true color and transparency of the bodies of water in these sub-basins, especially in the rainy season. Such changes are indicators of an increase in particulate and dissolved matter due to sheet erosion of the soil, a process directly related to changes in the landscape pattern (CP factor).
The use of the USLE as an environmental diagnostic and predictive tool for areas with greater erosive potential in sub-basins is of fundamental importance to the conservation of soil and water resources. The implementation of management measures in areas of risk can help avoid erosive processes as well as minimize the costs of environmental recovery. The present findings underscore the need for actions directed at maintaining green areas, establishing orderly development, reforesting degraded areas and protecting areas in which the steepness of the slopes is greater than 15 degrees.
Conclusion
The mean erosivity observed (11,314.5 MJ mm ha⁻¹ year⁻¹) demonstrates that the study area has a high erosive potential due to the effects of rainfall. March and April are the months that contribute most to annual erosivity, whereas September and October contribute least. The variability between the rainy and dry seasons demonstrates the need to determine the existing relationships between rainfall patterns (volume and intensity) and erosivity in the region. A more detailed R model requires the installation of meteorological stations in the lower Itapecuru River basin to allow a diagnosis on a finer scale.
The pedological survey indicated the presence of Plinthosols, Ultisols, Latosols and Neosols in the sub-basins. Areas deprived of vegetation over soils with greater erodibility (K factor) should be prioritized in conservation measures, as such areas are more susceptible to the effects of sheet erosion.
The topographic characteristics demonstrate the predominance of slopes between 0 and 5°, which minimizes the susceptibility to soil loss due to sheet erosion, as demonstrated by the low LS values. However, this aspect does not reduce the need to preserve green areas, as erosive processes are triggered by the combination of the other factors of the Universal Soil Loss Equation. Green areas were predominant in all three reference years (2005, 2010 and 2015). However, these areas were replaced by exposed soil and occupied areas in sub-basins 1 and 2, thereby potentiating soil loss. Areas with vegetal coverage had lower CP values, indicating that vegetal coverage is the best way to control erosion, especially in areas at risk of degradation.
The "very low" sheet erosion class predominated in the period investigated, with an annual loss of less than 1 t ha⁻¹ year⁻¹. However, sub-basins 1 and 2 exhibited increases in the areas at moderate and moderate to strong risk of sheet erosion (losses between 50 and 500 t ha⁻¹ year⁻¹) caused by changes in landscape patterns, as demonstrated by the CP values.
The present findings are significant to environmental planning and the prioritization of environmental conservation actions in sub-basins.
The diagnosis of the erosive potential with the aid of the Universal Soil Loss Equation generated important data to be applied in measures directed at the sustainability of environmental resources in the lower Itapecuru River basin.
Figure 1. Location of Itapecuru River basin and altitudinal division into upper, middle and lower courses.
Figure 2. Location of sub-basins of lower Itapecuru River.
Figure 7. Information plane of topographic factor (LS) for calculation of USLE in sub-basins of lower Itapecuru River.
Figure 8. Map of land use/coverage in sub-basins of lower Itapecuru River in 2005, 2010 and 2015.
Figure 9. Information plane of CP factor for calculation of USLE of sub-basins of lower Itapecuru River in 2005, 2010 and 2015.
Figure 10. Mapping of erosive potential of sub-basins of lower Itapecuru River in 2005, 2010 and 2015.
Table 1. Soil typologies in lower Itapecuru River basin, erodibility values and source of data.
Table 2. Classes of land use and coverage, CP values and source of data.
Table 4. Classes of soil loss due to sheet erosion.
Table 5. Distribution of soil classes in each sub-basin (SWD) of lower Itapecuru River.
Table 6. Distribution of steepness classes in each sub-basin (SWD) of lower Itapecuru River.
Table 7. Quantitative percentage (%) of land use/coverage classes in sub-basins of lower Itapecuru River for 2005.
Table 8. Quantitative percentage (%) of land use/coverage classes in sub-basins of lower Itapecuru River for 2010.
Table 9. Quantitative percentage (%) of land use/coverage classes in sub-basins of lower Itapecuru River for 2015.
Table 10. Classes of soil loss due to sheet erosion (%) in sub-basins of lower Itapecuru River in 2005.
Table 11. Classes of soil loss due to sheet erosion (%) in sub-basins of lower Itapecuru River in 2010.
Table 12. Classes of soil loss due to sheet erosion (%) in sub-basins of lower Itapecuru River in 2015. | 2018-12-07T17:06:24.489Z | 2017-05-11T00:00:00.000 | {
"year": 2017,
"sha1": "c46688d51e994a7fdc44df4770578ec6e1d35bc7",
"oa_license": "CCBY",
"oa_url": "https://periodicos.ufpe.br/revistas/rbgfe/article/download/234031/27464",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c46688d51e994a7fdc44df4770578ec6e1d35bc7",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geography"
]
} |
212559393 | pes2o/s2orc | v3-fos-license | DNA computing based stream cipher for internet of things using MQTT protocol
ABSTRACT
INTRODUCTION
All the devices present in different places around us, such as houses, buildings, cities, and even our bodies, can, from a data perspective, sense or generate data for various applications of our daily life such as health care, environmental monitoring, the military and industry. When these devices communicate and share information among themselves over a distributed area through the internet, they constitute an Internet of Things (IoT) application. Hence, an IoT device has the ability to communicate, upload, and download information through the internet without human intervention; in other words, the devices are capable of thinking and making decisions. Along with the rapid development of IoT applications, security in IoT is a crucial issue, as it involves threats that aim to exploit possible weaknesses [1,2]. In IoT, security is divided into two parts: first, an authentication and authorization mechanism is required to ensure the security of the communication network, protecting it from any intruder device that could send or receive information in the network; secondly, the information itself should also be secured by means of encryption techniques. Thus, securing device data is possible on the basis of different cryptographic algorithms. Cryptography is mainly used to secure information by sharing a secret key among different devices. Two types of key are available: symmetric and asymmetric [3,4].
In symmetric cryptography, the same key is used on both the sender and receiver sides, while in asymmetric cryptography two different keys are used. Since IoT deals with real-time data, which can include critical measurements, the size of the data is an important metric too. For some applications, such as environmental monitoring, the sampling time is not very critical, since data can be collected every minute or every few hours, whereas in traffic monitoring or healthcare it is. Uploading or downloading a small amount of data does not require very high internet bandwidth, and vice versa. Cryptography may change the data in type or size, depending on the algorithm used, so that an intruder cannot identify the original data. Therefore, the algorithm used for data encryption in IoT should be chosen carefully so that it does not overload the bandwidth or affect the real-time application, which could lead to poor device performance. The typical security requirements of an IoT system can be classified into the following terms: access control, authentication, privacy protection, communication security, data integrity and confidentiality, and availability [5].
LITERATURE SURVEY
Security is a critical issue in IoT applications, since the data are available over the internet; therefore, more development is required in this field of research. Until now, there has been no clear security platform for IoT. Ibrahim et al. [6] propose a DNA computing encryption algorithm that uses amino acid coding to eliminate the one-time pad limitation. Aieh et al. [7] propose a deoxyribonucleic acid (DNA)-based key sharing technique using the Diffie-Hellman symmetric cryptography algorithm. An encryption technique has also been proposed by Anwar et al. [8], which uses symmetric key exchange, DNA computing hybridization, and the one-time pad technique. Mektoubi et al. [9] propose an MQTT-protocol-based scheme for the secured communication of data and key exchange in IoT networks. Bhawiyuga et al. [10] propose an authentication token for the MQTT protocol that has been implemented on a constrained device. Begum et al. [11] propose a hybrid cryptography algorithm using the One Time Pad, RSA, and DNA computing for text hiding and protection against attackers. Huang et al. [12] propose a publish-subscribe pattern to preserve privacy in fog computing using the CoAP application protocol. Andy et al. [13] discuss an adequate implementation of IoT security mechanisms. Wardana and Perdana [14] propose an access control security system for IoT that uses the MQTT protocol for communication and a fog computing architecture.
IOT PROTOCOLS
IoT protocols are divided into four basic categories: application, service discovery, infrastructure, and other influential protocols, summarized in Table 1. Among the application protocols is an open-standard application layer protocol for the IoT focused on message-oriented environments; it supports reliable communication via message delivery guarantee primitives, including at-most-once, at-least-once and exactly-once delivery [15,16].
MQTT PROTOCOL
The Message Queuing Telemetry Transport (MQTT) protocol is a machine-to-machine (M2M) protocol that runs over TCP/IP. It uses a publish/subscribe model between IoT nodes. A broker (cloud server) is the station to which publisher nodes send their messages on a specific topic and through which client nodes follow these topics. Nodes may subscribe to some topics and not to others, and other nodes can publish to a specific topic. If, for instance, a node publishes to a topic, then every node subscribed to that topic receives the message, while nodes not subscribed to that topic do not. In this work, all messages transferred between IoT nodes have been encrypted on the publisher side and decrypted on the subscriber side using the One Time Pad (OTP) technique and DNA computing. Figure 1 shows a schematic diagram of the MQTT protocol. Figure 1. Schematic diagram of the MQTT protocol.
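As an illustration of the publish/subscribe model, the sketch below wires a publisher and a subscriber to a broker with the paho-mqtt client; the broker address and topic are hypothetical, and the interface shown is the classic paho-mqtt 1.x API (newer 2.x releases additionally require a callback API version argument).

```python
# Publish/subscribe sketch using the paho-mqtt 1.x client API (assumed).
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical broker (cloud server)
TOPIC = "sensors/temperature"   # hypothetical topic

def on_message(client, userdata, msg):
    # Every node subscribed to TOPIC receives messages published to it;
    # in the proposed system the payload would be decrypted here.
    print(f"{msg.topic}: {msg.payload.decode()}")

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC)
subscriber.loop_start()

publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, "23.5")  # the cipher text would be published here
```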
One time pad
It is one of the most secure encryption techniques, in which each key is used only once for each message: every single piece of data is encrypted individually with a unique key. The disadvantage of this powerful method is that it requires a huge number of keys; therefore, a Pseudo Random Number Generator (PRNG) can be used to generate the keys, although key repetition then becomes a problem [20]. In this work, a Linear Feedback Shift Register (LFSR) has been used to generate a series of keys according to the required polynomial and number of bits. These keys are joined to generate a single key with a length equal to the length (in binary) of the original message. To improve the strength of the encryption algorithm, DNA computing has been used to encode the messages. The one-time pad technique is easy to implement through the following encryption steps [21]. The original plain text message is M = m1, m2, m3, ..., mn. The key sequence generated by the PRNG is K = k1, k2, k3, ..., kn. The cipher text is then obtained as ci = mi ⊕ ki, and to decrypt the cipher on the receiver side, the inverse function mi = ci ⊕ ki is used.
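The XOR relations above can be exercised in a few lines of Python; the sketch below uses a truly random pad for brevity, whereas the proposed system derives the pad from an LFSR.

```python
# Minimal one-time-pad sketch over bytes: encryption and decryption are the
# same XOR operation, mirroring ci = mi XOR ki and mi = ci XOR ki above.
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "the pad must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"hello world"
pad = secrets.token_bytes(len(message))  # random pad; the paper uses an LFSR instead
cipher = otp(message, pad)
assert otp(cipher, pad) == message       # decryption recovers the plain text
```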
Genomic based cryptography
To improve the strength of the encryption, DNA computing has been implemented. Deoxyribonucleic Acid (DNA) is a biochemical macromolecule that contains the genetic information necessary for living beings. A DNA molecule consists of a double-stranded nucleotide chain formed by two twisted single-stranded DNA chains, hydrogen-bonded together between the base pairs A-T and G-C; the double-helix structure is thus configured by two single strands. Four kinds of bases are found in the strands: Adenine (A), Guanine (G), Thymine (T) and Cytosine (C), as shown in Figure 2. DNA-based cryptography algorithms have shown satisfactory results in terms of security and performance; key features of DNA, such as its large storage capacity and uniqueness, provide additional security to DNA-based cryptography algorithms [22,23]. Tables 2 and 3 show the DNA addition and subtraction rules, where the addition rules are used in the encryption process and the subtraction rules in the decryption process. Table 2. Addition operation for the DNA sequence. Table 3. Subtraction operation for the DNA sequence.
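Because the printed contents of Tables 2 and 3 were not preserved in this extraction, the sketch below assumes the convention common in DNA-computing ciphers of treating addition and subtraction as arithmetic modulo 4 on the paper's 2-bit encoding (A=00, T=01, C=10, G=11); the exact table entries of the original should be checked against the paper.

```python
# Assumed DNA addition/subtraction rules as arithmetic modulo 4 on the
# paper's 2-bit encoding (A=00, T=01, C=10, G=11).
ENC = {"A": 0, "T": 1, "C": 2, "G": 3}
DEC = {v: k for k, v in ENC.items()}

def dna_add(x: str, y: str) -> str:
    # Encryption step: Table 2 (addition rules).
    return "".join(DEC[(ENC[a] + ENC[b]) % 4] for a, b in zip(x, y))

def dna_sub(x: str, y: str) -> str:
    # Decryption step: Table 3 (subtraction rules) inverts the addition.
    return "".join(DEC[(ENC[a] - ENC[b]) % 4] for a, b in zip(x, y))

assert dna_sub(dna_add("ATCG", "GGTA"), "GGTA") == "ATCG"
```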
Linear feedback shift register (LFSR)
A random number generator has been used to produce a large number of keys. An n-length LFSR consists of n flip-flops 0, 1, 2, ..., n-1, each of which can store a single bit. Figure 3 shows a 16-bit LFSR whose characteristic polynomial is x^16 + x^15 + x^13 + x^4 + 1 [24,25]. The keys generated by the LFSR are 16 bits long at each iteration; when the register returns to the seed value, the keys repeat. The algorithm that generates the key sequence is applied first, and another algorithm is then used to combine these 16-bit keys into a single binary key with the same size as the original binary plain text message (after converting it into its ASCII code values). By doing so, each message receives a key value that differs from those of other messages, depending on its size (bit length).
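A software sketch of this generator is given below; the tap positions follow one common Fibonacci-LFSR convention for the polynomial x^16 + x^15 + x^13 + x^4 + 1, and the key-combination step simply concatenates successive states and trims to the message length, as described above.

```python
# 16-bit Fibonacci LFSR sketch for the characteristic polynomial
# x^16 + x^15 + x^13 + x^4 + 1 (taps at stages 16, 15, 13 and 4;
# one common convention). The seed must be nonzero.
def lfsr16(seed: int):
    state = seed & 0xFFFF
    while True:
        # XOR the tapped bits (stages 16, 15, 13, 4) to form the feedback bit.
        fb = ((state >> 15) ^ (state >> 14) ^ (state >> 12) ^ (state >> 3)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        yield state

def keystream(seed: int, nbits: int) -> str:
    # Concatenate successive 16-bit states, then trim to the message length,
    # as in the key-combination step described above.
    gen = lfsr16(seed)
    bits = ""
    while len(bits) < nbits:
        bits += format(next(gen), "016b")
    return bits[:nbits]

print(keystream(0xACE1, 32))  # first 32 key bits for a hypothetical seed
```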
PROPOSED ALGORITHM
In this work, the messages transferred between IoT nodes through the MQTT protocol are encrypted and decrypted using the one time pad and DNA computing techniques. Messages (plain text) generated by the publisher node are encrypted, and the receiver node (subscriber) decrypts the message to retrieve the original message. A schematic diagram of the proposed system is shown in Figure 4.
RESULTS
The encryption works in the following steps:

1) Convert the plain text into a binary form. For example, the message "hello world" is converted to:
1000000000001011100000000000101101000000000001010100000000000110100000000000100101000000000011001010000000001110010100000000111100101000000011110010100000010111100101

2) Encode the binary sequence message such that each two bits denote a genome, using the encoding A=00, T=01, C=10, G=11. Then the DNA message is:
AAAATCCAAAAATCTTAAAATCGAAAAATCGAAAAATCGGAAAAACAAAAAATGTGAAAATCGGAAAATGACAAAATCGAAAAATCTA

3) Generate a pseudo-random key sequence using the 16-bit LFSR, which produces an array whose elements are 16-bit binary values.
In this step, an algorithm is used to combine these numbers to generate a binary sequence with a length equal to the length of the original binary plain text message:
1000000000001011100000000000101101000000000001010100000000000110100000000000100101000000000011001010000000001110010100000000111100101000000011110010100000010111100101

4) The binary key message is also encoded into a genome sequence in the same manner as in step 2:
CAAAAACGCAAAAACGTAAAAATTTAAAAATCCAAAAACTTAAAAAGACCAAAAGCTTAAAAGGACCAAAGGACCAATTGCTTAATCG

5) Using Table 2 (addition rules), the resulting DNA sequence is:
CAAATCAGCAAATCGCTAAATCCTTAAATCCCCAAATCTCTAAAACGACCAATGCTTTAAT

6) A new binary key is generated using the LFSR, with length equal to that of the DNA sequence generated in the steps above, such that if any bit in this key is 0 then the corresponding genome is inverted (A=T & G=C):
11011100000000000110110000000000011010000000000011100000000000110001000000000001000

7) The final sequence is the cipher message that is sent by the publisher node:
CATATCTCGTTTAGCGAAATTCGAATTTAGGGGAATTGAGATTTTGCTCCATACGAAATTA

The decryption process is the reverse of the encryption, except that Table 3 (subtraction rules) is used instead of Table 2 (addition rules).
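The inversion rule in step 6 can be sketched as follows; a minimal illustration of the stated rule (our code, with a toy example), where a 0 key bit swaps A with T and G with C, and a 1 bit leaves the base unchanged:

    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def invert_under_key(dna, key_bits):
        """Invert each base whose corresponding key bit is 0; keep it when the bit is 1."""
        return "".join(base if bit == "1" else COMPLEMENT[base]
                       for base, bit in zip(dna, key_bits))

    # A/1 -> A, C/0 -> G, G/1 -> G, T/0 -> A
    assert invert_under_key("ACGT", "1010") == "AGGA"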
ALGORITHM IMPLEMENTATION RESULTS
The implementation results of the proposed algorithm are shown in Figure 5.
CONCLUSION
Information security is one of the riskiest and most challenging issues in IoT applications, and it requires more attention from researchers. In this work a multi-level data encryption scheme has been applied: the plain text message is encoded into a DNA sequence, and DNA computing rules are then applied between the coded DNA message and the encoded DNA key. Another key sequence is generated by the LFSR with a different seed value, this time with length equal to the length of the encrypted DNA message, to generate the cipher DNA message. The final algorithm shows that the size of the cipher message is twice that of the original message. | 2020-01-30T09:15:32.600Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "8d4dc8fa70542452cdb3c4e74ff27cc7e637316a",
"oa_license": "CCBYSA",
"oa_url": "http://ijece.iaescore.com/index.php/IJECE/article/download/21070/13591",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "05e2f622abc9cdf08480d0c3dc79885d9301fec0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
3580012 | pes2o/s2orc | v3-fos-license | Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks
Syntactic rules in natural language typically need to make reference to hierarchical sentence structure. However, the simple examples that language learners receive are often equally compatible with linear rules. Children consistently ignore these linear explanations and settle instead on the correct hierarchical one. This fact has motivated the proposal that the learner's hypothesis space is constrained to include only hierarchical rules. We examine this proposal using recurrent neural networks (RNNs), which are not constrained in such a way. We simulate the acquisition of question formation, a hierarchical transformation, in a fragment of English. We find that some RNN architectures tend to learn the hierarchical rule, suggesting that hierarchical cues within the language, combined with the implicit architectural biases inherent in certain RNNs, may be sufficient to induce hierarchical generalizations. The likelihood of acquiring the hierarchical generalization increased when the language included an additional cue to hierarchy in the form of subject-verb agreement, underscoring the role of cues to hierarchy in the learner's input.
Introduction
Speakers of a language can generalize from finite linguistic experience to sentences they have never heard or produced before. Although there are many possible ways to generalize from a set of sentences, language learners consistently choose certain generalizations over others. In the syntactic domain, learners typically learn generalizations that appeal to hierarchical structures rather than linear order. An influential explanation for this fact is that learners never entertain hypotheses based on linear order: they are innately constrained to assume that syntactic rules are structure-sensitive (Chomsky, 1980).
To test whether a structure-sensitivity constraint is necessary to account for the generalizations that human language learners make, we use recurrent neural networks (RNNs), which are not equipped with such an explicit pre-existing hierarchical constraint. (Footnote: In fact, RNNs are not just capable of using non-hierarchical structures but appear to be biased in favor of linear structures over hierarchical ones; Christiansen & Chater, 1999.) We simulate the acquisition of English subject-auxiliary inversion, the transformation that turns a declarative statement such as (1a) into a question such as (1b):

(1) a. My walrus can giggle.
b. Can my walrus giggle?

Two salient rules are consistent with examples such as (1):

Hierarchical rule: Move the main clause auxiliary to the front of the sentence.
Linear rule: Move the linearly first auxiliary to the front of the sentence.

While both rules account for common cases such as (1), they make different predictions for complex sentences such as (2):

(2) My walrus that will eat can giggle.
Specifically, the hierarchical rule predicts the correct question (3a), while the linear rule predicts the incorrect question (3b):

(3) a. Can my walrus that will eat giggle?
b. * Will my walrus that eat can giggle?
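For concreteness, the contrast between the two candidate rules can be written down directly. This is an illustrative sketch of ours, not code from this paper: relative clauses are marked with explicit bracket tokens so the hierarchical rule can be stated without a parser, and the auxiliary inventory matches the fragment described below:

    AUX = {"can", "could", "will", "would"}

    def linear_rule(tokens):
        """Front the linearly first auxiliary."""
        i = next(k for k, w in enumerate(tokens) if w in AUX)
        return [tokens[i]] + tokens[:i] + tokens[i + 1:]

    def hierarchical_rule(tokens):
        """Front the first auxiliary at bracket depth 0 (the main clause auxiliary)."""
        depth = 0
        for k, w in enumerate(tokens):
            if w == "[":
                depth += 1
            elif w == "]":
                depth -= 1
            elif w in AUX and depth == 0:
                return [tokens[k]] + tokens[:k] + tokens[k + 1:]

    s = "my walrus [ that will eat ] can giggle".split()
    strip = lambda ts: " ".join(w for w in ts if w not in ("[", "]"))
    print(strip(linear_rule(s)))        # will my walrus that eat can giggle  (ungrammatical, cf. (3b))
    print(strip(hierarchical_rule(s)))  # can my walrus that will eat giggle  (correct, cf. (3a))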
Although such examples disambiguate the two hypotheses, Chomsky (1971) argues that they are highly infrequent, and thus children may never encounter them. Without these critical examples, according to Chomsky, children can only acquire the hierarchical rule by drawing on an innate constraint stipulating that syntactic rules must appeal to hierarchy. This argument, known as the argument from the poverty of the stimulus (Chomsky, 1980), has been challenged in a number of ways. Some have disputed the assumption that children never encounter critical cases such as (3a) (Pullum & Scholz, 2002). Others have questioned the assumption that an explicit hierarchical constraint is necessary for hierarchical generalization. One such approach has been to argue that the hierarchical rule can fall out of weaker or non-syntactic structural biases. For example, Perfors, Tenenbaum, and Regier (2011) showed that a learner whose task is to choose between an innately available hierarchical representation and an innately available linear representation will choose the hierarchical one; and Fitz and Chang (2017) argued that the hierarchical structure of questions is rooted in innately available structured semantic representations.
Table 1: Examples for each combination of a sentence type and a task. RC stands for "relative clause." All cells belong to the training and test sets except the question formation task for sentences with an RC on the subject, which forms the generalization set.

No RC, IDENT:
Input: the newt can confuse my yak by the zebra .
Output: the newt can confuse my yak by the zebra .

No RC, QUEST:
Input: the newt can confuse my yak by the zebra .
Output: can the newt confuse my yak by the zebra ?

RC on object, IDENT:
Input: the newt can confuse my yak who will sleep .
Output: the newt can confuse my yak who will sleep .

RC on object, QUEST:
Input: the newt can confuse my yak who will sleep .
Output: can the newt confuse my yak who will sleep ?

RC on subject, IDENT:
Input: the newt who will sleep can confuse my yak .
Output: the newt who will sleep can confuse my yak .

RC on subject, QUEST (generalization set):
Input: the newt who will sleep can confuse my yak .
Output: can the newt who will sleep confuse my yak ?

A second approach has dispensed with pre-existing structural representations altogether. Lewis and Elman (2001) argued that an RNN trained to predict the next word can learn which questions are well formed, but this conclusion was convincingly called into question by Kam, Stoyneshka, Tornyova, Fodor, and Sakas (2008). The most immediate precursor to our work is Frank and Mathis (2007). Like Lewis and Elman, they used RNNs, but instead of modeling the well-formedness of the question alone, they followed the traditional framework of transformational grammar in modeling the generation of a question from a declarative sentence. (Footnote: This is a simplification; a more psychologically plausible assumption would be that questions are generated from a semantic representation shared with the declarative sentence (Fitz & Chang, 2017).) Their results were difficult to interpret because the network's generalization behavior depended heavily on the identity of the auxiliaries in the input sentence, and neither the linear hypothesis nor the hierarchical hypothesis predicts such lexically dependent behavior. We significantly expand on their experiments, taking advantage of recent technological and architectural advances in RNNs that have shown promise in the acquisition of syntax (Linzen, Dupoux, & Goldberg, 2016).
To anticipate our results, of the six RNN architectures we explored, one of the architectures consistently learned a hierarchical generalization for question formation. This suggests that a learner's preference for hierarchy may arise from the hierarchical properties of the input, coupled with biases implicit in the network's computational architecture and learning procedure, without the need for pre-existing hierarchical constraints in the learner. We provide further evidence for the role of the hierarchical properties of the input by showing that adding syntactic agreement to the input increased the probability that a network would make hierarchical generalizations.
Experimental setup

Languages
The networks were trained on two fragments of English, each consisting of a subset of all possible declarative sentences and questions. (Footnote: The vocabulary of the fragments consisted of 66 words. The full context-free grammar characterizing the fragments, along with statistics about the generated sentences, can be found in the supplementary materials.) We refer to the first fragment as the no-agreement language. Examples of declarative sentences in this language are given in (4):

(4) a. the walrus can giggle .
b. the yak could amuse your quails by my raven .
c. the walruses that the newt will confuse can high five your peacocks .
Each noun phrase in the language had at most one modifier, either a relative clause or a prepositional phrase. Relative clauses were never embedded inside other relative clauses. Every verb was associated with one of the auxiliary verbs can, could, will, and would. Since such modals do not show agreement, any noun, whether singular or plural, was allowed to appear with any auxiliary.
The second fragment, the agreement language, was identical to the no-agreement language, except that the auxiliaries in this language were do, don't, does, and doesn't. Subjects in this language agreed with the auxiliaries of their verbs: singular subjects appeared with does or doesn't, while plural subjects appeared with do or don't. Examples of declarative sentences in the agreement language are given in (5): (5) a. the walrus does giggle .
b. the yak doesn't amuse your quails by my raven .
c. the walruses that the newt does confuse do high five your peacocks .
Both languages reused structural units; for example, the same prepositional phrases could modify both subject and object nouns. Such shared structure served as a possible cue to hierarchy because it is more efficiently represented in a hierarchical grammar than in a linear one. Subject-verb agreement in the agreement language provided an additional cue to hierarchy; in (5c), for example, do agrees with its hierarchically determined plural subject walruses even though the singular noun newt is linearly closer to it. We therefore predict that hierarchical generalizations will be more likely with the agreement language than with the no-agreement language.
Tasks
The networks were trained to perform two tasks: identity (returning the input sentence unchanged) and question formation. The task to be performed was indicated by a token at the end of the sentence: either IDENT for identity or QUEST for question formation. IDENT and QUEST served as end-of-sequence tokens in both the input and output. Table 1 provides examples of these tasks on each of the three types of sentences in the languages: sentences without relative clauses, sentences with a relative clause on the object, and sentences with a relative clause on the subject. During training we withheld the question formation task for sentences with a relative clause on the subject (the cell marked as the generalization set in Table 1); these are the only cases that directly disambiguate the linear and hierarchical hypotheses. The identity task was included in the training setup to familiarize the networks with the critical sentence type withheld from the question task; without such exposure, the networks could be justified in concluding that subjects cannot be modified by relative clauses, making it difficult to test such sentences.

Figure 1: Basic sequence-to-sequence neural network without attention.
Evaluation
We used two sets of sentences for evaluation, a test set and a generalization set. The test set consisted of novel sentences from the five non-withheld cases in Table 1. It was used to assess how well a network had learned the patterns in its training set. The generalization set consisted of sentences from the withheld case (the question formation task for sentences with relative clauses on their subjects). This set was used to assess how the networks generalized to sentence types from which they had not formed questions during training. The test and generalization sets each contained 10,000 unique sentences, and the training set contained 120,000 unique sentences.
Architectures
Here we give a very brief bird's-eye view of our architectures.
For a more precise description, including our hyperparameter values, see the supplementary materials.
For all experiments we used the sequence-to-sequence model (Botvinick & Plaut, 2006;Sutskever, Vinyals, & Le, 2014) illustrated in Figure 1. This network has two subcomponents called the encoder and the decoder, both of which are RNNs. The encoder processes the input sentence one word at a time to create a single vector representing the entire input sentence. The decoder then receives this vector (called the encoding) and, based on it, outputs one word at a time until it generates a special end-of-sequence token.
The encoder and decoder each possess a component called a recurrent unit, which governs how information flows from one time step to the next. We tested three types of recurrent units: the simple recurrent network (SRN) (Elman, 1990), the gated recurrent unit (GRU) (Cho et al., 2014), and long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997). For each type of recurrent unit, we experimented with adding attention to the decoder (Bahdanau, Cho, & Bengio, 2015); attention is a mechanism which gives the decoder access to intermediate steps of the encoding process. For each pair of an architecture and a language, we trained 100 networks with different random initializations, for a total of 1200 networks.
Test set
For the test set, all architectures except the vanilla SRN (i.e., the SRN without attention) produced over 94% of the output sentences exactly correctly (accuracy was averaged across 100 trained networks for each architecture). The highest accuracy was 99.9%, for the LSTM without attention. Using a more lenient evaluation criterion whereby the network was not penalized for replacing a word with another word of the same part of speech, the accuracy of the SRN without attention increased from 0.1% to 81%, suggesting that its main source of error was a tendency to replace words with other words of the same lexical category. This tendency is a known deficiency of SRNs (Frank & Mathis, 2007) and does not bear on our main concern of the networks' syntactic representations. Setting aside these lexical concerns, then, we conclude that all architectures were able to learn the language.
Generalization set
On the generalization set, the networks were rarely able to correctly produce the full question: only about 13% of the questions were exactly correct in the best-performing architecture (LSTM with attention). However, getting the output exactly correct is a demanding metric; the full-question accuracy can be affected by a number of errors that are not directly related to the research question of whether the network preferred a linear or hierarchical rule. Such errors include repeating or omitting words or confusing similar words. To abstract away from such extraneous errors, for the generalization set we focus on accuracy at the first word of the output. Because all examples in the generalization set involve question formation, this word is always the auxiliary that is moved to form the question, and the identity of this auxiliary is enough to differentiate the hypotheses. For example, if the input is my yak who the seal can amuse will giggle . QUEST, a hierarchically-generalizing network would choose will as the first word of the output, while a linearly-generalizing network would choose can. This analysis only disambiguates the hypotheses if the two possible auxiliaries are different, so we only considered sentences where that was the case. For the agreement language, we made the further stipulation that both auxiliaries must agree with the subject, so that the correct auxiliary could not be determined based on agreement alone.

Figure 2 gives the accuracies on this metric across the six architectures for the two different languages (individual points represent different initializations). We draw three conclusions from this figure:

1. Agreement leads to more robust hierarchical generalization: All six architectures were significantly more likely (p < 0.01) to choose the main auxiliary when trained on the agreement language than on the no-agreement language. In other words, adding hierarchical cues to the input increased the chance of learning the hierarchical generalization.

2. Initialization matters: For each architecture, accuracy often varied considerably across random initializations. This fact suggests that the architectural bias is not strong enough to reliably lead the networks to settle on the hierarchical generalization, even in GRUs with attention. From a methodological perspective, this observation highlights the importance of examining many initializations of the network before drawing qualitative conclusions about an architecture (in a particularly striking example, though the accuracy of most LSTMs with attention was low, there was one with near-perfect accuracy).

3. Different architectures perform qualitatively differently: Of the six architectures, only the GRU with attention showed a strong preference for choosing the main auxiliary instead of the linearly first auxiliary. By contrast, the vanilla GRU chose the first auxiliary nearly 100% of the time. In this case, then, attention made a qualitative difference for the generalization that was acquired. For both LSTM architectures, most random initializations led to networks that chose the first auxiliary nearly 100% of the time. Both SRN architectures showed little preference for either the main auxiliary or the linearly first auxiliary; in fact the SRNs often chose an auxiliary that was not even in the input sentence, whereas the GRUs and LSTMs almost always chose one of the auxiliaries in the input.
In the next section, we take some preliminary steps toward exploring why the architectures behaved in qualitatively different ways.
Analysis of sentence encodings
A plausible hypothesis about the differences between networks is that linearly-generalizing networks used representations that contained linearly-relevant information, whereas hierarchically-generalizing networks used representations that contained hierarchically-relevant information. To test this hypothesis, we analyzed the final hidden state of the encoder (E_6 in Figure 1), which we will refer to as the encoding of the sentence. In architectures without attention, this is the only information that the decoder has about the sentence; architectures with attention can use the intermediate encodings of sentence prefixes as well. We analyze the amount of information that these encodings contain about three properties of the input sentence: its main auxiliary, its fourth word, and the head noun of the subject (which, in the simple languages we used, was always the sentence's second word). Examples are shown in Table 2.

Main auxiliary: The main auxiliary of a sentence can appear in many different linear positions but has a consistent hierarchical position. Therefore, a network whose encodings can be used to identify sentences' main auxiliaries must contain some hierarchical information.

Fourth word: The fourth word of a sentence has a consistent role in a linear representation but not in a hierarchical one: the fourth word could be the main verb, the determiner on a prepositional object, or the auxiliary verb inside a subject relative clause. Therefore, a network whose encodings can be used to identify each sentence's fourth word must contain some information about linear order.

Subject noun/second word: The head noun of the subject is always the second word of the sentence in our languages. Thus, this word can be reliably identified either from a linear representation (as the second word) or from a hierarchical representation (as the subject noun).

Analysis: For each trained network, we trained three linear classifiers, one for each of these three properties of the sentence. Each classifier was trained to predict the word that filled the relevant role (main auxiliary, fourth word, or subject noun/second word) from the final hidden state of the encoder. Each classifier's output layer had a dimensionality equal to the number of possible classes for that classifier's task: 4 for the main auxiliary, 28 for the fourth word, or 26 for the subject noun. The classifiers were trained on a training set and tested on a withheld test set (see the supplementary materials for details). Figure 3 shows the classification results on the test set. Classifiers trained to predict the main auxiliary from the encodings produced by the SRNs with attention performed only slightly better than chance; this might explain why the SRNs with attention generalized poorly to the withheld sentence type in the question formation task. Similar classifiers trained on encodings from the other architectures did well at this task. Since the identity of the main auxiliary is the only information required to perform well on our evaluation of the networks' performance on the generalization set based on the first word produced, these results suggest that the differences in performance stem not from an inability to identify the main auxiliary but rather from a misinterpretation of the task as requiring fronting of the linearly first auxiliary.
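The probing setup just described can be made concrete with a few lines of PyTorch. This is our own illustrative reconstruction, not the paper's released code; the tensor names and toy data are placeholders, and the development-set early stopping described in the supplementary materials is omitted:

    import torch
    import torch.nn as nn

    def train_probe(encodings, labels, n_classes, epochs=50, lr=0.01):
        """Fit a single linear layer to predict a word class from a 256-dim encoding."""
        probe = nn.Linear(encodings.size(1), n_classes)  # e.g., 4 classes for the main auxiliary
        opt = torch.optim.SGD(probe.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(probe(encodings), labels)
            loss.backward()
            opt.step()
        return probe

    # encodings: (n_examples, 256) float tensor; labels: (n_examples,) long tensor
    encodings, labels = torch.randn(1000, 256), torch.randint(0, 4, (1000,))
    probe = train_probe(encodings, labels, n_classes=4)
    accuracy = (probe(encodings).argmax(dim=1) == labels).float().mean()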
We now consider the fourth word and subject noun classifiers. The classifiers trained on the encodings from both types of LSTMs, as well as the GRUs without attention, performed well at both tasks. Crucially, the classifiers trained on the encodings from the GRU with attention did poorly on these tasks. Recall that the main auxiliary could be successfully decoded from the encodings of this architecture. The GRU with attention therefore appears to use its encoding only for information that could not be straightforwardly obtained from linear order, such as the main auxiliary, rather than information that could be obtained from linear order even if, like the subject head noun, that information was hierarchically relevant. On the other hand, the fact that the GRU without attention and both LSTM architectures performed very well at all three tasks suggests that they used their encodings for both linear and hierarchical information. Thus, perhaps the better generalization ability of the GRU with attention arises not from a better ability to encode relevant hierarchical information (all four LSTM and GRU architectures have that ability) but rather from an ability to ignore linear information (Frank & Mathis, 2007).

Table 3: Analysis of output question types based on which auxiliary has been deleted (if any) and which auxiliary has been placed at the start of the sentence. Each number is the percent of GRU + attention outputs across all 100 random initializations that fit that category (the total sums to 65% because only 65% of the questions produced by the networks could be analyzed in that way). 1st and 2nd refer to the first and second auxiliaries in the input.
Comparing RNN Mistakes with Human Mistakes
We now return to the full questions produced by our networks and compare the networks' errors to the types of errors that humans make when acquiring English (Crain & Nakayama, 1987). We restrict ourselves to the GRU with attention networks, as those were the networks that generally produced the correct auxiliary (see Figure 2). Subject-auxiliary inversion can be decomposed into two subtasks: placing an auxiliary at the start of the sentence and deleting an auxiliary within the sentence. Only 65% of the outputs that the 100 networks collectively produced could be interpreted as having been formed by inserting an auxiliary before the sentence and deleting zero or one of the auxiliaries in the sentence. Table 3 breaks down those results based on which auxiliary was preposed and which (if any) was deleted. Two error types are by far the most common. In the first type, the network preposed the second auxiliary but did not delete either of the auxiliaries (could his newt who can giggle could swim from his newt who can giggle could swim). This error type is common among English-learning children (Crain & Nakayama, 1987) and is compatible with hierarchical generalization. In the other frequent error type, the network deleted the first auxiliary and preposed the second; for example, it might generate could his newt who giggle could swim from his newt who can giggle could swim. Such errors were never observed by Crain and Nakayama (1987) and are incompatible with a hierarchical generalization. In other words, though the networks' common error types overlapped with the common error types for humans, the networks also frequently made some mistakes that humans never would.
Conclusions and Future Work
Learners of English acquire the correct hierarchical rule for forming questions even though there are few to no examples in their input that explicitly distinguish this rule from the linear one. This fact has been taken to suggest that learners must be innately constrained to consider only hierarchical syntactic rules. We have investigated whether a learner without such a constraint can learn the hierarchical generalization without the critical disambiguating examples. Based on the behavior of one of the architectures we examined (GRU with attention), the answer to this question appears to be yes. The hierarchical behavior of this non-hierarchically-constrained architecture plausibly arose from the influence of hierarchical cues in the input, a conclusion supported by the fact that the additional hierarchical cue of agreement increased the likelihood that a network would induce hierarchical generalizations.
Our argument has focused on a strong version of the poverty of the stimulus argument which claims that language learners require a hierarchical constraint. However, there remains a milder version which only claims that a hierarchical bias is necessary. This version of the argument is difficult to assess using RNNs because, while RNNs must possess some biases (Mitchell, 1980;Marcus, 2018), the nature of these biases-which likely arise both from the network architecture and from the learning algorithm-is currently poorly understood. However, given the linear way in which they process inputs, it is plausible that all six architectures we used had a bias toward linear order but that the GRU with attention was the only one that overcame this linear bias sufficiently to generalize hierarchically. It is not clear why it was the only architecture to do so; we intend to examine the differences in behavior between the recurrent units in future work.
Two caveats are in order. First, our results only cover restricted fragments of English and may not generalize to the linguistic input that human language learners encounter. In future work, we will replace our artificial languages with a corpus of child-directed speech. Second, even if our findings do generalize to realistic language, we would only be able to conclude that it is possible to solve the task without a hierarchical constraint; humans certainly could have such an innate constraint despite it being unnecessary for this particular task.

Details of the Grammar

Figure 4 contains the context-free grammar used to generate the no-agreement language. 120,000 unique sentences were generated from this grammar as the training set, with each example randomly assigned either the identity task or the question formation task. If a sentence was assigned to the question formation task and contained a relative clause on the subject, it was not included in the training set.
The agreement language was generated from a similar grammar but with the auxiliaries changed to do, does, don't, and doesn't. In addition, to ensure proper agreement, the grammar for the agreement language had separate rules for sentences with singular subjects and sentences with plural subjects, as well as separate rules for relative clauses with singular subjects and relative clauses with plural subjects. Figure 5a shows how frequent each sentence type was based on the types of modifiers present in the sentence and which noun phrases those modifiers were modifying. Figure 5b shows the same statistics for the agreement language. In general, for a given left-hand side in the grammar in Figure 4, all rules with that left-hand side were equally probable; so, for example, one third of noun phrases were unmodified, one third were modified by a prepositional phrase, and one third were modified by a relative clause. The one exception to this generalization is that intransitive sentences with unmodified subjects were rare in both languages. This is because we did not allow any repeated items within or across data sets, and since there were relatively few possible intransitive sentences with unmodified subjects, this uniqueness constraint prevented the unmodified intransitive case from being as common as the modified cases. The no-agreement language has roughly twice as many intransitive sentences with unmodified subjects as the agreement language does because there are twice as many possible sentences of that type for the no-agreement language as for the agreement language, but otherwise the two languages are essentially the same in the distributions of their constructions.
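For illustration, sentence generation from such a grammar can be sketched as below. The production rules and vocabulary shown are a toy stand-in of ours for the full grammar of Figure 4 (not reproduced here), so the specific rules are assumptions:

    import random

    # Toy stand-in for the fragment grammar; the real rule set is in Figure 4.
    GRAMMAR = {
        "S":   [["NP", "AUX", "VP", "."]],
        "NP":  [["DET", "N"], ["DET", "N", "RC"], ["DET", "N", "PP"]],
        "RC":  [["who", "AUX", "V"]],
        "PP":  [["by", "DET", "N"]],
        "VP":  [["V"], ["V", "NP"]],
        "DET": [["the"], ["my"], ["your"]],
        "N":   [["newt"], ["yak"], ["walrus"], ["zebra"]],
        "AUX": [["can"], ["could"], ["will"], ["would"]],
        "V":   [["giggle"], ["confuse"], ["amuse"]],
    }

    def generate(symbol="S"):
        """Expand a nonterminal by choosing uniformly among its productions."""
        if symbol not in GRAMMAR:
            return [symbol]  # terminal word
        production = random.choice(GRAMMAR[symbol])
        return [w for sym in production for w in generate(sym)]

    print(" ".join(generate()))  # e.g., "the newt can confuse my yak ."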
Neither language exhibited recursion. This is because relative clauses and prepositional phrases could only modify matrix noun phrases but not noun phrases within relative clauses or prepositional phrases. Thus, both languages contained a finite number of sentences, though this finite number is very large (greater than 10^15), orders of magnitude larger than the number of sentences present in the training set (120,000).

Details of the Architecture

Figure 1 (reproduced here as Figure 6) depicts the basic sequence-to-sequence architecture underlying all of our experiments. Here we elaborate on the different components of this architecture.
The network consists of two components, the encoder and the decoder, both of which are RNNs. The encoder's hidden state is initialized at E_0 as a 256-dimensional vector of all zeros. The network is then fed the first word of the input sentence, represented in a distributed manner as a 256-dimensional vector (i.e., an embedding) whose elements are learned during training. The encoder uses this distributed representation of the first word, along with the initial hidden state, to generate the next hidden state, E_1. The component that performs this hidden state update is called the encoder's recurrent unit. Each subsequent word of the input sentence is then fed into the network, turned into its distributed representation learned by the network, and passed through the recurrent unit along with the previous hidden state to generate the next hidden state.
Once all of the input words have been passed through the encoder, the final hidden state of the encoder is used as the initial hidden state of the decoder, D_0. This hidden state and a special start-of-sentence token (also represented by a 256-dimensional distributed representation that is learned during training) are passed as inputs to the decoder's recurrent unit, which outputs a new 256-dimensional vector as the next decoder hidden state, D_1. A copy of this new hidden state is also passed through a linear layer whose output is a vector with a length equal to the vocabulary size. The softmax function is then applied to this vector (so that its values sum to 1 and all fall between 0 and 1). Then, the element of this vector with the highest value is taken to correspond to the output word for that timestep; this correspondence is determined by a dictionary relating each index in the vector to a word in the vocabulary. For the next time step of decoding, this just-outputted word is converted to a distributed representation and is then taken as an input to the decoder's recurrent unit, along with the previous decoder hidden state, to generate the next decoder hidden state and the next output word. Once the outputted word is an end-of-sequence token (either IDENT or QUEST), decoding stops and the sequence of outputted words is taken as the output sentence. At all steps of this decoding process, whenever a distributed representation is used, dropout (Srivastava, Hinton, Krizhevsky, Sutskever, & Salakhutdinov, 2014) with a proportion of 0.1 is applied to the vector, meaning that each of its values will with 10% probability be turned to 0. This practice is meant to combat overfitting of the network's parameters.

Figure 6: Basic sequence-to-sequence neural network without attention.
There are two main ways in which we varied this basic architecture. The first was the use of an attention mechanism, depicted in Figure 7, which is a modification to the decoder's recurrent unit. The attention mechanism adds a third input (which we refer to as the attention-weighted sum) to the decoder recurrent unit. This attention-weighted sum is determined as follows: first, the previous hidden state and the distributed representation of the previous output word are passed through a linear layer whose output is a vector of length equal to the number of words in the input sentence. This vector is the vector of attention weights. Each of these weights is then multiplied by the hidden state of the encoder at the encoding time step equal to that weight's index. All of these products are then added together to give the attention-weighted sum, which is passed as an input to the decoder recurrent unit along with the previous output word and the previous hidden state.
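The attention computation just described can be sketched as follows. This is an illustrative reimplementation of ours based solely on the description above; the softmax normalization of the weights is our assumption (following the PyTorch sequence-to-sequence tutorial from which the hyperparameters were taken), and the class and argument names are placeholders:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DescribedAttention(nn.Module):
        """Weights come from one linear layer over [previous hidden state; previous word embedding]."""
        def __init__(self, hidden_size=256, max_input_len=20):
            super().__init__()
            self.attn = nn.Linear(hidden_size * 2, max_input_len)

        def forward(self, prev_hidden, prev_word_emb, encoder_states):
            # encoder_states: (input_len, hidden_size), one row per encoder time step
            scores = self.attn(torch.cat([prev_hidden, prev_word_emb], dim=-1))
            weights = F.softmax(scores[: len(encoder_states)], dim=-1)
            # Attention-weighted sum of the encoder hidden states
            return weights @ encoder_states

    att = DescribedAttention()
    context = att(torch.zeros(256), torch.zeros(256), torch.randn(7, 256))  # a 256-dim summary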
Second, we also vary the structure of the recurrent unit used for the encoder and decoder. The three types of recurrent units we experiment with are simple recurrent networks (SRNs) (Elman, 1990), gated recurrent units (GRUs) (Cho et al., 2014), and long short-term memory (LSTM) units (Hochreiter & Schmidhuber, 1997). For all three of these types of recurrent units, we use the default PyTorch implementations, which are described in the next few paragraphs.

The SRN concatenates its inputs, passes the result of the concatenation through a linear layer whose output consists of linear combinations of the elements of the input vector, and finally applies the hyperbolic tangent function to the result to create a vector whose values are mostly either very close to -1 or very close to +1. This hidden state update can be expressed with the following equation:

D_i = tanh(W[w_{i-1}, D_{i-1}] + b)

where D_i is the i-th hidden state of the decoder, w_i indicates the i-th output word, W is a matrix of learned weights, b is a learned vector called the bias term, and [v_1, v_2, ...] indicates the concatenation of vectors v_1, v_2, .... If attention is used, this equation then becomes

D_i = tanh(W[w_{i-1}, D_{i-1}, A_i] + b)

where A_i is the i-th attention-weighted sum.

The GRU adds several internal vectors called gates to the basic SRN structure. Specifically, these gates are called the reset gate r_t, the input gate z_t, and the new gate n_t, each of which has a corresponding matrix of weights (W_r for r_t, W_z for z_t, and two separate matrices W_nw and W_nD for n_t). The reset and input gates both take the previous hidden state and the previously outputted word (as a distributed representation) as inputs. The new gate also takes these two inputs as well as the reset gate as a third input. The next hidden state is then generated as the product of the input gate and the previous hidden state plus the product of one minus the input gate times the new gate. This can be thought of as the input gate determining which elements of the hidden state to preserve and which to change. The elements to be preserved are preserved through the term that is the product of the input gate times the previous hidden state, while the elements to be changed are determined through the term that is the product of one minus the input gate times the new gate; the new gate here determines what the updated values for these changed terms should be. Overall the GRU update can be expressed with the following equations (sigma indicates the sigmoid function and * indicates elementwise multiplication):

r_t = sigma(W_r[w_{t-1}, D_{t-1}] + b_r)
z_t = sigma(W_z[w_{t-1}, D_{t-1}] + b_z)
n_t = tanh(W_nw w_{t-1} + b_nw + r_t * (W_nD D_{t-1} + b_nD))
D_t = z_t * D_{t-1} + (1 - z_t) * n_t

Like the GRU, the LSTM also uses gates, specifically the input gate i_t, forget gate f_t, cell gate g_t, and output gate o_t. Furthermore, while the other architectures all just use the hidden states as the memory of the network, the LSTM adds a second vector called the cell state c_t that acts as another persistent state that is passed from time step to time step. These components interact according to the following equations to produce the next hidden state and cell state:

i_t = sigma(W_i[w_{t-1}, D_{t-1}] + b_i)
f_t = sigma(W_f[w_{t-1}, D_{t-1}] + b_f)
g_t = tanh(W_g[w_{t-1}, D_{t-1}] + b_g)
o_t = sigma(W_o[w_{t-1}, D_{t-1}] + b_o)
c_t = f_t * c_{t-1} + i_t * g_t
D_t = o_t * tanh(c_t)

For each pair of an architecture and a language, we trained 100 networks with different random initializations, for a total of 1200 trained networks. The networks were trained using stochastic gradient descent with the negative log likelihood objective function for 30,000 batches with a batch size of 5 (meaning that some training examples were seen more than once), a dropout rate of 0.1, and a learning rate of 0.01 (for the GRUs and LSTMs) or 0.001 (for the SRNs).
All networks used 256-dimensional hidden states and trained 256-dimensional vector representations of words. All parameter values were taken from a PyTorch tutorial on sequence-to-sequence networks, except that the learning rate for SRNs was lowered because these networks did not converge with the default learning rate.
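For orientation, an encoder-decoder pair with the stated hyperparameters (256-dimensional hidden states and embeddings, dropout 0.1, SGD with the negative log likelihood loss) can be assembled as in the sketch below. This is our own minimal reconstruction rather than the authors' code; it omits attention and batching, and the vocabulary size and token indices are placeholders:

    import torch
    import torch.nn as nn

    VOCAB_SIZE, HIDDEN = 66 + 4, 256  # 66 words plus special tokens; sizes from the text

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
            self.rnn = nn.GRU(HIDDEN, HIDDEN)  # swap in nn.RNN or nn.LSTM for the other units

        def forward(self, tokens):                 # tokens: (seq_len,) word indices
            emb = self.embed(tokens).unsqueeze(1)  # (seq_len, 1, HIDDEN)
            states, final = self.rnn(emb)          # final: (1, 1, HIDDEN), the sentence encoding
            return states, final

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
            self.drop = nn.Dropout(0.1)
            self.rnn = nn.GRU(HIDDEN, HIDDEN)
            self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

        def forward(self, prev_token, hidden):
            emb = self.drop(self.embed(prev_token)).view(1, 1, -1)
            output, hidden = self.rnn(emb, hidden)
            return torch.log_softmax(self.out(output[0]), dim=-1), hidden

    enc, dec = Encoder(), Decoder()
    opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=0.01)
    loss_fn = nn.NLLLoss()

    tokens = torch.tensor([5, 12, 3])                # a toy input sentence (word indices)
    _, hidden = enc(tokens)
    logprobs, hidden = dec(torch.tensor(1), hidden)  # one decoding step from a start token
    opt.zero_grad()
    loss = loss_fn(logprobs, torch.tensor([7]))      # compare against the next target word
    loss.backward()
    opt.step()                                       # one SGD update (lr 0.01 as stated above)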
Details of the Linear Classifiers
Each linear classifier consisted of a single linear layer which took as its input a 256-dimensional vector (specifically, the encoding of a sentence) and outputted a vector of dimension equal to the number of possible values for the feature used as the basis of classification (4 for the main auxiliary, 28 for the fourth word, or 26 for the subject noun). For example, since there are four auxiliaries, the main auxiliary classifier had an output of dimensionality 4. The chance baseline for each task is thus 1/n, where n is the number of possible classes for that task. Each element in this output corresponded to a specific value for the feature being used as the basis for classification, and for a given input the element of the output with the highest value was taken as the classification for that input. The sentence encodings were randomly split into a training set (75% of the encodings), a development test set (5% of the encodings), and a test set (20% of the encodings), none of which overlapped. The weights of the classifier were trained on the training set using stochastic gradient descent, and training stopped when the cross entropy loss computed over the development test set ceased to improve. Classification accuracy was then determined based on the withheld test set. In addition to the information gleaned from the sentence encoding, the decoder may also access information about the input sentence through attention, but here we did not analyze the contribution of attention because it was not present in all architectures and because we wished to use an analysis method that could compare all six architectures.

Examples of uncategorized errors

Table 3 categorized the 65% of GRU with attention outputs that could be viewed as taking the input, inserting an auxiliary at its start, and deleting either zero or one auxiliary from within the input. Here we show some examples of the remaining 35% of the outputs. Many of these outputs fall into certain common error types, including truncation of the output (6), repetition of words (7), and turning a relative clause into a prepositional phrase (8), but many outputs do not fall into one of those categories (9, 10, 11).

(9) Other
a. Input: the newt who some quails do accept doesn't comfort the ravens that do move .
b. Output: do the newt who some quails doesn't accept doesn't comfort the ravens that do high five our yaks .
(10) Other
a. Input: the yak who does applaud your newts doesn't entertain our walruses .
b. Output: does the yak who the yak newts doesn't entertain our walruses ?
(11) Other
a. Input: your vulture who does entertain our orangutan doesn't comfort my walruses above your newts .
b. Output: does your vulture entertain some peacock upon our ravens my walruses ? | 2018-04-03T04:37:21.068Z | 2018-02-25T00:00:00.000 | {
"year": 2018,
"sha1": "dd9dc03c2f72724f91bc84d92e4e82160aac4523",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "714c9b0a90a2f931e7d3673952858de3e43fbb02",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Psychology"
]
} |
55534331 | pes2o/s2orc | v3-fos-license | Null surgery on knots in L-spaces
Let $K$ be a knot in an L-space $Y$ with a Dehn surgery to a surface bundle over $S^1$. We prove that $K$ is rationally fibered, that is, the knot complement admits a fibration over $S^1$. As part of the proof, we show that if $K\subset Y$ has a Dehn surgery to $S^1 \times S^2$, then $K$ is rationally fibered. In the case that $K$ admits some $S^1 \times S^2$ surgery, $K$ is Floer simple, that is, the rank of $\hat{HFK}(Y,K)$ is equal to the order of $H_1(Y)$. By combining the latter two facts, we deduce that the induced contact structure on the ambient manifold $Y$ is tight. In a different direction, we show that if $K$ is a knot in an L-space $Y$, then any Thurston norm minimizing rational Seifert surface for $K$ extends to a Thurston norm minimizing surface in the manifold obtained by the null surgery on $K$ (i.e., the unique surgery on $K$ with $b_1>0$).
Introduction
Heegaard Floer homology, introduced by Ozsváth and Szabó, produces a package of invariants of three- and four-dimensional manifolds [OS04c]. One example is $\widehat{HF}(Y)$, which associates a graded abelian group to a closed three-manifold $Y$. When $Y$ is a rational homology sphere, $\operatorname{rk} \widehat{HF}(Y) \geq |H_1(Y)|$ [OS04b]. If equality is achieved, $Y$ is called an L-space. The name stems from the fact that lens spaces are L-spaces. More generally, all connected sums of manifolds with elliptic geometry are L-spaces [OS05b].
1.1. Knots in L-spaces with fibered surgeries. In an unpublished manuscript [Ber], Berge gave a conjecturally complete list of knots in $S^3$ admitting lens space fillings. The Berge conjecture roots in the classification of lens space surgeries on torus knots [Mos71], followed by notable examples of lens space fillings on non-torus knots [BR77, Ber91, BL89, FS80, Gab89, Gab90, Wan89, Wu90]. In recent years, techniques from Heegaard Floer homology were applied to give deeper insight on the fiberedness [Ni09], positivity [Hed07], and various notions of simplicity of knots in $S^3$ with lens space, or more generally L-space, surgeries [OS05b, HP13, Hed11, Ras07]. The theme of the present work, in part, is to study the analogous properties of such knots when $S^3$ is replaced by $S^1 \times S^2$. It is often convenient to view the problem from the perspective of surgery along a knot in an L-space. Note that a knot $L \subset S^1 \times S^2$ on which Dehn surgery yields an L-space $Y$ induces a dual knot $K \subset Y$, the core of the surgery solid torus. By removing the interior of a neighborhood of $K \subset Y$ and undoing the original Dehn surgery, it follows that $K$ admits a surgery producing $S^1 \times S^2$. One way to obtain an example of a knot in an L-space with some $S^1 \times S^2$ surgery is as follows. Start with a solid torus $V = S^1 \times D^2$ with meridian $\mu$. Let $K \subset V$ be a Berge-Gabai knot, i.e., $K$ has a non-trivial solid torus filling [Gab90]. Therefore, there is a slope $\alpha$ such that $V' = V_\alpha(K)$ is another solid torus, with meridian $\mu' \neq \mu$. Note that Dehn filling $V$ along $\mu'$ will give a lens space $Z$. Then $K$, when viewed as a knot in $Z$, has an $S^1 \times S^2$ surgery; namely, $Z_\alpha(K)$ has a genus one Heegaard splitting with the property that the meridians of the two solid tori coincide (this common meridian is $\mu'$).
In [BBL16], Baker, Buck, and Lecuona proposed a classification of knots in $S^1 \times S^2$ with a longitudinal surgery to a lens space. Cebanu proved that the complement of a knot in $S^1 \times S^2$ that has a lens space filling admits a fibration over the circle [Ceb12, Theorem 3.7.1]. More precisely, he first proved that any knot $K$ in a lens space $Y$ with some $S^1 \times S^2$ surgery is Floer simple. Moreover, $K$, as a knot in the lens space $Y$, lies in the homology class of a simple knot with some $S^1 \times S^2$ surgery. (See [Hed11] for the definition of a simple knot in a lens space.) Such a simple knot is a priori known to be fibered. Finally, he appealed to the fact that the complement of a Floer simple knot $K$ in the lens space $Y$ admits a fibration over $S^1$ if and only if the simple knot in the homology class of $[K]$ has a fibered complement over $S^1$ [NW14, Corollary 5.3]. We point out that Cebanu proved his result by checking that all simple knots in lens spaces admitting $S^1 \times S^2$ surgeries are fibered; therefore, his proof is specific to the case of a lens space (and not an L-space in general). Building on the work of J. and S. Rasmussen [RR17], we give a novel proof of the more general case (obtained by replacing lens spaces with L-spaces).
Theorem 1.1. Suppose $L \subset S^1 \times S^2$ is a knot with some L-space surgery. Then the complement of $L$ in $S^1 \times S^2$ admits a fibration over $S^1$.
If we replace $S^1 \times S^2$ with $S^3$ in Theorem 1.1, then we get the well-known result that a knot in $S^3$ which admits an L-space surgery is fibered [Ni07, Corollary 1.3].
A knot $K \subset Y$ is Floer simple if $\operatorname{rk} \widehat{HFK}(Y, K) = \operatorname{rk} \widehat{HF}(Y)$. Floer simple knots in L-spaces often appear in the problem of L-space surgery. For example, if the $p$-surgery on a knot $L \subset S^3$ yields an L-space $Y$, then the dual knot of the surgery will be a Floer simple knot in $Y$, provided that $p$ is an integer greater than $2g(L) - 1$ [Hed11, Ras07]. It turns out that a similar result holds in the case of $S^1 \times S^2$ in place of $S^3$:

Proposition 1.2. If $K$ is a knot in an L-space $Y$ with some $S^1 \times S^2$ surgery, then $K$ is Floer simple.
Definition 1.3. Let $K$ be a rationally null-homologous oriented knot in an oriented closed three-manifold $Y$, let $\nu(K)$ be a tubular neighborhood of $K$, and let $\nu^\circ(K)$ denote the interior of $\nu(K)$. A properly embedded oriented surface $F \subset Y \setminus \nu^\circ(K)$ is called a rational Seifert surface for $K$ if $\partial F$ consists of coherently oriented parallel curves on $\partial\nu(K)$, $F$ has no closed component, and the orientation of $\partial F$ is coherent with the orientation of $K$. The knot $K$ is rationally fibered if the complement of $K$ in $Y$ fibers over $S^1$. In this paper, we often omit "rationally" when a knot is rationally fibered.
Given a knot $K$ in a rational homology sphere $Y$, its null slope is the unique slope $\alpha$ on $\partial\nu(K)$ for which the Dehn filling $Y_\alpha$ satisfies $b_1(Y_\alpha) > 0$; the corresponding surgery is called the null surgery on $K$.

Theorem 1.5. Let $K$ be a knot in an L-space $Y$ with null slope $\alpha$. If $Y_\alpha$ is a surface bundle over $S^1$, then $K$ is rationally fibered.

The above theorem does not hold for an arbitrary rational homology sphere $Y$. For example, we can choose a knot $L \subset S^1 \times S^2$ with nonzero winding number such that the complement of $L$ is not a surface bundle over $S^1$. Then any nontrivial surgery on $L$ will be a rational homology sphere $Y$, and the null surgery on the dual knot $K \subset Y$ is $S^1 \times S^2$, while $K$ is not rationally fibered.
When $K$ is a null-homologous knot in $Y$, Theorem 1.6 is just [Ni07, Corollary 1.4]. The idea of the proof of Theorem 1.5 is inspired by that of [OS04a, Corollary 4.5]; also, a similar idea is used to prove [Ni07, Corollary 1.4]. The heart of the argument lies in showing that, for an appropriately chosen Spin$^c$ structure, the plus version of the Heegaard Floer homology of $Y_\alpha$ is isomorphic to the hat version of the knot Floer homology of $K$ in its bottommost Alexander grading. This is achieved by comparing two exact triangles which differ at only one vertex, and the groups at these distinguished vertices are the two homology groups we aim to prove are isomorphic. See Section 2 for the relevant definitions.
Since we work with rationally null-homologous knots instead of null-homologous knots as in [Ni07, Corollary 1.4], we encounter new difficulties. One difficulty is that the null slope is not necessarily a framing, and thus we do not directly have the exact triangles we want. To solve this problem, we use a trick from [OS11] to present the null surgery as a Morse surgery on the connected sum of $K$ and a knot in a lens space. A simple combinatorial argument (Corollary 5.2) shows that we can reduce the general case to this special case of Morse surgery. Another difficulty is that different Spin$^c$ structures over $Y$ may intertwine in the maps of the exact triangles. To solve this problem, we need to carefully analyze the Spin$^c$ structures. A key technical result we use is Lemma 5.6, which controls the intertwining of the Spin$^c$ structures.
Since $Y_\alpha$ is a surface bundle over $S^1$, its Floer homology in the specified Spin$^c$ structure is of rank one. Therefore, the knot Floer homology of $K$ in its bottommost grading will be of rank one. That is, $K$ is fibered. Following from the proof of Theorem 1.5, we get:

Theorem 1.6. Let $K$ be a knot in an L-space $Y$ with the null slope $\alpha$. For a Thurston norm minimizing rational Seifert surface $F$ of $K$, the extension $\widehat{F} \subset Y_\alpha$ of $F$ is also Thurston norm minimizing.
1.2. Fibered, Floer simple knots and the rational-valued $\tau$ invariant. In [OS03], Ozsváth and Szabó introduced an invariant $\tau(K)$ associated to a knot $K \subset S^3$. (See also [Ras03].) In Section 2, we define this invariant for a knot in a rational homology sphere $Y$, analogous to the integer-valued invariant in the case $Y = S^3$. The difference, in this more general setting, is that there will be as many $\tau$ invariants as the number of Spin$^c$ structures on $Y$. Moreover, since the invariant, by definition, is a function of the Alexander grading of the generators of $\widehat{CFK}(Y, K)$, the values that $\tau$ takes will be rational.
In [Ni09], the first author defines an affine function $\widehat{c}$ on $\mathrm{Spin}^c(Y, K)$, which is basically one half of the first Chern class, shifted by an appropriate cohomology class. The knot Floer homology provides a function $y$, given by
$$y(h) = \max_{\{\xi \in \mathrm{Spin}^c(Y,K) \,:\, \widehat{HFK}(Y,K,\xi) \neq 0\}} \langle \widehat{c}(\xi), h \rangle.$$
When $Y = S^3$ and $h$ is a generator of $H_2(Y, K; \mathbb{Z})$ (e.g. represented by a Seifert surface for $K$), it follows that $y(h) = g(K)$. If $K \subset Y$ is fibered (e.g. when $Y$ is an L-space and $K$ admits an $S^1 \times S^2$ surgery; cf. Theorem 1.1), we get a contact structure $\xi_K$ compatible with the rational open book decomposition specified by $(Y, K)$.

Proposition 1.7. Let $K$ be a fibered, Floer simple knot in a rational homology sphere $Y$, endowed with a rational Seifert surface $F$. The following two equivalent statements hold:
(1) The contact structure induced by the rational open book decomposition corresponding to the fibration of $(Y, K)$ is tight.
(2) There exists a Spin$^c$ structure $s$ on $Y$ such that $\tau(Y, K, s) = \frac{q - \chi(F)}{2q}$ ($q$ is the intersection number of the meridian $\mu$ of $K$ with $\partial F$ in $\partial\nu(K)$).

When $Y = S^3$, Proposition 1.7 reduces to [Hed07, Items (2) and (4) of Proposition 2.1]. The main ingredient used in the proof of Proposition 1.7 is the non-vanishing of the Heegaard Floer contact invariant associated to $K$. Hedden and Plamenevskaya, in [HP13], introduced a contact invariant for a fibered knot $K$ in a closed three-manifold $Y$. The invariant is the image of the generator of the homology of the bottom filtered subcomplex in the Heegaard Floer homology of $Y$ under the natural map
$$H_*\big(\mathcal{F}(-Y, K, \min)\big) \longrightarrow \widehat{HF}(-Y),$$
where $-Y$ is the manifold $Y$ with opposite orientation. To prove Proposition 1.7, it will be straightforward to check that the Heegaard Floer contact invariant associated to $K$ is non-zero, and therefore, the contact structure induced by $K$ is tight [HP13].
From the proof of Proposition 1.7, we get the following corollary that may be of independent interest:

Corollary 1.8. Let $K$ be a fibered, Floer simple knot in a rational homology sphere $Y$, endowed with a rational Seifert surface $F$. There exists a Spin$^c$ structure $s$ on $Y$ such that $\tau(Y, K, s) = \frac{q - \chi(F)}{2q}$.

Combining Theorem 1.1 and Propositions 1.7 and 1.2, we get the following theorem:

Theorem 1.9. Let $K$ be a knot in an L-space $Y$ such that $K$ admits an $S^1 \times S^2$ surgery. Let also $F$ be a minimal genus rational Seifert surface for $K$. The following two statements hold:
(1) $c(\xi_K) \neq 0$, where $c(\xi_K)$ is the Heegaard Floer contact invariant associated to the contact structure $\xi_K$ coming from the open book of $(Y, K)$.
(2) There exists a Spin$^c$ structure $s$ on $Y$ such that $K$ satisfies $\tau(Y, K, s) = \frac{q - \chi(F)}{2q}$.

Indeed, it follows that for a fibered, Floer simple knot $K$ in a rational homology sphere $Y$, the two conclusions of the theorem are equivalent.
(2) There exists a Spin c structure s on Y such that K satisfies τ (Y, Indeed, it follows that for a fibered, Floer simple knot K in a rational homology sphere Y , the two conclusions of the theorem are equivalent. 1.3. Notation. We fix some notation that will be used throughout the paper. The singular homology and cohomology groups are all taken over the ring of integers Z, unless a different coefficient ring is specified. Unless noted otherwise, Y denotes a rational homology sphere. We let K be an oriented knot in Y , and M = Y \ ν • (K). We choose an oriented longitude λ ∈ H 1 (∂M ) whose orientation is coherent with the orientation of K. Let µ ∈ H 1 (∂M ) be a meridian of K with the property that µ · λ = 1 with respect to the orientation on ∂M induced by ∂ν(K). Let Y n denote the manifold obtained by Dehn filling M along the curve n · µ + λ. In particular, Y 0 denotes the filling of M (surgery on K) along λ. The null slope of K ⊂ Y is denoted α, and that the surgery on K with slope α is denoted Y α . Lastly, we often use the terms "longitude" and "framing": both refer to a slope at distance one from the meridian µ.
1.4. Organization. The rest of the paper is organized as follows. Section 2 provides background from Heegaard Floer homology. Section 3 proves Proposition 1.7. Section 4 proves Theorem 1.1 and Proposition 1.2. Section 5 is devoted to some preliminary lemmas, followed by the proofs of Theorems 1.5 and 1.6. The final section addresses potential directions for future research.

Acknowledgments. The first author was partially supported by grant numbers 1103976 and DMS-1252992 and an Alfred P. Sloan Research Fellowship; F. V. was partially supported by an NSF Simons travel grant.
Background
In this section we provide the Heegaard Floer homology background en route to proving the main results of the paper.
2.1. Knot Floer homology. The primary goal of this subsection is to recall the construction of knot Floer homology. We start by briefly reviewing the construction of a doubly pointed Heegaard diagram for a knot K in a closed three-manifold Y [OS04a,OS11]. Throughout the subsection, we mainly use the notation of [OS11].
Let $(\Sigma, \boldsymbol{\alpha}, \boldsymbol{\beta}, w, z)$ be a doubly pointed Heegaard diagram for $K \subset Y$, in the following sense. Here, $\Sigma$ is an oriented surface of genus $g$, $\boldsymbol{\alpha} = \{\alpha_1, \dots, \alpha_g\}$ is a $g$-tuple of homologically linearly independent, pairwise disjoint, simple closed curves in $\Sigma$, and so is $\boldsymbol{\beta} = \{\beta_1, \dots, \beta_g\}$. The two points $w$ and $z$ lie on $\Sigma - \alpha_1 - \cdots - \alpha_g - \beta_1 - \cdots - \beta_g$. The curves $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ specify a pair of handlebodies $U_\alpha$ and $U_\beta$ with common boundary $\Sigma$. We require that $(\Sigma, \boldsymbol{\alpha}, \boldsymbol{\beta}, w)$ is a Heegaard diagram for $Y$, and also that the knot $K$ is the union of two arcs $K_\alpha$, $K_\beta$, where $K_\alpha \subset U_\alpha$ is an unknotted arc connecting $z$ to $w$ and is disjoint from the disks attached to $\alpha_1, \dots, \alpha_g$, and $K_\beta \subset U_\beta$ is an unknotted arc connecting $w$ to $z$ and is disjoint from the disks attached to $\beta_1, \dots, \beta_g$.
Spin c structures on Y can be seen as homology classes of non-vanishing vector fields, forming an affine space over H 2 (Y). Two nowhere vanishing vector fields on Y are homologous if they are homotopic on the complement of a ball embedded in Y. From the combinatorics of the Heegaard diagram one can construct a function s w : T α ∩ T β → Spin c (Y), where T α and T β are two totally real half-dimensional tori in the symmetric product Sym g (Σ), which is endowed with an almost complex structure. The map s w sends an intersection point x to the homology class of a vector field. There is also a relative version Spin c (Y, K). It consists of homology classes of vector fields on the knot complement M which point outwards at the boundary; one has an analogous map s w,z : T α ∩ T β → Spin c (Y, K). There is another equivalent definition of relative Spin c structure in the literature [OS08], where the boundary condition is that the vector field on ∂M is the (up to isotopy) canonical vector field tangent to ∂M. Let ξ ∈ Spin c (Y, K) be represented by the homology class of a vector field v. The Spin c structure [−v], denoted J(ξ), is called the conjugate of ξ. It is clear that J(J(ξ)) = ξ. Equivalently, a relative Spin c structure on (Y, K) is a nowhere vanishing vector field on Y that contains K as a closed orbit. Similar to the closed case, Spin c (Y, K) is an affine space over H 2 (Y, K).
There is a natural map G Y,K : Spin c (Y, K) → Spin c (Y) which is equivariant with respect to the action by H 2 (Y, K). That is, letting π : H 2 (Y, K) → H 2 (Y) be the map induced by the inclusion, we have G Y,K (ξ + a) = G Y,K (ξ) + π(a) for each a ∈ H 2 (Y, K). Given a doubly pointed Heegaard diagram (Σ, α, β, w, z) which represents a rationally nullhomologous knot K ⊂ Y and ξ ∈ Spin c (Y, K), Ozsváth and Szabó construct a (Z ⊕ Z)-filtered chain complex CF K ∞ (Y, K, ξ). The generating set is the subset of triples
[x, i, j], with s w,z (x) + (i − j)P D[µ] = ξ. (4)
The differential counts certain pseudo-holomorphic disks connecting the generators, with the boundary mapping to T α ∪ T β . The two basepoints w and z give rise to codimension 2 submanifolds {w} × Sym g−1 (Σ), respectively {z} × Sym g−1 (Σ), of Sym g (Σ). More precisely, the chain complex is endowed with the differential
∂[x, i, j] = Σ_{y ∈ T α ∩ T β} Σ_{φ ∈ π 2 (x,y), µ(φ)=1} #( M(φ)) · [y, i − n w (φ), j − n z (φ)],
where π 2 (x, y) denotes the set of homotopy classes of Whitney disks connecting x and y, µ(φ) is the Maslov index of φ, #( M(φ)) is the count of holomorphic representatives of φ, n w (φ) = #φ ∩ ({w} × Sym g−1 (Σ)), and similarly for n z (φ).
Although, by construction, the chain complex depends on the choice of a doubly pointed Heegaard diagram and also a representative of ξ, Ozsváth and Szabó proved that its filtered chain homotopy type is an invariant of the triple (Y, K, ξ), as the notation suggests. Let CF K(Y, K, ξ) be the sub-quotient complex of CF K ∞ (Y, K, ξ) with i = j = 0, endowed with the induced differential ∂. Its homology, denoted HF K(Y, K, ξ), is trivial for all but finitely many ξ ∈ Spin c (Y, K).
The knot Floer homology HF K(Y, K) is a finitely generated abelian group (with an absolute grading) that decomposes as a direct sum HF K(Y, K) = ⊕_{ξ ∈ Spin c (Y,K)} HF K(Y, K, ξ).
2.2. The rational-valued τ invariant. In this subsection we define the rational-valued τ invariant associated to a knot K in a rational homology sphere Y. Suppose that F is a rational Seifert surface for K. As in Subsection 2.1, let (Σ, α, β, w, z) be a doubly pointed Heegaard diagram for (Y, K). There exists a unique affine map A : T α ∩ T β → Q, which we refer to as the Alexander grading. Note that A does not depend on the choice of a rational Seifert surface. The Alexander grading gives rise to a filtration F on CF (Y) in the standard way, i.e. we let F(Y, K, m) be the subgroup of CF (Y) generated by the intersection points x ∈ T α ∩ T β with A(x) ≤ m. Positivity of intersections of J-holomorphic Whitney disks with the hypersurfaces determined by z and w ensures that F(m) is a subcomplex; that is, ∂F(m) ⊂ F(m) and hence F defines a filtration. We have a finite sequence of inclusions F(Y, K, m) ⊂ F(Y, K, m′) for m ≤ m′, where the finiteness of the sequence follows from the fact that the number of intersection points x ∈ T α ∩ T β is finite. Let ι m : F(Y, K, m) → CF (Y) be the inclusion map, and let (ι m ) * be the induced map on homology.
Following [Hed08], we make the following definition of the rational-valued τ invariant.
Definition 2.1. Using the notation of Subsection 2.2, let K be a knot in a rational homology sphere Y, endowed with a rational Seifert surface F. Given s ∈ Spin c (Y) and a ∈ HF (Y, s), define τ a (Y, K, s) = min{ m ∈ Q : a ∈ Im (ι m ) * }. Note that the minimum is actually attained: see (8). It is straightforward to check that when Y = S 3 , τ (Y, K, s) agrees with the integer-valued τ (K) (defined in [OS03]).
Remark 2.2. When Y = S 3 , it is known that τ (K) gives a lower bound on the four-ball genus [OS03, Corollary 1.3]. Raoux, in [Rao16], has given a slightly different definition of τ (Y, K, s); she has studied various properties of the rational-valued invariant and, in particular, proves a generalization of the genus bound result.
2.3. Heegaard Floer homology of large surgeries, and a relevant exact sequence. We start by reviewing the "large surgery formula" for a rationally null-homologous knot in Y. For a more detailed discussion, see [OS11]. Let K ⊂ Y be an oriented knot endowed with a framing λ, where λ is thought of as the push-off of K inside Y using the framing. Let [K], as an element of H 1 (Y), be of order p. For a fixed ξ ∈ Spin c (Y, K), let C ξ be the chain complex CF K ∞ (Y, K, ξ). There are two canonical projection maps
v + ξ : C ξ {max(i, j) ≥ 0} → C ξ {i ≥ 0}, h + ξ : C ξ {max(i, j) ≥ 0} → C ξ {j ≥ 0}. (9)
Since the (Z ⊕ Z)-filtered chain homotopy type of C ξ is an invariant of the triple (Y, K, ξ), the chain homotopy classes of the maps v + ξ , h + ξ are also invariants of the triple (Y, K, ξ). Let W n (K) be the cobordism obtained from turning around the two-handle cobordism from −Y to −Y n (see Section 1.3 for the definition of Y n ). It is easy to verify that H 2 (W n (K)) ∼= Z, where the generator is the class of the capped off rational Seifert surface in W n (K). As in [OS11, Proposition 2.2], there is a well-defined map E Y,n,K : Spin c (W n (K)) → Spin c (Y, K) that restricts a Spin c structure on the four-manifold to the knot complement. We point out that E Y,n,K depends on the choice of λ, a longitude for K, which we fixed at the beginning of the subsection. Let S be the core of the two-handle attached to Y in the cobordism W n (K). We orient S so that its boundary orientation is coherent with the orientation of K. When n is sufficiently large, the two-handle cobordism is a negative definite four-manifold, and therefore the self-intersection number of S is negative.
The following theorem relates the Heegaard Floer complex of large surgeries on K ⊂ Y to the knot Floer complex associated to (Y, K).
Theorem 2.3. [OS11, Theorem 4.1] Let K ⊂ Y be a rationally null-homologous knot in a closed, oriented three-manifold, equipped with a framing λ. Then, for all sufficiently large n, there is a map Ξ : Spin c (Y n ) → Spin c (Y, K) with the property that for all t ∈ Spin c (Y n ), the chain complex CF + (Y n , t) is represented by the chain complex C Ξ(t) {max(i, j) ≥ 0}, in the sense that there are isomorphisms with the property that the maps v + ξ and h + ξ correspond to the maps induced by the cobordism W n (K) equipped with Spin c structures x and y, respectively.
Throughout the proof of Theorem 1.5 we use a surgery exact triangle relating the Floer homologies of Y, Y 0 , and Y n . Before stating the sequence we make some notational conventions. Fix t ∈ Spin c (Y n ). We define [t] Yn = t + ⟨P D[λ]⟩ ⊂ Spin c (Y n ), where ⟨P D[λ]⟩ denotes the cyclic group generated by P D[λ] ∈ H 2 (Y n ). Correspondingly, we define the orbit [s] Y ⊂ Spin c (Y), similarly. Note that Spin c structures on Y which are cobordant to a fixed Spin c structure on Y n form an affine space over the image of the restriction map H 2 (W n (K)) → H 2 (Y).
Theorem 2.4. Let Y be a closed, oriented three-manifold, and K ⊂ Y be a rationally null-homologous knot endowed with a framing λ. There is a map Q such that for a positive integer n and t ∈ Spin c (Y n ), there is a long exact sequence (12) relating the Floer homologies of Y, Y 0 , and Y n .
Here, Theorem 2.4 is a generalization of [OS04b, Theorem 9.19] with an almost identical proof. In [OS04b, Theorem 9.19], the knot is assumed to be null-homologous. The proof starts with constructing a multi Heegaard diagram (Σ, α, β, γ, δ, w), with Σ a surface of genus g, where (Σ, α, β, w), (Σ, α, γ, w), and (Σ, α, δ, w) describe Y, Y 0 , and Y n , respectively. Then appropriate maps are defined to get the desired exact sequence. In our case there will be [λ]-orbits of Spin c structures in the statement, since the knot is not null-homologous. Also, the proof of [OS04b, Theorem 9.19] needs to be modified when we define the map Q. Let X be the four-manifold cobordism specified by (Σ, α, γ, δ, w). For a given s ∈ Spin c (Y 0 ), there is a unique orbit [t s ] Yn such that there is a Spin c structure s α,γ,δ ∈ Spin c (X) with s α,γ,δ | Y 0 = s and s α,γ,δ | Yn ∈ [t s ] Yn . In other words, fixing s ∈ Spin c (Y 0 ), there is a t ∈ Spin c (Y n ) with the property that there is a unique s α,γ,δ ∈ Spin c (X) that extends t, some unique Spin c structure on the manifold specified by (Σ, γ, δ, w), and any element of the orbit [t s ] Yn . This describes the map Q in the theorem.
In what follows, we will define F , the map relating Y n and Y . We will skip the definition of the other two maps in the exact sequence, and instead refer the reader to [Ceb12,Theorem 3.3.3] and [OS04b, Theorem 9.19].
Heegaard Floer homology is functorial with respect to cobordisms. Indeed, if W is a smooth, connected, oriented cobordism with ∂W = −Y 1 ∪ Y 2 which is equipped with a Spin c structure s with restriction t i = s| Y i for i = 1, 2, then there is an induced chain map f + W,s : CF + (Y 1 , t 1 ) → CF + (Y 2 , t 2 ).
The construction of f + W,s uses some auxiliary data, like a Heegaard triple and an almost complex structure on Sym g (Σ), but the chain homotopy type of f + W,s is an invariant of the pair (W, s). If t 1 , t 2 have torsion first Chern classes, f + W,s is homogeneous of degree (c 1 (s) 2 − 2χ(W) − 3σ(W))/4, where χ and σ denote the Euler characteristic and the signature of the four-manifold W, respectively.
The map F in (12) is induced by a sum of the cobordism maps f + W n (K),x over the relevant Spin c structures x; see Equation (14) below.
2.4. The evaluation of the first Chern class. A key step in the proofs of Theorems 1.5, 1.6 and Proposition 1.7 is the evaluation of the first Chern class of a Spin c structure on a second homology class. Such an evaluation is often not that straightforward to compute; however, in certain cases it is fairly well understood. Let K be an oriented rationally null-homologous knot in a closed three-manifold Y, endowed with a framing λ and a rational Seifert surface F. Let also p be the order of [K] ∈ H 1 (Y). We start by stating a lemma that studies the first Chern class of a relative Spin c structure with either the lowest or the highest Alexander grading, evaluated on the homology class [F, ∂F ]. Recall that B Y,K is the set of all relative Spin c structures for which the knot Floer homology is nonzero.
Lemma 2.5. [Ni14, Proposition 6.4] Let K be an oriented rationally null-homologous knot in a closed three-manifold Y. Let also F be a minimal genus rational Seifert surface for K. Suppose that K, as an element of H 1 (Y), has order p. Then the extremal values of ⟨c 1 (ξ), [F, ∂F ]⟩ over ξ ∈ B Y,K are ±χ(F).
Proof. This is a direct consequence of [Ni09, Theorem 1.1]. This, together with Equation (6), gives the result.
The next lemma computes the evaluation of the first Chern class of a specific Spin c structure on the two-handle cobordism W n (K), for some positive integer n, on the capped off Seifert surface. This will be of use in the proof of Theorem 1.5.
Lemma 2.6. Let Y be a rational homology sphere, K ⊂ Y be a knot of order p in H 1 (Y), and F be a rational Seifert surface for K such that [F, ∂F ] represents the generator of H 2 (Y, K). Suppose that the null slope of K is a framing. Let F̂ ⊂ W n (K), for some positive integer n, be the closed surface obtained by capping off ∂F with disks. Let ξ ∈ Spin c (Y, K) be a relative Spin c structure, and x ∈ Spin c (W n (K)) be a Spin c structure with E Y,n,K (x) = ξ. Then ⟨c 1 (x), [ F̂ ]⟩ = ⟨c 1 (ξ), [F, ∂F ]⟩ − (n + 1)p.
Proof. Let H ⊂ W n (K) be the two-handle attached to Y × [0, 1]. The natural restriction map ε : H 2 (W n (K)) → H 2 (Y, K) is compatible with E Y,n,K ; that is, E Y,n,K covers ε as a torsor map. Fix any ξ 0 ∈ Spin c (Y, K), let x 0 = E −1 (ξ 0 ), and let C = ⟨c 1 (x 0 ), [ F̂ ]⟩ − ⟨c 1 (ξ 0 ), [F, ∂F ]⟩, so that
⟨c 1 (x 0 ), [ F̂ ]⟩ = ⟨c 1 (ξ 0 ), [F, ∂F ]⟩ + C. (15)
For a = x − x 0 ∈ H 2 (W n (K)), one has ε(a) = ξ − ξ 0 , and hence
⟨c 1 (x), [ F̂ ]⟩ = ⟨c 1 (ξ), [F, ∂F ]⟩ + C. (16)
Since H 2 (W n (K); Q) ∼= Q, the square of any a ∈ H 2 (W n (K)) determines and is determined by |⟨a, [ F̂ ]⟩|. So, using (13), we conclude that the degree of h η 0 is equal to the degree of h ξ 0 .
On the other hand, it is well known that h η 0 is chain homotopy equivalent to v ξ 0 after interchanging the roles of i, j. (17)
Note that the equality of the c 1 2 terms determines the pairing only up to sign; the plus sign would imply that P D[S] = 0, a contradiction. The orientations of both ∂F and ∂S are coherent with respect to the orientation of K. Therefore, using that the null slope of K is a framing, F̂ is obtained from F by gluing p copies of −S along the boundary components. Since the framing of the two-handle in W n is −n, and S is the core of the two-handle attached to Y in W n , (17) implies that ⟨c 1 (x 0 ), [ F̂ ]⟩ = −np. Now, using (15), we get that C = −(n + 1)p. This, together with (16), gives the result.
Corollary 2.7. Using the assumptions of Lemma 2.6, let ξ 1 , ξ 2 ∈ Spin c (Y, K) be relative Spin c structures satisfying ⟨c 1 (ξ 1 ), [F, ∂F ]⟩ = ⟨c 1 (ξ 2 ), [F, ∂F ]⟩, and let x i ∈ Spin c (W n (K)) with E Y,n,K (x i ) = ξ i . Then x 1 − x 2 is torsion.
Proof. By Lemma 2.6, ⟨c 1 (x 1 ), [ F̂ ]⟩ = ⟨c 1 (x 2 ), [ F̂ ]⟩. Since H 2 (W n (K); Q) ∼= Q, it follows that x 1 − x 2 is torsion.
2.5. Heegaard Floer contact invariant associated to fibered knots. This subsection is devoted to defining the Heegaard Floer contact invariant associated to a fibered knot K in a closed three-manifold Y. We choose F = Z/2Z as the coefficient ring for the Heegaard Floer homology, to avoid any sign ambiguities. We do not review many of the concepts and definitions, but instead refer the reader to [Etn06] for a review of contact geometry, and to [OS05a] for the Heegaard Floer contact invariant in the case of fibered null-homologous knots. See [HP13] for more details. A fibered knot K ⊂ Y induces a rational open book decomposition and, therefore, a contact structure ξ K [BEVHM12]. Hedden and Plamenevskaya, in [HP13], studied the contact structure ξ K in terms of the knot Floer homology of K. (See also [OS05a].) More precisely, the "bottommost" filtered subcomplex in the filtration of CF (−Y) induced by K has homology F; that is, H * (F(bottom)) ∼= F.
3. Fibered, Floer simple knots induce tight contact structures
In this section we give a proof of Proposition 1.7. Recall that the function y(h) in the statement of Proposition 1.7 is defined in terms of the affine function H : Spin c (Y, K) → H 2 (Y, K; Q), where h ∈ H 2 (Y, K; Q). Note that y is a rational-valued function.
Proof of Proposition 1.7. In order to show that the contact structure ξ K is tight, we will show that the Heegaard Floer contact invariant c(ξ K ) is non-zero [HP13]. By [Ras03, Lemma 4.5] and its proof, there exists a unique filtered chain complex C such that C is filtered chain homotopy equivalent to CF K(−Y, K), and C ∼= HF K(−Y, K) as an abelian group. Here we use F = Z/2Z coefficients for the Heegaard Floer homology groups. Since K is Floer simple, the differential on C is zero. Consequently, the inclusion map ι : F(bottom) → CF (−Y) induces, on the level of homology, the inclusion of a nontrivial subgroup of C into C. Thus the contact invariant is non-zero. In particular, ξ K is tight.
Without loss of generality we may assume that the rational Seifert surface F is of minimal genus. Since K is Floer simple, there exists s for which the claimed identity holds, where the last two equalities in its derivation follow from Lemma 2.5 and [Ni09, Theorem 1.1], respectively. This completes the proof.
We point out that to prove the second statement of Proposition 1.7, we do not use the fact that the rational Seifert surface is of minimal genus.
Remark 3.1. Hedden in [Hed10, Proposition 2.1] shows that for a fibered knot K in S 3 , statements (1) and (2) of Theorem 1.9 are equivalent, and both are equivalent to ξ K being tight.
One key tool used in his proof is that there is a unique tight contact structure ξ std on S 3 . Moreover, ξ std is detected by the contact invariant, that is, c(ξ std ) ≠ 0. This follows from the fact that the invariant associated to (S 3 , ξ std ) is equal to the generator of HF (S 3 ) ∼= Z. For L-spaces, there could be multiple tight contact structures, and some of them might not be detected by the contact invariant. However, for lens spaces it is known that all the tight contact structures are distinguished by the Heegaard Floer contact invariant. See [GLS07, p.3] and [LM97]. In summary, for a fibered, Floer simple knot K in a lens space Y, endowed with a minimal genus Seifert surface F, the following equivalent statements hold: (1) ξ K is tight, (2) c(ξ K ) ≠ 0, and (3) the τ equality of Theorem 1.9(2) holds. Remark 3.2. In [BBL16, Lemma 1.18], it is proved that for a knot L ⊂ S 1 × S 2 , the exterior fibers over S 1 if and only if L is isotopic to a spherical braid. Therefore, Theorem 1.1 states that if a knot L ⊂ S 1 × S 2 admits an L-space surgery, then its exterior fibers over S 1 . Equivalently, there is a fibration of S 1 × S 2 \ ν • (L) where the boundary of the fibers consists of the meridians of L. It can be proved that the contact structure compatible with the fibration is overtwisted unless the braid index of L, when L is viewed as a spherical braid in S 1 × S 2 , is one. Lacking an application of this result to the purpose of the paper, we will not present a proof here; however, it would be interesting to investigate whether or not such contact structures can be classified (e.g. via the Hopf invariant [Eli89]).
4. Knots in S 1 × S 2 with L-space surgeries
This section is devoted to the proofs of Theorem 1.1 and Proposition 1.2. Let L and U be knots in S 3 such that U is the unknot, and the linking number between L and U is p > 0. Suppose that some surgery on the link L ∪ U results in an L-space Y, where the surgery slope on U is zero. Let K ⊂ Y be the dual knot of L (i.e. K is the core of the solid torus attached to S 3 0 (U) \ ν • (L)). Let µ be the meridian of K. Let also {µ L , λ L } and {µ U , λ U } be the meridian-longitude coordinates of L and U in S 3 , respectively. Set α = µ L .
If M is a rational homology S 1 × D 2 (e.g. the complement of a knot in a rational homology sphere), we say that M is semi-primitive if the torsion subgroup of H 1 (M ) is contained in the image of ι : H 1 (∂M ) → H 1 (M ), where ι is the map induced by inclusion.
Proof of Theorem 1.1. We first show that M = Y \ ν • (K) is semi-primitive. It is well known that H 1 (S 3 \ (L ∪ U)) is freely generated by µ L , µ U . Moreover, we have the equalities λ L = pµ U and λ U = pµ L . Since M is obtained from S 3 \ ν • (L ∪ U) by Dehn filling on ∂ν(U) with slope λ U , we have H 1 (M) = ⟨µ L , µ U | pµ L = 0⟩. Hence the torsion subgroup of H 1 (M) is generated by µ L , which is contained in the image of ι. This shows that M is semi-primitive.
Since L ⊂ S 3 0 (U) admits an L-space surgery, [RR17, Proposition 7.8] implies that M is a generalized solid torus in the sense of [RR17, Definition 7.2]. Now it follows from [RR17, Corollary 7.12] that M fibers over S 1 .
We now turn to proving that K is a Floer simple knot in Y . We recall from Section 1 that K ⊂ Y is called Floer simple if rk HF K(Y, K) = rk HF (Y ).
Let Tor M ⊂ H 1 (M) be the torsion subgroup. As in [RR17] we take the map φ : H 1 (M) → Z given by the projection from H 1 (M) to H 1 (M)/Tor M ∼= Z, where the isomorphism is chosen so that φ(µ U ) > 0. Combining [RR17, Lemma 3.2 and Corollary 3.4], we get: Proposition 4.1. Let K be a knot in an L-space Y that admits a non-trivial L-space surgery. If φ(µ) > ||M ||, where ||M || is the Thurston norm of a generator of H 2 (M, ∂M), then K is Floer simple in Y.
Proof of Proposition 1.2. Using [RR17, Proposition 7.8], we conclude that any rational homology sphere obtained by surgery on K is an L-space. Hence, in order to apply Proposition 4.1 to the knot K, we only need to check that φ(µ) > ||M ||. Let F be a minimal genus rational Seifert surface for K in Y. By Theorem 1.1, F is a fiber of a fibration of Y \ ν • (K) over S 1 . Since the α-surgery on K yields S 1 × S 2 , F must be a punctured two-sphere. (We remind the reader that α = µ L .) The number of punctures must be p, since L and U link each other p times. Therefore, χ(F) = 2 − p. Consequently, ||M || = −χ(F) = p − 2. It remains to compute φ(µ). Since µ ≠ µ L , we must have µ = nµ L + mλ L for some m ≠ 0. As shown in the proof of Theorem 1.1, λ L = pµ U and H 1 (M) = ⟨µ L , µ U | pµ L = 0⟩, hence φ(µ) = φ(nµ L + mλ L ) = φ(mλ L ) = |m|p.
5. Knots in L-spaces with null surgery to fibered manifolds
The focus of this section is on proving Theorems 1.5 and 1.6. Let K be a knot of order p in a rational homology sphere Y, endowed with a framing λ. As usual, let F be a minimal genus rational Seifert surface for K. Note that p is the intersection number of ∂F with µ. We define g(K) = −χ(F)/(2p) + 1/2 to be the normalized genus of K. When K is null-homologous, that is, when p = 1, this descends to the standard definition of the three-genus of a null-homologous knot in Y. The following fact about the connected sum of knots is elementary.
Lemma 5.1. Let K i be a knot in a rational homology sphere Z i , i = 1, 2. Then
g(K 1 #K 2 ) = g(K 1 ) + g(K 2 ). (19)
Moreover, K 1 #K 2 is fibered if and only if both K 1 and K 2 are fibered.
Proof. Let Z = Z 1 #Z 2 , and S ⊂ Z be a sphere which splits Z into a punctured Z 1 and a punctured Z 2 . We may assume S intersects K 1 #K 2 exactly twice. Set M Z = Z \ ν • (K 1 #K 2 ) and A = S ∩ M Z , which is an annulus, and let p i be the order of [K i ] ∈ H 1 (Z i ). The order of [K 1 #K 2 ] will be p = lcm(p 1 , p 2 ), the least common multiple of p 1 and p 2 .
Take F i to be a minimal genus rational Seifert surface for K i . We can isotope F i so that F i ∩ A consists of p i essential arcs in A. We may assume that (p 2 F 1 ) ∩ A = (p 1 F 2 ) ∩ A, since each of p 1 F 2 and p 2 F 1 consists of p 1 p 2 essential arcs in A. Here p 2 F 1 denotes the union of p 2 parallel copies of F 1 , and similarly for p 1 F 2 . Thus F = (p 2 F 1 ) ∪ (p 1 F 2 ) is a rational Seifert surface for K 1 #K 2 . We get g(K 1 #K 2 ) ≤ g(K 1 ) + g(K 2 ). On the other hand, let G be a minimal genus rational Seifert surface for K 1 #K 2 . We may assume that G is transverse to A. We may further assume G ∩ A consists of p essential arcs in A; otherwise we can compress G using the disk bounded by a circle in A and replace G with a new rational Seifert surface whose genus is smaller than or equal to the genus of G.
Let G i = G ∩ M Z i . Each surface G i is a rational Seifert surface for K i . It follows that g(K 1 #K 2 ) ≥ g(K 1 ) + g(K 2 ), which proves (19). A careful look at the above argument will prove the second statement in the lemma. Suppose that both K 1 and K 2 are fibered. Let φ i : M Z i → S 1 be a fibration with fiber surface F i . The map φ 1 ^{p 2 } : M Z 1 → S 1 is a fibration with fiber surface p 2 F 1 . Similarly, φ 2 ^{p 1 } is a fibration of M Z 2 with fiber surface p 1 F 2 . We may assume φ 1 ^{p 2 }| A = φ 2 ^{p 1 }| A . Thus φ 1 ^{p 2 } ∪ φ 2 ^{p 1 } : M Z 1 ∪ M Z 2 → S 1 defines a fibration of M Z over S 1 . For the converse, suppose that K 1 #K 2 is fibered in Z. Let φ : M Z → S 1 be a fibration with fiber surface G. Since G ∩ A consists of essential arcs, we may assume φ| A is a fibration. Therefore, φ| M Z i is a fibration for i = 1, 2.
Recall that the null slope of K is the unique isotopy class of the curve α in ∂M that generates the kernel of the map H 1 (∂M; Q) → H 1 (M; Q) induced by the inclusion map of ∂M into M. Note that the class of α, as an element of H 1 (∂M), can be written as α = q′µ + p′λ, for some integers q′ and p′ > 0. Note also that p, the order of [K] in H 1 (Y), is a multiple of p′.
A Morse surgery on K is filling M along a curve m · µ + λ, for some integer m. It is a well-known fact that Dehn surgery on K with coefficient q′/p′ can be realized as Morse surgery with coefficient m on the knot K#O p′/r inside Y #L(p′, r), where q′ = mp′ − r with 0 ≤ r < p′. Here O p′/r is the image of K in L(p′, r) when K is the unknot, Y = S 3 , and the lens space L(p′, r) is obtained by performing p′/r surgery on the other component of the link in Figure 1. This follows from the Slam-Dunk move. See [CG88, p. 501]. Let α′ be the null slope of K#O p′/r ; then α′ is the framing with slope m. We point out that in order to make sense of the surgery coefficient in our setting we first need to choose a longitude λ for K. See Figure 1.
Corollary 5.2. Using the notation of this section, (a) If K#O p′/r ⊂ Y #L(p′, r) is fibered, then K is fibered in Y. (b) Let F be a minimal genus rational Seifert surface for K, and F̂ ⊂ Y α be the closed surface obtained by capping off ∂F with disks. Then there exists a minimal genus rational Seifert surface F′ for K#O p′/r such that the closed surface obtained from F′ by capping off its boundary with disks agrees with F̂.
Proof. The statement (a) follows directly from Lemma 5.1. Thus, we only need to prove (b). Let l = p/p′ be the number of components of ∂F. Similar to the proof of Lemma 5.1, a Thurston norm minimizing rational Seifert surface for K′ = K#O p′/r may be obtained by gluing F and l copies of the rational Seifert surface of O p′/r along p arcs. We call this rational Seifert surface F′. The order of [K′] in H 1 (Y #L(p′, r)) is p, that is, equal to the order of [K] ∈ H 1 (Y). Also, [µ′] · [∂F′] = p. Combining these two facts, we see that F′ is a minimal genus rational Seifert surface for K′. By the discussion in the paragraph before this corollary, the null slope of K′ is a framing. See Figure 1. Hence, ∂F′ has exactly p components. This observation, together with Equations (20) and (21), gives the result.
The main idea that will be used to prove Theorem 1.5 is to compare the exact triangle of Theorem 2.4 with another exact sequence that differs from (12) in only one term. The rest of the effort will be devoted to proving that those terms are also isomorphic. In Section 2 we observed that for a relative Spin c structure ξ, C ξ = CF K ∞ (Y, K, ξ) is a chain complex. Moreover, every relative Spin c structure has an Alexander grading. We have a short exact sequence (22), where ξ ∈ Spin c (Y, K) is a relative Spin c structure with the least Alexander grading (see Equation (7)). We point out that h + ξ,1 is just the horizontal projection. Since j ≥ 1 (instead of j ≥ 0), we use a different notation for the horizontal projection from that of (9).
The goal of the next two lemmas is to replace the complexes in (22) with three other complexes so that, after taking homology, two out of three of the replaced terms will be the summands of the corresponding terms of (12).
Lemma 5.3. In the short exact sequence of (22), we have the following identification of the first term, where ξ is the Spin c structure with the least Alexander grading.
Proof. We establish an inequality which in turn forces i = 0 and j = 0. The last inequality, again, follows from Lemma 2.5.
Recall the natural map G Y,K : Spin c (Y, K) → Spin c (Y ) which associates an "absolute" Spin c structure to every relative one in the knot exterior. Recall also Ξ : Spin c (Y n ) → Spin c (Y, K), the map in Theorem 2.3.
Lemma 5.4. In the short exact sequence of Equation (22), we have the following identifications, where t ∈ Spin c (Y n ) is a Spin c structure with Ξ(t) = ξ + P D[µ], and n ≫ 0.
Proof. Using [OS11, Proposition 3.2], we get an identification of the relevant complexes as (Z ⊕ Z)-filtered chain complexes. This, together with Theorem 2.3, gives the first isomorphism. For the second isomorphism, we have a chain of isomorphisms in which the last one follows from the identification in (10) together with Equation (3).
Similarly to (11), we define the corresponding subcomplexes, where A denotes the Alexander grading defined in (7).
Proposition 5.5. Let K be a knot in an L-space Y, and F be a minimal genus rational Seifert surface for K. Suppose that the null slope α of K is a framing λ, and g = g(F) > 1. Let ξ ∈ Spin c (Y, K) be a relative Spin c structure with the least Alexander grading. Let also t be a Spin c structure on Y n with Ξ(t) = ξ + P D[µ]. Then we have an isomorphism between the corresponding Floer homology groups.
We will first prove a technical lemma that will be useful in proving the proposition. The assumptions are the same as those of Proposition 5.5. Recall from Subsection 2.3 that [t] Yn ⊂ Spin c (Y n ) is the [λ]-orbit that contains t, and [s t ] Y ⊂ Spin c (Y) is the unique [λ]-orbit that is cobordant to [t] Yn in W n . As in Subsection 2.3, S denotes the core of the two-handle (attached to Y) in W n , which is a surface with boundary.
Lemma 5.6. The maps ϕ and Ξ restrict to bijections between the corresponding [λ]-orbits of Spin c structures.
Proof. Viewing λ as a curve in M = Y \ ν • (K), its homology class represents an element in H 1 (M), H 1 (Y) and H 1 (Y n ). We will show that [λ] has order p in each of these three homology groups. Clearly [λ] has order p as an element of H 1 (Y), since [λ] = [K] ∈ H 1 (Y). Having assumed that the framing λ is the same as the null slope of K, the order of [λ], when viewed as an element of H 1 (M), is also p. Suppose that the order of [λ] in H 1 (Y n ) is r. Since Y n is obtained by gluing a solid torus to M along n · µ + λ, we get that r[λ] = s(n[µ] + [λ]) ∈ H 1 (M) for some integer s. Since [λ] ∈ H 1 (M) is a torsion element while [µ] is non-torsion, we must have s = 0. Hence r = p.
By the definition of ϕ, ϕ(t′) is cobordant to t′ in W n , so ϕ(t′) ∈ [s t ] Y . As for Ξ, note that Ξ(t) = ξ + P D[µ]. Every t′ ∈ [t] Yn has the form t + kP D[λ] for some integer k; then, using Equation (5), Ξ(t′) = ξ + P D[µ] + kP D[λ]. All three sets of Spin c structures in the lemma are affine spaces over the cyclic group generated by P D[λ], which is isomorphic to Z/pZ. Moreover, both maps ϕ and Ξ are equivariant with respect to the action of P D[λ]. Our conclusion then follows.
Proof of Proposition 5.5. We recall that F in the long exact sequence (12) is induced by Equation (14), which can be rewritten as a sum of cobordism maps f + W n (K),x(t′) over t′ ∈ [t] Yn . Here x = x(t′) ∈ Spin c (W n (K)) is as in Theorem 2.3. Fix t′ ∈ [t] Yn , and let ξ′ = Ξ(t′). Under the identifications in Lemma 5.4, the maps v + ξ′ and h + ξ′ correspond to Spin c structures x and x + P D[S] on the two-handle cobordism W n , respectively. Note that the class [S] represents an element in H 2 (W n , ∂W n ).
Using the degree shift formula (13), we see that the difference of the degrees of f + W n (K),x(t′) and f + W n (K),x(t′)+kP D[S] can be computed by evaluating against [W n , ∂W n ], the fundamental class of W n . Since H 2 (W n (K); Q) ∼= Q, there exists a rational number r with the property that [S] = r[ F̂ ] in H 2 (W n (K), ∂W n (K); Q), where F̂ is the capped off rational Seifert surface in W n and [ F̂ ] generates H 2 (W n (K); Q). Assume that n ≫ 0. Since W n (K) is a negative definite four-manifold, we see that −k 2 P D[S] 2 > 0. Also, 1 − 1/k > 0 unless k = 1. So the right hand side of (28) is positive provided that k ≠ 1. It is negative when k = 1 and g > 1. That is, when g > 1, v + ξ′ has degree lower than that of h + ξ′ , but higher than any of the other terms in (24). In other words, the map in (24) has the form h + ξ′ + lower order terms.
Since Y is an L-space, h + ξ′ induces a surjective map in homology. Lemma 5.6 then implies the desired surjectivity statement. Using the exact sequence of (22), we get a short exact sequence in which the direct sum in the second map is taken over all ξ′ ∈ [ξ] Y . The proposition then follows from (23).
Proof of Theorem 1.5. We first deal with the case that α is a framing. Let g be the genus of a minimal genus rational Seifert surface for K. If g > 1, the assumption that Y α fibers over the circle, together with [OS04d, Theorem 5.2], gives that the Floer homology of Y α in the extremal Spin c structure is isomorphic to Z. Therefore, Proposition 5.5 implies that ⊕_{ξ ∈ Spin c (Y,K), ⟨c 1 (ξ),[F,∂F ]⟩ = χ(F)} HF K(Y, K, ξ) ∼= Z.
Using [NW14, Theorem 2.3], K is fibered. For the case g = 1, we need to use the twisted version of the exact triangle of (12). All the steps are analogous to the proof for the case g > 1. See [AN09] where the exact triangle is obtained for a null-homologous knot. Finally, using Theorem 1.1 for the case g = 0, the result follows.
If α is not a framing, by the paragraph before Corollary 5.2, Y α can be obtained by performing a Morse surgery on K#O p′/r in the L-space Y #L(p′, r). The previous case implies that K#O p′/r is fibered. Hence, using Corollary 5.2, K is fibered.
Proof of Theorem 1.6. Similar to the proof of Theorem 1.5, we first deal with the case that α is a framing. Let F be a Thurston norm minimizing rational Seifert surface for K. Without loss of generality, we may assume F is of minimal genus. If g(F) ≤ 1, F̂ is a sphere or a torus, hence must be Thurston norm minimizing. If g(F) > 1, Lemma 2.5 implies that there exists ξ ∈ Spin c (Y, K) such that HF K(Y, K, ξ) ≠ 0 and ⟨c 1 (ξ), [F, ∂F ]⟩ = χ(F).
Proposition 5.5 implies that HF (Y α , s) ≠ 0 for the corresponding Spin c structure s. Hence F̂ is Thurston norm minimizing by the adjunction inequality [OS04b, Theorem 7.1].
If α is not a framing, as before, Y α can be obtained by performing a Morse surgery on K#O p′/r in Y #L(p′, r). Let F′ be the minimal genus rational Seifert surface for K#O p′/r as constructed in Corollary 5.2. Let also F̂′ be its extension to the m-surgery on K#O p′/r in Y #L(p′, r). From the previous case, we know that F̂′ is Thurston norm minimizing. Hence, using part (b) of Corollary 5.2, F̂ is also Thurston norm minimizing.
6. Directions for future research
6.1. Floer simple knots in L-spaces and fiberedness. Let K ⊂ Y be a knot in an L-space Y that admits some S 1 × S 2 surgery. We showed in Theorem 1.1 that the complement of K in Y fibers over the circle. Using [RR17, Proposition 7.8], we conclude that every Morse surgery on K (except for the one that results in S 1 × S 2 ) will result in an L-space. As pointed out in the introduction, if Y = S 3 , then any knot with an L-space surgery is fibered. For an arbitrary L-space Y, however, this is not always the case. Lidman and Watson in [LW14] constructed examples of non-fibered knots in L-spaces with L-space surgeries. It is known that if a Floer simple knot K in an L-space is primitive, and the knot complement is irreducible, then K is fibered [BBCW12, Theorem 6.5]. Recall that a knot K ⊂ Y is primitive if [K] ∈ H 1 (Y) is a generator. Let [K] ⊥ denote the orthogonal complement of the homology class [K] ∈ H 1 (Y) with respect to the linking form of Y. With the notation of this section in place, we can reformulate that theorem as follows: Theorem 6.1. Let Y be an L-space, and K ⊂ Y be a Floer simple knot with irreducible complement. If [K] ⊥ = 0, then K is fibered.
Note that we are replacing the primitiveness assumption by a criterion regarding the linking form of Y . We briefly review the classical notion of linking forms here. For a more detailed discussion, see [CM10], for instance.
Definition 6.2. The linking form of a closed three-manifold Y is the non-degenerate form lk Y : Tor Y × Tor Y → Q/Z on the torsion subgroup Tor Y of H 1 (Y ) defined by lk Y (a, b) = α · τ /n, where α is any 1-cycle representing a and τ is any 2-chain bounded by a positive integer multiple nβ of a 1-cycle β representing b.
If Y is surgery on a framed link L, then lk Y is computed from the linking matrix A of L, with framings on the diagonal, as follows. First use a change of basis to transform A into a block sum O ⊕ A′. Here, O is a zero matrix and A′ is nonsingular. This corresponds to a sequence of handle slides in the Kirby diagram [GS99], transforming L into L O ∪ L A′ . Now, following [Sei36], the linking form lk Y is presented by the matrix A′ −1 with respect to the generators of Tor Y given by the classes of the meridians of the components of L A′ .
To see that Theorem 6.1 is a reformulation of [BBCW12, Theorem 6.5], we start with the following lemma about [K] ⊥ : Lemma 6.3. Suppose that K is a knot in a rational homology sphere Y and a ∈ [K] ⊥ . Then there exists a knot L′ in the complement of K, such that [L′] = a and L′ bounds a rational Seifert surface which is disjoint from K.
Proof. Let p be the order of [K] in H 1 (Y). There exists a rational Seifert surface F for K so that the intersection number of ∂F with the meridian of K is p. Let L ⊂ Y be a knot representing a. We may assume L is disjoint from K. Since lk Y ([K], a) = 0, the algebraic intersection number of L with F is a multiple of p. Performing connected sums of L with copies of the meridian of K, we can get a new knot L′ disjoint from K, so that L′ still represents a and the algebraic intersection number of L′ with F is zero. Hence any rational Seifert surface G for L′ has algebraic intersection number zero with K. Consequently, by removing the intersection points of G with K by adding tubes to G, we get a rational Seifert surface for L′ that is disjoint from K. Lemma 6.3 yields the following elementary characterization of primitive knots.
Proposition 6.4. Suppose that K is a knot in a rational homology sphere Y, and M = Y \ ν • (K). Then the following conditions are equivalent: (i) K is primitive; (ii) H 1 (M) ∼= Z.
Proof. (i)⇔(ii). By definition, K being primitive is equivalent to the condition that the map ι K : H 1 (K) → H 1 (Y) is surjective. Using the Mayer-Vietoris sequence for the pair (Y, K), we see that the surjectivity of ι K is equivalent to H 1 (Y, K) = 0. We have H 1 (M) ∼= Z ⊕ Tor M . By the Universal Coefficients Theorem, Tor M is isomorphic to H 2 (M), which is (by Poincaré duality) isomorphic to H 1 (Y, K). Hence K being primitive is equivalent to H 1 (M) ∼= Z. Using Proposition 6.5, we see that Conjecture 6.6 could be equivalently stated as a generalization of [BBCW12, Theorem 6.5] where the primitiveness assumption is replaced by the semi-primitiveness of the knot.
6.2. "Positivity" of knots in S 1 × S 2 admitting L-space surgeries. In another direction, it is known that a knot K ⊂ S 3 with some L-space surgery is a strongly quasipositive knot. Let B n denote the braid group on n strands, with generators σ 1 , σ 2 , . . . , σ n−1 . A strongly quasipositive link is a link that can be realized as the closure of a braid word β = Π_{k=1}^{m} σ i k ,j k , where σ i,j is of the form (σ i · · · σ j−2 ) σ j−1 (σ i · · · σ j−2 ) −1 . (32)
There is a weaker notion of positivity, called quasipositivity, where the braid word β is a product of arbitrary conjugates of positive generators in B n (whereas strongly quasipositive knots require these conjugates to be of the special form above). That is, for quasipositive links, (σ i · · · σ j−2 ) in (32) is replaced by an arbitrary braid word. There is a more geometric, yet equivalent, definition of quasipositive links. Every such link is a transverse C-link; that is, it arises as the transverse intersection of S 3 ⊂ C 2 with a complex plane curve f −1 (0) ⊂ C 2 , where f is a non-constant polynomial. Algebraic links of singularities form a proper subfamily of quasipositive links. See, for instance, [BO01, Hed10, Rud83]. For a non-null-homologous knot L ⊂ S 1 × S 2 with fibered exterior, we know that L is isotopic to a spherical braid [BBL16, Lemma 1.18].
Question 6.7. Given a knot L ⊂ S 1 × S 2 that admits an L-space surgery, is there a notion of positivity for L as a spherical braid? | 2018-01-14T22:42:08.000Z | 2016-08-25T00:00:00.000 | {
"year": 2016,
"sha1": "be7fd79e876c8ff0c0bd7c428a47e72f1c49dabb",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://doi.org/10.1090/tran/7510",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "78a401bf36a6d6ed80c994d15cc2068a2c455589",
"s2fieldsofstudy": [
"Computer Science",
"Art"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119398234 | pes2o/s2orc | v3-fos-license | Extending the photon energy coverage of an x-ray self-seeding FEL via the reverse taper enhanced harmonic generation technique
In this paper, a simple method is proposed to extend the photon energy range of a soft x-ray self-seeding free-electron laser (FEL). A normal monochromator is first applied to purify the FEL spectrum and provide a coherent seeding signal. This coherent signal then interacts with the electron beam in the following reverse tapered undulator section to generate strong coherent microbunchings while maintain the good quality of the electron beam. After that, the pre-bunched electron beam is sent into the third undulator section which resonates at a target high harmonic of the seed to amplify the coherent radiation at shorter wavelength. Three dimensional simulations have been performed and the results demonstrate that the photon energy gap between 1.5 keV and 4.5 keV of the self-seeding scheme can be fully covered and 100 GW-level peak power can be achieved by using the proposed technique.
Introduction
The successful operation of free-electron lasers (FELs) has opened a new chapter in the exploration of chemistry, biology, materials science, and many other scientific frontiers, owing to their remarkable characteristics of high brightness, short pulse duration, and full transverse coherence.
Self-amplified spontaneous emission (SASE) [1][2][3] is currently the main operation mode of x-ray FELs and has proved reliable and successful in providing high peak power, ultra-short light pulses with good spatial coherence. Nevertheless, a SASE FEL has a relatively wide bandwidth and poor longitudinal coherence, because the signal initiating the radiation starts from the electron shot noise [4]. It can hardly be used in scientific experiments requiring temporal coherence. Under these circumstances, several seeded FEL schemes with external seed lasers have been developed to improve the longitudinal coherence. The simplest way is directly seeding a FEL with a high-harmonic generation (HHG) source [5]. While the HHG direct-seeding technique has been successfully demonstrated at VUV wavelengths [6], it can hardly generate x-ray radiation due to the limitation of the HHG power in this wavelength range. An alternative way of seeding a FEL at shorter wavelengths is adopting high harmonic generation schemes such as high-gain harmonic generation (HGHG) [7,8] or echo-enabled harmonic generation [9][10][11]. Unfortunately, the limitation of the energy modulation amplitude prevents reaching very short wavelengths in a single-stage setup. More concerns also come from the noise amplification through the harmonic multiplication process, which may heavily spoil the properties of the generated x-ray radiation. It is therefore hard to push the output wavelength to the sub-nanometer region by using external seeding methods.
Instead of seeding with external lasers, the self-seeding technique [12][13][14][15][16] implements a monochromator together with a bypass chicane to obtain monochromatic light from the SASE radiation itself and then amplify it to saturation. The undulator is divided into two parts by the monochromator. The first undulator section is used to generate a normal SASE radiation pulse that is interrupted well before saturation. Then the monochromator is employed to purify the spectrum and provide a coherent seeding signal at short wavelength, while the bypass chicane is used to delay the electron beam and wash out the microbunching formed in the first undulator section. After that, the seeding signal and the electron beam are simultaneously sent into the second undulator section to interact with each other and produce an intense coherent x-ray pulse.
A self-seeding FEL can be classified as soft x-ray self-seeding or hard x-ray self-seeding, depending on the choice of monochromator materials for different photon energy ranges. Generally, the grating-based monochromator [13] can be applied for the photon energy range of 0.7 keV to 1.5 keV, while the diamond-based monochromator [14] is suitable for the photon energy range of 4.5 keV to 10 keV.
Both soft x-ray and hard x-ray self-seeding have been demonstrated at SLAC in recent years [15,16]. To date, self-seeding serves as one of the most reliable configurations to provide high peak power, ultra-short x-ray light pulses with extraordinary coherence, both transversely and longitudinally.
However, there still exists a photon energy gap between 1.5 keV and 4.5 keV that cannot be covered by present self-seeding schemes, due to the lack of suitable materials for the monochromator. Previous studies proposed a scheme to cover this photon energy gap by cascading the self-seeding with HGHG [17]. However, in this scheme, a relatively complex setup with a separated seed amplifier and modulator is required to mitigate the beam quality degradation in the long modulator.
In this paper, we propose a novel method that combines the reverse undulator taper and harmonic generation techniques to extend the photon energy coverage of a self-seeding FEL. The proposed scheme utilizes the baseline configuration of a self-seeding FEL and does not require the installation of any additional hardware in the undulator system. The proposed technique can be easily implemented at existing or planned x-ray FEL facilities to generate x-ray radiation pulses with continuously tunable wavelength from the soft x-ray to the hard x-ray region. This paper is organized as follows: the principles of the reverse taper enhanced harmonic generation technique are introduced in Sec. 2. Using typical soft x-ray self-seeding parameters, three-dimensional simulation results are presented in Sec. 3. Finally, concluding comments are given in Sec. 4.
The schematic layout of the proposed technique is shown in Fig. 1. The undulator system consists of three undulator sections and a bypass chicane. The electron beam from the linac first passes through a short undulator in the self-seeding section to generate a SASE pulse with low output intensity. This radiation pulse is then sent through the grating monochromator to select a monochromatic light pulse which works as the seed laser for the following radiation processes. The bypass chicane is used to adjust the approaching trajectory of the electron beam and compensate the radiation time delay induced by the monochromator. Meanwhile, the microbunching formed in the former SASE undulator is smeared out by the chicane. After that, the electron beam and the selected monochromatic light pulse are sent into the second undulator with reverse taper, which can imprint strong coherent microbunching on the electron beam without significantly degrading the beam quality. Eventually, the pre-bunched electron beam radiates at full power in the third undulator section (harmonic generation section), tuned to the resonance of a target high harmonic of the seed.
Principles
The reverse undulator taper technique was initially proposed for obtaining a high degree of circular polarization in x-ray FELs [18] and has been experimentally demonstrated at the LCLS recently [19]. Different from a normal undulator taper, the magnetic field of the reverse tapered undulator is increased along the undulator segments, producing a fully microbunched electron beam at the fundamental wavelength while efficiently suppressing the powerful linearly polarized background. We will show below that the reverse taper technique, together with a coherent seed, can be used to generate coherent microbunching that contains substantial Fourier components at high harmonics of the seed.
For a FEL in the high-gain linear regime, the bunching factor and the complex amplitude of the harmonic of the energy modulation, as functions of the normalized FEL power P̂, can be simply calculated by Eq. (1) (see the Appendix), where P̂ = P/(ρP b ), P is the FEL power at the undulator length z, ρ is the Pierce parameter [20,21], P b is the electron beam power and Ĉ is the normalized detuning parameter. While Eq. (1) has exactly the same form as Eq. (A13) in Ref. [18], the initial condition is quite different due to the monochromatic seed, which holds the ability to suppress the shot noise and helps to generate coherent microbunching. For a linearly tapered undulator, the normalized detuning parameter is proportional to the taper strength β:
Ĉ = β ẑ, (2)
where ẑ = 4πρz/λ w is the normalized undulator length, λ w is the undulator period, and the taper strength β is determined, through Eq. (3), by the variation of the undulator parameter K along the undulator, with K 0 the initial value of K at the entrance of the undulator. Under the condition of a reverse tapered undulator with a large negative detuning parameter, Ĉ < 0, |Ĉ| ≫ 1, the bunching factor will change only slightly with ẑ according to Ref. [18], but the normalized radiation pulse energy will be suppressed significantly.
According to Eq. (1), the bunching factor is proportional to P̂ and the energy modulation amplitude is proportional to √P̂. For a given value of the bunching factor, the radiation-induced energy spread growth in the reverse tapered undulator will therefore be much smaller than that in a normal undulator. Thus the pre-bunched electron beam can be used again in the following undulator section for the harmonic generation.
To show the possible photon energy coverage of the proposed technique, here we adopt the practical parameters of a soft x-ray facility, as shown in Table 1, to carry out some theoretical estimations. A 5 GeV electron beam with an initial slice energy spread of 500 keV, peak current of 3 kA and normalized slice emittance of 0.6 mm-mrad is adopted to drive a soft x-ray self-seeding FEL with tunable central photon energy between 0.7 keV and 1.5 keV. In order to cover the photon energy range of 1.5-4.5 keV, we only need to tune the resonant wavelength of the third undulator section of the proposed scheme to the 2nd (1.4-3 keV) and 3rd (2.1-4.5 keV) harmonics of the seed. Based on Xie's model [21], the calculated saturation power and photon energy coverage for different harmonics are shown in Fig. 2.
One can find that the output power of the proposed scheme is at the GW level over the photon energy range of 0.7-4.5 keV. The design of the monochromator is similar to that in Ref. [24]. The resolution of the monochromator is chosen to be 10 −4 to select a single spike in the spectrum, and about 3.8% of the x-ray power is preserved through the diffraction and reflection processes in the monochromator. The spectrum and the temporal distribution of the radiation pulses before and after the monochromator are shown in Fig. 3. One can find in Fig. 3 that the normalized spectral bandwidth is reduced from 5 × 10 −3 to about 3 × 10 −5 by the monochromator. The peak power is reduced by about three orders of magnitude, from 250 MW to about 0.18 MW. Both the spectral and temporal profiles become rather smooth after the monochromator. The monochromatic seed and the refreshed electron beam are then directly sent into the subsequent reverse tapered undulator. As mentioned above, the reverse tapered undulator is introduced in our scheme to obtain a well-bunched electron beam with suppressed growth of the beam energy spread.
However, the saturation length in the reverse tapered undulator will increase correspondingly if the taper strength |β| is too large. We need to optimize the taper strength to ensure that the saturation power is sufficiently suppressed while the saturation length does not increase obviously. Fig. 4 shows the simulated saturation power and the relative saturation length variation as functions of β in the second undulator section. It is clearly seen that the saturation power declines significantly when β is smaller than −0.4, while the gain length increases quickly when the reverse taper strength is smaller than −0.6. Thus we can reasonably conclude that the taper strength should be chosen between −0.6 and −0.4 to achieve a suitable saturation length and a suppressed saturation power. In the following simulations, the taper strength is chosen to be −0.4. The evolution of the bunching factor along the reverse tapered undulator can also be theoretically calculated by using Eq. (1). Fig. 7 gives the calculation results with the simulation parameters. One can find that the calculated curve fits quite well with the three-dimensional simulation result.
Fig. 9. Simulation results for the third harmonic generation: radiation peak power (a) and bunching factor (b) as a function of the undulator axis, radiation pulse (c) and corresponding spectrum (d) at saturation.
Finally, the pre-bunched electron beam is sent into the third undulator section, which is resonant at a target high harmonic of the seed, to lase. We carried out simulations for both the second and the third harmonic cases. The simulation results are illustrated in Fig. 8 and Fig. 9.
Conclusions and perspective
In summary, an easy-to-implement method has been proposed to extend the photon energy range of a soft x-ray self-seeding FEL. Theoretical analysis and numerical simulations demonstrate that the reverse tapered undulator can provide a modulated electron beam with strong microbunching and small energy spread. This kind of electron beam can be directly sent into the following undulator for the generation of GW-level coherent radiation pulses at the second and third harmonics of the seed, which can fully cover the photon energy gap between 1.5 keV and 4.5 keV. The output peak power can be further enhanced to 100 GW by using the normal taper technique.
The proposed technique has a relatively simple configuration and can be easily implemented at the present soft x-ray self-seeding FEL facilities around the world, which will make the self-seeding scheme much more reliable and useful for the generation of x-ray radiation pulses continuously tunable from the soft x-ray to the hard x-ray region. The proposed technique can also be applied to a hard x-ray self-seeding FEL to further extend the photon energy coverage to ranges where no suitable monochromator materials are available.
It should be pointed out here that some practical limiting factors that might affect the performance of the proposed scheme, such as the effects of the microbunching instability in the electron beam, are not taken into account in this paper. Further investigations on these topics are ongoing.
APPENDIX
The detailed derivation of the reverse taper theory without an external seed has been carried out in the appendix of Ref. [18]. Following the notations of Ref. [18], here we repeat the derivation process considering the external coherent seed.
The evolution of the electric field of the amplified electromagnetic wave can be described by Eq. (A1), where Ĉ is the detuning parameter. The solution of Eq. (A1) can be expressed as Eq. (A2), where the constant coefficients are fixed by the initial conditions and the exponents are the solutions of the eigenvalue equation (A3). The evolution of the field amplitude and its derivatives with respect to ẑ can be calculated through the transfer matrix M(ẑ|0), whose elements are given in (A4).
The relation between the derivatives Ê′, Ê″, the bunching b(ẑ) and the complex amplitude of the harmonic of the energy modulation can be written as in (A5), where E 0 is the normalizing factor of the field amplitude.
Different from Ref. [18], in the proposed scheme a monochromatic seed exists at the entrance of the reverse tapered undulator, and the initial conditions should be written accordingly.
"year": 2017,
"sha1": "7562565d76e15872ee663e2038bb1b6114fd9975",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.04194",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7562565d76e15872ee663e2038bb1b6114fd9975",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258352638 | pes2o/s2orc | v3-fos-license | Energy Tank-based Control Framework for Satisfying the ISO/TS 15066 Constraint
The technical specification ISO/TS 15066 provides the foundational elements for assessing the safety of collaborative human-robot cells, which are the cornerstone of the modern industrial paradigm. The standard implementation of the ISO/TS 15066 procedure, however, often results in conservative motions of the robot, with consequently low performance of the cell. In this paper, we propose an energy tank-based approach that allows to directly satisfy the energetic bounds imposed by the ISO/TS 15066, thus avoiding the introduction of conservative modeling and assumptions. The proposed approach has been successfully validated in simulation.
INTRODUCTION
Collaborative robotics is a fast-growing field in the industrial setting, as it allows direct interaction between robot and operator without traditional safeguarding. As a consequence of the introduction of human-robot collaboration technologies, great importance has been attributed to robot safety standards, which have been updated to address new co-working scenarios. In particular, the international safety standards ISO 10218-1:2011 and ISO 10218-2:2011 have identified specific applications and criteria where collaborative operations can occur, and they have been complemented by the introduction of the technical specification ISO/TS 15066:2016. The safety regulations identify four collaborative operations, which can be adopted depending on the requirements of the application concerned and the design of the robot system: Safety-rated Monitored Stop (SMS), Speed and Separation Monitoring (SSM), Power and Force Limiting (PFL) and Hand Guiding (HG). These modalities can be applied in combination in order to achieve higher levels of productivity, while still preserving the safety of the human operators (Pupa et al., 2021). However, in industrial practice, PFL is typically adopted for tasks in which intentional or incidental contact between the human and the robot can occur. Safety is guaranteed by limiting the power and the force during the contact to values at which the risk of injury is not expected, and by imposing speed limits that guarantee safe human-robot contacts. Current approaches for implementing the ISO/TS 15066 guidelines lead to a conservative behavior (e.g. low velocity) of the robot and, consequently, to poor performance of the collaborative cell. In Ferraguti et al. (2020), Control Barrier Functions (CBFs) (Ferraguti et al., 2022) have been exploited for enhancing the performance of a robot operating in a collaborative cell while satisfying the PFL velocity limit. However, the definition of the safe set is very conservative, since it is a collection of ellipsoids, and the overall CBF is non-smooth due to the abrupt activation and deactivation of the ellipsoids. Moreover, as stated in the regulation itself, the velocity limit computed according to the ISO/TS 15066 is based on conservative assumptions, and this results in robot motions that are still very slow. Indeed, the original formulation of the PFL constraint is a limitation on the exchanged energy, which is then turned into a velocity constraint under strong assumptions (i.e., the assumption of a fully inelastic contact and a two-body model after the impact). Recently, an energy-based approach has been proposed in Lachner et al. (2021) to reduce the safety-related parameters to be determined in physical Human-Robot Interaction (pHRI) applications to a single energy value. However, the proposed architecture relies heavily on the precise knowledge of the dynamic model of the manipulator and, in particular, of its inertia matrix, which is not always available or known with sufficient accuracy.
To overcome these issues, in this paper we leverage energy tanks and passivity-based control in order to directly address the PFL energetic constraint. To this aim, we exploit the modulated energy-tanks presented in recent works Benzi and Secchi (2021); Benzi et al. (2022). These are energy storing elements which have proven effective in ensuring safety layers for collaborative applications, by passively implementing any desired dynamics. By simply controlling the power flow in the system, these techniques can, in fact, bound the energy of the interconnected system without requiring any knowledge of its model, while at the same time providing formal guarantees of robust stability.
Thus, the contribution of this paper is an energy tank-based control architecture designed to comply directly with the time-varying energetic bounds of the ISO/TS 15066:2016. This allows us to overcome the modeling assumptions underlying the conservative velocity limitation. At the same time, owing to the passivity-based nature of the formulation, our approach does not require precise knowledge of the dynamic model of the robot, improving upon this limitation of Lachner et al. (2021).

2. THE ISO/TS 15066 POWER AND FORCE LIMITING

The PFL modality of the ISO/TS 15066:2016 bounds the energy that can be transferred to a human body region during a contact:

$$E_{max} = \frac{f_{max}^2}{2k} \qquad (1)$$

where $f_{max} \in \mathbb{R}$ is the maximum contact force for the body region and $k \in \mathbb{R}$ is the related effective spring constant.
The energy transfer limit established in (1) is then used to compute the maximum velocity at which the robot can move into the collaborative workspace, while ensuring safety in case of collision. Let us define v rel ∈ R as the relative velocity between the robot and the human along the minimum distance direction between the threatened body part and the closest link of the robot.
In order to derive a relationship between the relative velocity and the contact force, the ISO/TS 15066 assumes a fully inelastic contact during the collision, in which the total kinetic energy of the two-body system is transferred to the human body part. The resulting balance is:

$$E = \frac{1}{2}\mu v_{rel}^2 \qquad (2)$$

where $E \in \mathbb{R}$ is the energy transferred and $\mu \in \mathbb{R}$ is the reduced mass of the two-body system, computed as:

$$\mu = \left(\frac{1}{m_h} + \frac{1}{m_r}\right)^{-1} \qquad (3)$$

in which $m_h \in \mathbb{R}$ is the mass of the human body area and $m_r \in \mathbb{R}$ the mass of the robot, the latter computed as:

$$m_r = \frac{M}{2} + m_L \qquad (4)$$

being $M \in \mathbb{R}$ the mass of the moving parts of the robot, while $m_L \in \mathbb{R}$ is the total payload. Both $m_h$ and $k$ are tabulated in the Annex A of the ISO/TS 15066, for each body region. Thus, combining (1) and (2), safety in PFL operations is encoded by the following constraint on the velocity between the robot and a given human body part:

$$v_{rel} \leq v_{max} = \frac{f_{max}}{\sqrt{\mu k}} \qquad (5)$$

3. PROBLEM STATEMENT

The velocity limit computed according to the ISO/TS 15066 and given by (5) is severely conservative, due to the assumptions underlying its formulation. In particular, we could list the following drawbacks related to the direct implementation of the ISO/TS 15066: (1) As showcased, e.g., in Khatib (1995) and Haddadin et al. (2010), the force f exerted during a contact scenario depends on both the robot configuration and the direction of the contact. Therefore, the computation of the effective robot mass $m_r$ in (4) is often inaccurate and can result in large over-estimations. (2) The assumption that the transient contact between a robot and a human body part results in a fully inelastic two-body collision is highly conservative. The actual transient contact scenario is, in most cases, an intermediate condition between a fully elastic and a fully inelastic collision. Thus, the value of $v_{max}$ computed through the model in (5) is typically over-conservative. (3) There is no formal analysis related to the stability of the system while in contact with the human, as well as while switching from interaction to free motion and vice-versa. Thus, instabilities might arise during the contact, threatening the safety of the human.
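As a sanity check on the TS procedure, the following minimal Python sketch evaluates (1)-(5) numerically. The body-region constants are the chest values quoted later in the Simulations section; the robot masses are hypothetical placeholders, not values from the paper, so the printed numbers are illustrative only:

```python
import math

# Body-region constants (chest values quoted in the Simulations section)
f_max = 140.0      # maximum contact force [N]
k = 25_000.0       # effective spring constant [N/m] (25 N/mm)
m_h = 40.0         # mass of the human body region [kg]

# Hypothetical robot parameters (placeholders, not from the paper)
M = 16.0           # mass of the moving parts of the robot [kg]
m_L = 0.0          # payload [kg]

E_max = f_max**2 / (2.0 * k)         # energy transfer limit, Eq. (1)
m_r = M / 2.0 + m_L                  # effective robot mass, Eq. (4)
mu = 1.0 / (1.0 / m_h + 1.0 / m_r)   # reduced mass, Eq. (3)
v_max = f_max / math.sqrt(mu * k)    # PFL velocity limit, Eq. (5)

print(f"E_max = {E_max:.3f} J, mu = {mu:.2f} kg, v_max = {v_max:.3f} m/s")
```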
Points 1) and 2) in the previous list highlight that using the velocity limit (5) can be severely conservative. In the remainder of this section we discuss how, by applying the robot mass evaluation proposed by the ISO/TS 15066 through (4), we obtain in most cases a conservative value.
Let us consider a torque-controlled, fully actuated n-DOF manipulator represented by the following Euler-Lagrange dynamic model:

$$\mathbf{M}(q)\ddot{q} + \mathbf{C}(q,\dot{q})\dot{q} + g(q) = \tau(t) + J(q)^T F_e(t) \qquad (6)$$

where $q(t) \in \mathbb{R}^n$ is the vector of joint variables, $\mathbf{M}(q) \in \mathbb{R}^{n\times n}$ is the positive definite and symmetric inertia matrix, $\mathbf{C}(q,\dot{q}) \in \mathbb{R}^{n\times n}$ encompasses Coriolis and centrifugal effects and $g(q) \in \mathbb{R}^n$ is the gravitational term. The vector $\tau(t) \in \mathbb{R}^n$ represents the controlled joint torques and the term $J(q)^T F_e(t) \in \mathbb{R}^n$ represents the torque applied to the joints because of the external wrench $F_e(t) \in \mathbb{R}^m$ applied on the end-effector. Let us assume that the robot is carrying a negligible load, i.e. $m_L \approx 0$ in (4). Thus, according to the ISO/TS 15066, the effective mass of the robot can be computed as $m_r = M/2$. However, the mass actually perceived at the contact point, i.e. the apparent mass $m_{app}$, depends on the robot configuration and on the contact direction, and it is in most cases smaller than $m_r$. A possible solution would be to directly use the apparent mass $m_{app}$ in (4), instead of $m_r$. However, the computation of $m_{app}$ requires the precise knowledge of the dynamic model of the robot, specifically of the inertia matrix $\mathbf{M}(q)$, which is not always available or known with sufficient accuracy. Therefore, in this paper we propose a control architecture which exploits directly the energetic constraint in (1) for satisfying the safety requirements of PFL in the ISO/TS 15066, via an energy tank-based formulation. In this way, we can avoid the conservative assumptions underlying the derivation of the velocity limit in (5), thus overcoming the drawbacks presented in points 1) and 2). Moreover, we provide a model-agnostic passivity-based framework in order to guarantee a robustly stable behavior of the overall system both in free motion and in contact with the human, thus addressing the issue described in point 3).
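To illustrate point 1), the apparent mass perceived along a contact direction u can be computed from the operational-space inertia as $m_{app} = (u^T\Lambda^{-1}u)^{-1}$, in the spirit of Khatib (1995). A minimal sketch with a hypothetical, diagonal inertia matrix (not a real robot model):

```python
import numpy as np

def apparent_mass(Lambda: np.ndarray, u: np.ndarray) -> float:
    """Reflected mass along unit direction u: m_app = (u^T Lambda^{-1} u)^{-1}."""
    u = u / np.linalg.norm(u)
    return 1.0 / float(u @ np.linalg.solve(Lambda, u))

# Hypothetical 3x3 translational operational-space inertia [kg]
Lambda = np.diag([3.0, 5.0, 9.0])
for direction in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])):
    # The perceived mass changes with the contact direction
    print(direction, apparent_mass(Lambda, direction))
```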
OPTIMIZED TANK-BASED CONTROL ARCHITECTURE FOR ENSURING VARIABLE BOUNDS ON THE KINETIC ENERGY
Consider a robot that needs to execute a task in a collaborative cell. Let $0 < H_1 \leq H_2 \leq \cdots \leq H_N$ be the bounds on the energy transfer imposed by PFL in (1) for guaranteeing safety during contact with each of the N human body parts. Since the closest body part can change online, the bound to guarantee is time-varying.
Since the nature of the problem is energetic, we provide an energy-based solution exploiting the modulated energy tank Benzi et al. (2022); Benzi and Secchi (2021). In this section, we first provide a short background on modulated energy tanks. Then, we describe the control architecture to ensure a single bound on the kinetic energy, both in free motion and in contact phase. Finally, we extend it for switching among the different bounds imposed by PFL.
Background on Modulated Energy Tanks
The energy tank is an energy storing element, traditionally employed for storing the energy dissipated by the controlled system. It can be generally represented as

$$\dot{x}_t(t) = u_t(t), \qquad y_t(t) = x_t(t) \qquad (9)$$

where $x_t(t) \in \mathbb{R}$ is the tank state and $(u_t(t), y_t(t)) \in \mathbb{R} \times \mathbb{R}$ is the power port of the tank, with associated energy function

$$T(x_t(t)) = \frac{1}{2}x_t^2(t). \qquad (10)$$

The tank stores/releases energy by means of the port $(u_t(t), y_t(t))$. In particular, this can be exploited for reproducing any desired port-behavior (Benzi et al. (2022)):

$$u_t(t) = a(t)^T u(t), \qquad y(t) = a(t)\,y_t(t) \qquad (11)$$

where $(u(t), y(t)) \in \mathbb{R}^n \times \mathbb{R}^n$ is the I-O port of the system and $a(t) \in \mathbb{R}^n$ is a modulating term, defined as

$$a(t) = \frac{\gamma(t)}{y_t(t)} \qquad (12)$$

with $\gamma(t) \in \mathbb{R}^n$ being the desired value for the output $y(t)$.
By embedding (11) in (9) we obtain

$$\dot{x}_t(t) = a(t)^T u(t), \qquad y(t) = a(t)\,y_t(t) = \gamma(t) \qquad (13)$$

which shows how the modulating term $a(t)$ allows to control the energy flowing in the interconnected system to obtain any desired value for the port output $y(t)$. A lower bound $\varepsilon > 0$ such that $T(x_t(t)) \geq \varepsilon \ \forall t \geq 0$ must be enforced, in order to avoid numeric singularities in (12). Additionally, an upper bound on the tank energy has to be set, or practically unstable behaviors could be implemented.
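A minimal discrete-time sketch of the modulated tank (13), under the scalar-state convention $T = \frac{1}{2}x_t^2$ of (10); the initialization values are placeholders, and a full implementation would scale the desired output via the optimization in (18)/(23) instead of simply zeroing it:

```python
import numpy as np

class ModulatedTank:
    def __init__(self, T0: float, eps: float):
        assert T0 >= eps > 0.0
        self.x = np.sqrt(2.0 * T0)   # tank state, so that T = 0.5 * x^2 = T0
        self.eps = eps               # lower energy bound avoiding singularities

    def energy(self) -> float:
        return 0.5 * self.x**2

    def step(self, gamma: np.ndarray, u: np.ndarray, dt: float) -> np.ndarray:
        """Try to implement y = gamma passively over one cycle of length dt."""
        a = gamma / self.x                    # modulating term, Eq. (12)
        x_next = self.x + dt * float(a @ u)   # tank dynamics, Eq. (13)
        if 0.5 * x_next**2 >= self.eps:       # tank stays above eps: output realized
            self.x = x_next
            return gamma
        # Tank (nearly) empty: a real controller scales gamma via (18)/(23)
        return np.zeros_like(gamma)
```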
The tank (13) has been proven to implement a passive exchange of energy as long as the tank is not empty (Benzi et al. (2022)); thus the following proposition holds. Proposition 1. (Benzi et al. (2022), Prop. 1). If $T(x_t(t)) \geq \varepsilon \ \forall t \geq 0$, then the modulated tank (13) is passive independently of the desired value of $\gamma(t)$.
Thus, as long as some energy is present in the tank, any desired port behavior can be passively implemented.
Tank-based Control Architecture for Energy Limitation
Consider the model of the robot (6) in the operational space:

$$\Lambda(x)\ddot{x}(t) + S(x,\dot{x})\dot{x}(t) = F(t), \qquad F(t) = F_c(t) + F_e(t) \qquad (14)$$

where $\Lambda(x) \in \mathbb{R}^{m\times m}$ and $S(x,\dot{x}) \in \mathbb{R}^{m\times m}$ are, respectively, the inertia matrix and the centrifugal and Coriolis matrix in the operational space, with $x(t) \in \mathbb{R}^m$ being the Cartesian pose (position and orientation) of the end-effector and $F(t) \in \mathbb{R}^m$ the operational force vector. We decompose the latter into two terms, with $F_c(t) \in \mathbb{R}^m$ being the control forces due to the actuation and $F_e(t) \in \mathbb{R}^m$ being the external wrench, due to the interaction with the environment or the human. We can then compute the kinetic energy of (14) as

$$H(t) = \frac{1}{2}\dot{x}(t)^T\Lambda(x)\dot{x}(t). \qquad (15)$$

It is well known that (14) is passive with respect to the pair $(F(t),\dot{x}(t))$, since $F(t)^T\dot{x}(t) = \dot{H}(t)$ (see, e.g., Secchi et al. (2007)). Our initial goal is to limit $H(t)$ to a fixed value $\bar{H} \in \mathbb{R}$ during free motion (i.e., $F_e(t) = 0$). Formally, we aim at bounding $H(t) \leq \bar{H}$ with $\bar{H} \geq H(0)$. This can be accomplished by interconnecting (14) with the modulated energy tank (13) as shown in Fig. 1. Here, the input and the output of the modulated tank are the Cartesian velocity $\dot{x}(t) \in \mathbb{R}^m$ and the control forces $F_c(t) \in \mathbb{R}^m$, respectively, while the goal is to implement the desired control action $F_{des}(t) \in \mathbb{R}^m$. The control input is then provided to an optimization block, which computes the best passive approximation $F_{opt}(t) \in \mathbb{R}^m$ of the control action $F_{des}(t)$ considering the energy available in the tank and the desired behavior. The output $F_{opt}(t)$ is then used for modulating the energy flowing into/out of the energy tank according to (13) by setting $\gamma(t) = F_{opt}(t)$.
Since (14) is passive and the negative feedback interconnection is power preserving, the passivity of (13) guarantees the passivity of the overall controlled system and, consequently, a stable behavior both during free motion and interaction Secchi et al. (2007). According to Prop. 1, passivity of (13) is guaranteed if $T(x_t(t)) \geq \varepsilon$. Thus, given the desired input $F_{des}(t)$, it is possible to find $F_{opt}(t)$ by solving the following optimization problem:

$$\min_{F_{opt}(t)} \ \|F_{opt}(t) - F_{des}(t)\|^2 \quad \text{subject to} \quad T(x_t(t)) \geq \varepsilon. \qquad (17)$$

Fig. 1. Modulated energy tank control architecture. The optimizer provides the closest passive approximation $F_{opt}$, which is then used to properly modulate, via the term $a(t)$, the power flow at the ports of the tank.

The optimization problem (17) can be rewritten for a discrete time implementation, $\tau > 0$ being the cycle time, as shown in Benzi et al. (2022), to obtain a convex optimization problem, suitable for real-time implementations:

$$\min_{F_{opt}(k)} \ \|F_{opt}(k) - F_{des}(k)\|^2 \quad \text{subject to} \quad \tau\, F_{opt}(k)^T\dot{x}(k) + T(k-1) \geq \varepsilon. \qquad (18)$$

Notice how, deploying this architecture, we only allow a set amount of energy to the robot for implementing the task in free motion, i.e. its kinetic energy will at most be equal to $H(t) = H(0) + T(0) - \varepsilon$. Thus, by a proper choice of $T(0)$, we can guarantee a fixed bound $\bar{H}$ on $H(t)$:

Proposition 2. Consider the robot (14) interconnected with the tank (13) as in Fig. 1. If $T(0) = \bar{H} - H(0) + \varepsilon$ and $T(t) \geq \varepsilon \ \forall t \geq 0$, then $H(t) \leq \bar{H} \ \forall t \geq 0$ in free motion ($F_e(t) = 0$).

Proof. Since the interconnection is power preserving and $F_e(t) = 0$, the total energy of the interconnected system is conserved, i.e. $H(t) + T(t) = H(0) + T(0)$. Hence

$$H(t) = H(0) + T(0) - T(t) \leq H(0) + T(0) - \varepsilon$$

in which the inequality comes from the assumption that $T(t) \geq \varepsilon \ \forall t \geq 0$. Then, by setting $T(0) = \bar{H} - H(0) + \varepsilon$, we get $H(t) \leq \bar{H}$, thus concluding the proof.
The condition on the lower bound can be enforced by synthesizing the control input $F_c(t)$ via the optimization problem (18). We can thus compute the best approximation of $F_{des}(t)$ complying with the energetic limitation.
The approximated input $F_c$ in (18), however, does not necessarily preserve the direction of the desired behavior $F_{des}$. As, in this work, we focus on safely accomplishing industrial tasks, in which maintaining a given direction can be critical, we further modify the formulation as:

$$\min_{\alpha(k)} \ \|\alpha(k) F_{des}(k) - F_{des}(k)\|^2 \quad \text{subject to} \quad \tau\, \alpha(k) F_{des}(k)^T\dot{x}(k) + T(k-1) \geq \varepsilon. \qquad (23)$$

Thus, by setting $F_c = \alpha F_{des}$, we obtain the safe version of $F_{des}$ preserving the intended direction. As $\alpha = 0$ is always admissible, the problem always possesses a solution.
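Since (23) has a single scalar decision variable and a constraint that is linear in $\alpha$, the optimum can be written in closed form by clipping. The sketch below is our own derivation, not code from the paper; it additionally restricts $\alpha$ to [0, 1] so the commanded force is never amplified or reversed (the paper only notes that $\alpha = 0$ is always admissible), and it assumes $T(k-1) \geq \varepsilon$ on entry:

```python
def optimal_scaling(F_des, x_dot, T_prev, eps, tau):
    """Largest alpha in [0, 1] with tau*alpha*(F_des @ x_dot) + T_prev >= eps.

    Assumes T_prev >= eps (the tank invariant holds at the previous step).
    """
    p = float(F_des @ x_dot)   # power-coupling term of the constraint
    slack = T_prev - eps       # energy margin currently available in the tank
    if p >= 0.0:
        return 1.0             # constraint loosens with alpha: keep full input
    alpha_max = slack / (-tau * p)   # feasibility boundary when p < 0
    return float(min(1.0, max(0.0, alpha_max)))
```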
Control Architecture for Switching Between Variable Energy Levels and Contact with the External Environment
The control architecture presented in Sec. 4.2 allows to bound the kinetic energy in free motion to a set value. On the other hand, when in contact with the external environment, the power injected through the port $(F_e(t),\dot{x}(t))$, i.e. directly on the robot, can lead to uncontrolled variations of the kinetic energy. Indeed, according to Prop. 2, the maximum allowable amount of energy for the robot is already stored in the closed loop system, since $T(0) - \varepsilon + H(0) = \bar{H}$. Thus, any uncontrolled additional energy injection on the robot (e.g., pushing the robot along its direction of motion) from external sources could violate the energy bound condition. On the contrary, uncontrolled external energy extractions (e.g., deceleration due to collisions) can degrade the overall performance, since the tank would not account for the energy loss. We can address this issue by properly re-routing the external energy in the tank, further augmenting the dynamics as follows:

$$\dot{x}_t(t) = a(t)^T u(t) + \frac{-F_e(t)^T\dot{x}(t) + b(t)\,\dot{x}(t)^T\dot{x}(t)}{x_t(t)} \qquad (24)$$

in which $b$ is a variable damper that we activate only for dissipating the external energy injection whenever this would violate the energetic bound, as shown in Fig. 2. This is an emergency term which dissipates the external injection only when $H(t) = \bar{H}$, and is thus rarely activated. The additional terms in (24) allow the tank to track the external energy flowing in/out of the robot. In case of injection, i.e., $F_e(t)^T\dot{x} > 0$, the corresponding amount of power is extracted from the tank. Conversely, any external extraction, i.e. $F_e(t)^T\dot{x} < 0$, is injected back into the tank.

Proposition 3. Consider the robot (14) interconnected with the tank (13). If $T(0) = \bar{H} - H(0) + \varepsilon$ and $T(t) \geq \varepsilon \ \forall t \geq 0$, then $H(t) \leq \bar{H} \ \forall t \geq 0$, both in free motion and in contact with the external environment.
Thus, the kinetic energy bound can be ensured, and external injections/extractions of energy are accounted for via the interconnection in (24). Since none of the terms in (24) depends upon the control input, these can simply be added into the optimization problem (23), preserving the convexity of the problem. In particular, we can utilize the architecture to comply directly with the energetic bounds imposed by PFL (1), by setting $\bar{H} = E_{max}$.
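The activation logic of the emergency damper $b$ described above can be summarized as follows; this is a sketch of the stated behavior, with a placeholder saturation value b_max that is not from the paper:

```python
def damper_gain(H: float, H_bar: float, Fe_power: float, x_dot_sq: float,
                b_max: float = 50.0) -> float:
    """Activate the variable damper b only when the external power injection
    would push the kinetic energy H above the bound H_bar."""
    if H >= H_bar and Fe_power > 0.0 and x_dot_sq > 1e-9:
        # Dissipate exactly the injected power: b * |x_dot|^2 = Fe_power
        return min(b_max, Fe_power / x_dot_sq)
    return 0.0  # damper off in the nominal case
```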
Nonetheless, the bounds introduced by PFL can often change during the robot motion, as the value of $E_{max}$ in (1) depends on the body region currently closest to the robot. Let us consider the previously introduced energy levels $0 < H_1 \leq H_2 \leq \cdots \leq H_N$. Each energy bound $H_i$ can be ensured at different times by modifying (23) online as:

$$\min_{\alpha(k)} \ \|\alpha(k) F_{des}(k) - F_{des}(k)\|^2 \quad \text{subject to} \quad \tau\,\alpha(k) F_{des}(k)^T\dot{x}(k) + \tau P_{ext} + T(k-1) \geq \varepsilon(k) \qquad (28)$$

where the term $P_{ext} = -F_e(t)^T\dot{x}(t) + b(t)\,\dot{x}(t)^T\dot{x}(t)$ encompasses the power flows due to the environmental interaction in (24), and in which we vary online the lower bound $\varepsilon(k) > 0$ in order to change how much energy is made available to the robot. Let $k_i$ be the discrete time instant at which the closest body part to the robot changes. Correspondingly, the bound on the kinetic energy of the robot switches to $H_i$. In order to address this switch, $\varepsilon(k_i)$ is changed s.t. $\varepsilon(k_i) = T(0) - H_i + H(0)$. In this way, the result of Prop. 3 keeps on holding despite the varying energy bound, thus guaranteeing stability also during the switching.
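Switching among the PFL bounds then reduces to an O(1) update of the tank lower bound according to $\varepsilon(k_i) = T(0) - H_i + H(0)$. A sketch, exercised with the chest/shoulder values used later in the Simulations section:

```python
def update_lower_bound(T0: float, H0: float, H_i: float) -> float:
    """New tank lower bound when the closest body part imposes energy bound H_i."""
    eps_i = T0 - H_i + H0
    assert eps_i > 0.0, "H_i must satisfy H_i < T0 + H0 for a valid bound"
    return eps_i

# Values from the Simulations section: T(0) = 5.0 J, H(0) = 0 J
T0, H0 = 5.0, 0.0
print(update_lower_bound(T0, H0, 1.6))  # chest (E_max = 1.6 J):     eps = 3.4 J
print(update_lower_bound(T0, H0, 2.5))  # shoulders (E_max = 2.5 J): eps = 2.5 J
```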
In this way, we have built an energy-based control architecture capable of limiting the kinetic energy of the robot to different arbitrary bounds, both in free motion and during interaction. Moreover, this can be performed without knowledge of the dynamic parameters (i.e., the inertial and Coriolis terms) of the robot in (6).
SIMULATIONS
Simulations have been performed in order to validate the control architecture, using a KUKA LWR 4+ 7-DOF force-controlled manipulator in MATLAB, modeled as in (6), whose dynamic behavior is computed using the Robotics Toolbox Corke (1996) with a sampling time of 1 ms. The robot is set to accomplish a simple motion task between two poses. The desired input $F_{des}$ is generated via a standard PD controller, for the sake of generality.
First, following the procedure in Sec. 2, we compute $v_{max}$ according to the TS. During the first part of the motion, the closest human body part is the chest. From the tables in ISO/TS 15066:2016, we retrieve the maximum force for this part (140 N), together with its stiffness (25 N/mm), mass (40 kg), and maximum energy ($E_{max,1}$ = 1.6 J), under the assumption that the robot is carrying no payload. Using (5), the maximum velocity for the first motion according to the TS would be $v_{max,1}$ = 0.29 m/s. In order to directly comply with the energetic bound of the TS instead, we leverage our architecture in Sec. 4 and approximate the control input $F_{des}$ by means of the optimization problem in (28). Since the robot is in free motion, $P_{ext}$ = 0 W in this case. We initially set the tank lower bound to a conservative value of $\varepsilon_1$ = 3.4 J. Then, as previously described, we initialize the tank as $T(0) = \varepsilon_1 + E_{max,1}$ = 5.0 J. In this way, assuming $H(0)$ = 0 J (the robot is initially stopped), we can limit the kinetic energy of the robot s.t. $H(t) \leq E_{max,1}$. The scaled input $F_c = \alpha F_{des}$ is then applied to the robot.
During the second part of the motion, the threatened body part changes to the shoulders of the operator. Performing the previous procedure according to the TS for this body part, we obtain the new values for the maximum velocity $v_{max,2}$ = 0.37 m/s and transferred energy $E_{max,2}$ = 2.5 J. This switch can be managed in the architecture by simply changing the lower bound of the tank to $\varepsilon_2 = T(0) - E_{max,2}$ = 2.5 J, thus allowing additional task energy. The results are showcased in Fig. 3, Fig. 4 and Fig. 5: during the first part of the motion, the robot follows the desired velocity profile as long as its kinetic energy remains below the safety bound $E_{max,1}$, i.e., as long as the energy in the tank remains above $\varepsilon_1$. Whenever the bound is reached, the input is approximated via (28), trading off performance in order to guarantee safety. From Fig. 5 in particular, two important conclusions can be drawn. Firstly, the bound on the robot velocity is generally conservative: during both parts of the motion task we manage to move at a higher speed than the one imposed by $v_{max,1}$ and $v_{max,2}$, while still complying with the energetic requirement of PFL. Secondly, a velocity limit alone is not enough to guarantee the safety of the operator. Notice how, during the second part of the motion, the robot slows down after 7.5 s to a value lower than $v_{max,2}$. This is because, due to postural conditions and to its self-motions, the kinetic energy of the robot is already at the maximum allowed value $E_{max,2}$. Since our architecture is energy-based, we can easily take these factors into account during the input computations. It is easy to see that, in this case, the direct application of the velocity bound would instead lead to an unsafe behavior, i.e., the energetic bound $E_{max,2}$ would not be respected.
CONCLUSIONS AND FUTURE WORKS
In this paper we developed a model-agnostic energy tank-based control architecture capable of complying directly with the time-varying energetic bounds of the ISO/TS 15066. The architecture is general purpose, and can be directly employed as a safety layer for any force/torque controlled robot. Future works aim at experimentally validating the architecture in a real collaborative scenario, providing a formal proof of stability and safety during bound-switching phases, and leveraging kinematic redundancy for reducing the apparent mass of the robot during the motion. | 2023-04-28T01:16:14.878Z | 2023-04-27T00:00:00.000 | {
"year": 2023,
"sha1": "1b22355d1822f7bda9bb9f7ae42cb37a951416ca",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1b22355d1822f7bda9bb9f7ae42cb37a951416ca",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
249454167 | pes2o/s2orc | v3-fos-license | Plasmalogen Loss in Sepsis and SARS-CoV-2 Infection
Plasmalogens are plasma-borne antioxidant phospholipid species that provide protection as cellular lipid components during cellular oxidative stress. In this study we investigated plasma plasmalogen levels in human sepsis as well as in rodent models of infection. In humans, levels of multiple plasmenylethanolamine molecular species were decreased in septic patient plasma compared to control subject plasma as well as an age-aligned control subject cohort. Additionally, lysoplasmenylcholine levels were significantly decreased in septic patients compared to the control cohorts. In contrast, plasma diacyl phosphatidylethanolamine and phosphatidylcholine levels were elevated in septic patients. Lipid changes were also determined in rats subjected to cecal slurry sepsis. Plasma plasmenylcholine, plasmenylethanolamine, and lysoplasmenylcholine levels were decreased while diacyl phosphatidylethanolamine levels were increased in septic rats compared to control treated rats. Kidney levels of lysoplasmenylcholine as well as plasmenylethanolamine molecular species were decreased in septic rats. Interestingly, liver plasmenylcholine and plasmenylethanolamine levels were increased in septic rats. Since COVID-19 is associated with sepsis-like acute respiratory distress syndrome and oxidative stress, plasmalogen levels were also determined in a mouse model of COVID-19 (intranasal inoculation of K18 mice with SARS-CoV-2). Three days following infection, lung infection was confirmed, as was cytokine expression in the lung. Multiple molecular species of lung plasmenylcholine and plasmenylethanolamine were decreased in infected mice. In contrast, the predominant lung phospholipid, dipalmitoyl phosphatidylcholine, was not decreased following SARS-CoV-2 infection. Additionally, total plasmenylcholine levels were decreased in the plasma of SARS-CoV-2 infected mice. Collectively, these data demonstrate the loss of plasmalogens during both sepsis and SARS-CoV-2 infection. This study also indicates plasma plasmalogens should be considered in future studies as biomarkers of infection and as prognostic indicators for sepsis and COVID-19 outcomes.
INTRODUCTION
Sepsis has been a major threat to global health over the past several decades. In the United States, approximately one million individuals are diagnosed with sepsis annually, with mortality estimated between 12 and 25 percent (Mayr et al., 2014;Paoli et al., 2018). An estimated 20 percent of all deaths globally were attributed to sepsis (Rudd et al., 2020). The more severe septic shock has an estimated 38 percent mortality, and half of all Americans who die in the hospital are diagnosed with sepsis (Liu et al., 2014;Vincent et al., 2019). Sepsis occurs when an infection triggers a dysregulated host immune response, leading to systemic microcirculatory and immune dysfunction. This dysregulated inflammatory response in the microvasculature leads to direct damage of cells from reactive oxygen species and other inflammatory mediators, activation of the coagulation cascade, vasodilation, and tissue hypoxia with subsequent mitochondrial dysfunction. This complex system culminates in life-threatening organ injury and metabolic derangements (Chuang et al., 2006;Robertson and Coopersmith, 2006;Galley, 2011;Angus and van der Poll, 2013;Delano and Ward, 2016;Singer et al., 2016;Prauchner, 2017). Lipids and lipid-related signaling pathways have been investigated as mediators, potentially at the blood-endothelial interface during sepsis (Amunugama et al., 2021a). Specific lipids may also have prognostic value as biomarkers in sepsis (Meyer et al., 2017;Mecatti et al., 2018;Mecatti et al., 2020;Wang et al., 2020;Amunugama et al., 2021a). Additionally, a major cause of COVID-19 mortality is sepsis-associated acute respiratory distress syndrome (ARDS). Similar to sepsis, lipids have been investigated as important mediators and biomarkers in COVID-19 (Tanner et al., 2014;Aktepe et al., 2015;Villareal et al., 2015;Jean Beltran et al., 2018;Fernández-Oliva et al., 2019;Sviridov et al., 2020;Casari et al., 2021;Mesquita et al., 2021;Theken et al., 2021).
Plasmalogens comprise a significant fraction of the lipid content in the plasma, immune cells, and endothelium (Chilton and Murphy, 1986;Chilton and Connell, 1988;Kayganich and Murphy, 1992;Murphy et al., 1992;Bräutigam et al., 1996). There is considerable diversity in plasmalogen molecular species. In general, plasmalogens contain either phosphocholine or phosphoethanolamine at the sn-3 position of the glycerol backbone. The vinyl ether aliphatic group attached to the glycerol backbone predominantly contains sixteen and eighteen carbon groups. Recently we have also shown neutrophil plasmalogens contain vinyl ether groups that are greater than twenty carbons in length (Amunugama et al., 2021b). Plasmalogens have been suggested to have important roles in biological membranes, which are due, in part, to their unique packing in membranes compared to diacyl phospholipids (Han and Gross, 1990;Han and Gross, 1991). Plasmalogens have been shown to have roles in synaptic fusion, cholesterol efflux, lipid rafts, and transmembrane protein function (Glaser and Gross, 1994;Ford and Hale, 1996;Mandel et al., 1998;Pike et al., 2002). Plasmalogens likely have key roles in inflammation at several levels. Plasmalogens are plasma-borne antioxidants and have been shown to protect endothelium from oxidative stress (Vance, 1990;Zoeller et al., 1999). The vinyl ether bond of plasmalogens is susceptible to attack by reactive species, and this propensity suggests that these lipids can protect cells by scavenging reactive oxygen species Reiss et al., 1997;Zoeller et al., 1999;Zoeller et al., 2002;Dean and Lodhi, 2018). Additionally, plasmalogens have been shown to have a key role in macrophage phagocytosis (Rubio et al., 2018). Furthermore, plasmalogens are enriched with arachidonic acid and docosahexaenoic acid at the sn-2 position, and their metabolism by phospholipases leads to the mobilization of these fatty acids and their subsequent oxidation to bioactive eicosanoids and resolvins (Paul et al., 2019). Collectively, the roles of plasmalogens in membrane molecular dynamics, as antioxidants, and as precursors of bioactive lipids indicate they may be important in inflammation associated with disease and infection.
Plasma plasmalogen levels have been shown to decrease during inflammation such as during endotoxemia (Ifuku et al., 2012), Parkinson's disease (Dragonas et al., 2009;Fabelo et al., 2011), and lupus (Hu et al., 2016). Several of these previous studies (Dragonas et al., 2009;Hu et al., 2016) have suggested the loss of plasmalogens during Parkinson's disease and lupus is due to the associated oxidative stress. Surprisingly, only one study has investigated plasmalogen loss during human sepsis, which also attributed plasmalogen loss to oxidative stress (Brosche et al., 2013). This study was limited to measuring dimethyl acetals as a measure of plasmalogen levels and was performed in a limited number of geriatric septic patients. In addition to sepsis, several investigations have emerged over the past 2 years demonstrating plasma plasmalogen levels in humans with severe COVID-19 are decreased (Schwarz et al., 2021;Snider et al., 2021). The loss of plasmalogens and other phospholipids enriched with arachidonic acid and docosahexaenoic acid, as well as increased secretory phospholipase A2 (Snider et al., 2021), in COVID-19 patients support an important role for plasmalogens as precursors of oxylipids.
We have previously shown the plasmalogen vinyl ether bond is targeted by neutrophil-derived HOCl (a product of myeloperoxidase activity) resulting in 2-chlorofatty aldehyde and 2-chlorofatty acid production (Albert et al., 2001;Thukkani et al., 2002;Anbukumar et al., 2010). Furthermore, increased 2-chlorofatty acid plasma levels associate with ARDS-caused mortality in human sepsis (Meyer et al., 2017). 2-Chlorofatty acids are also elevated in the plasma and several organs in rats subjected to cecal slurry sepsis (Pike et al., 2020). Since plasmalogens are the precursors of chlorinated lipid production during sepsis and since limited molecular detail is known about human plasma plasmalogen loss during sepsis, in the present study we have investigated plasma plasmalogen levels in human sepsis patients. Furthermore, we have employed the rat cecal slurry sepsis model to identify both plasma plasmalogen loss as well as changes in liver and kidney plasmalogen levels during sepsis. Lastly, we examined changes in plasmalogen levels in plasma and lung in mice challenged with SARS-CoV-2. Collectively, these studies show the loss of plasmalogens during sepsis and SARS-CoV-2 infection with new detail into changes in plasma molecular species, as well as changes in organs in rodent models of sepsis and COVID-19.
Human Plasma Specimens and Analysis
Sepsis plasma samples were obtained from subjects admitted to the intensive care unit (ICU) with suspected infection and acute organ dysfunction (sepsis) at day 7 in the ICU. The cohort has been previously described (Reilly et al., 2018). The cohort study is approved by the University of Pennsylvania institutional review board (IRB protocol #808542), and all subjects or their proxies provided informed consent to participate. Control healthy plasma samples were obtained at Saint Louis University under IRB protocol 26646. Plasma samples were stored in aliquots to limit freeze-thaw cycles to two or fewer.
Rat Cecal Slurry Studies
Rats were supplied from Envigo (Harlan-Indianapolis, IN, United States). All rats were young adult male Sprague-Dawley rats weighing between 270-330 g (8-12 weeks old). All animals were maintained in a temperature and humidity-controlled room with a 12 h light/dark cycle and unrestricted access to chow and water. Upon arrival to Saint Louis University, rats were acclimated to the environment for at least a week prior to experiments. All animal experiments were conducted with the approval of the Institutional Animal Care and Use Committee at Saint Louis University. Cecal slurry (CS) was prepared from cecal contents of donor male Sprague-Dawley rats as previously detailed (Pike et al., 2020). Prior to ip CS administration for sepsis studies, aliquots of CS were thawed quickly in warm water. Rats were administered 15 ml/kg CS or 15% glycerol vehicle control (ip) in a total volume of 20 ml/kg, with the remaining 5 ml/kg being sterile saline (B Braun Medical, Bethlehem, PA, United States). At the time of CS administration, animals were administered a concurrent 30 ml/kg dose of subcutaneous sterile saline. Eight hours following CS treatment, 25 mg/kg ceftriaxone (Hospira) in sterile saline was administered intramuscularly in the hind limb in a 1 ml/kg volume. A second subcutaneous 30 ml/kg dose of sterile saline was administered concurrently with the ceftriaxone in order to simulate treatment of human sepsis with crystalloid and antibiotics. Twenty hours following CS injection, rats were euthanized, and organs were collected and immediately frozen on dry ice. Blood was collected via cardiac puncture, and plasma was immediately prepared and then stored at −80°C. Plasma preparation and storage was achieved within 30-45 min of the blood draw. Plasma samples were stored in aliquots to limit freeze-thaw cycles to two or fewer. Rats were euthanized by injecting 0.5 ml Somnasol (390 mg/ml sodium pentobarbital and 50 mg/ml phenytoin sodium) ip, followed by thoracotomy.
Mouse SARS-CoV-2 Infection Studies
K18 mice (JAX strain 034860, human angiotensin converting enzyme 2 (hACE2) transgenic) were supplied from the Jackson Laboratory (Bar Harbor, ME, United States). All mice were young adult females weighing between 25-30 g (~9 weeks old). All animals were maintained in a temperature and humidity-controlled room with a 12 h light/dark cycle and unrestricted access to chow and water. Upon arrival to Saint Louis University, mice were acclimated to the ABSL-3 environment for at least a week prior to experiments. All animal experiments were conducted with the approval of the Institutional Animal Care and Use Committee at Saint Louis University. K18 mice were either mock infected or infected with 1 × 10⁴ focus forming units (FFU) of the beta variant B.1.351 of SARS-CoV-2 intranasally (20 μl). The beta variant B.1.351 of SARS-CoV-2 was obtained from BEI Resources (#NR55282). Tissues and plasma were collected from euthanized mice 3 or 4 days following infection. Tissue homogenates were prepared for analyses of either viral burden, cytokine mRNA, or lipids. SARS-CoV-2 viral burden was measured by focus forming assays (FFAs) using Vero E6 cells transfected with hACE2 and TMPRSS2 as we have previously described (Geerling et al., 2022). Inflammatory cytokine levels were measured via qRT-PCR using Taqman primer and probe sets from IDT as previously described (Geerling et al., 2021).
Lipid Analysis
Tissue and plasma lipids were extracted in the presence of internal standards (see Supplementary Table S1) by a modified Bligh-Dyer extraction as previously described (Bligh and Dyer, 1959;Maner-Smith et al., 2020;Pike et al., 2020;Amunugama et al., 2021b). Individual choline and ethanolamine glycerophospholipids were detected using selected reaction monitoring (see Supplementary Table S1 for transitions) with an Altis TSQ mass spectrometer equipped with a Vanquish UHPLC system (Thermo Scientific), with isotopomer corrections for each target molecular species relative to the respective internal standard. Lipids were separated on an Accucore C30 column, 2.1 mm × 150 mm (Thermo Scientific), with mobile phase A comprised of 60% acetonitrile, 40% water, 10 mM ammonium formate, and 0.1% formic acid and mobile phase B comprised of 90% isopropanol, 10% acetonitrile with 2 mM ammonium formate, and 0.02% formic acid. Initial conditions were 30% B with a discontinuous gradient to 100% B at a flow rate of 0.260 ml/min. Plasmalogen molecular species were identified by acid lability and fatty acid aliphatic group identification under conditions identical to those employed with the TSQ mass spectrometer but using a Q-Exactive mass spectrometer, with choline glycerophospholipids detected in negative ion mode.
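Quantitation against an internal standard with isotopomer correction follows standard stable-isotope-dilution arithmetic; a simplified sketch in Python (the peak areas, amounts, and overlap factor are placeholders, not values from this study):

```python
def quantify(analyte_area: float, is_area: float, is_amount_nmol: float,
             isotope_overlap: float = 0.0) -> float:
    """Analyte amount from SRM peak areas relative to an internal standard (IS).

    isotope_overlap is the fraction of the analyte's isotopomer envelope that
    leaks into the IS transition; subtracting it corrects the IS peak area.
    """
    corrected_is = is_area - isotope_overlap * analyte_area
    return analyte_area / corrected_is * is_amount_nmol

# Placeholder example: 2.4e6 counts analyte vs 1.1e6 counts of 0.5 nmol IS
print(quantify(analyte_area=2.4e6, is_area=1.1e6, is_amount_nmol=0.5))
```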
Statistics
Student's t-test was used to compare two groups in rat CS and K18 mouse SARS-CoV-2 infection studies. Plasma concentrations were compared between healthy control subjects and sepsis subjects by Wilcoxon rank sum test.
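In Python, these two comparisons map directly onto scipy.stats; a sketch with synthetic data standing in for the measured lipid levels:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, size=31)   # stand-in for healthy-subject levels
sepsis = rng.normal(7.0, 2.5, size=63)     # stand-in for septic-patient levels

# Two-group rodent/mouse comparisons: Student's t-test
t_stat, p_t = stats.ttest_ind(control, sepsis)

# Human cohort comparisons: Wilcoxon rank sum test
w_stat, p_w = stats.ranksums(control, sepsis)

print(f"t-test p = {p_t:.3g}, rank sum p = {p_w:.3g}")
```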
Alterations in Plasma Plasmalogen and Diacyl Phospholipids in Human Sepsis
Human geriatric septic patients have previously been shown to have decreased plasma plasmalogen levels as determined by assessing dimethyl acetals of plasmalogens by gas chromatography. These analyses did not identify the lipid class (choline or ethanolamine) of the plasmalogen pool or the molecular species that decrease during sepsis. Additionally, we have previously shown plasma 2-chloropalmitic acid levels are increased in human sepsis and associate with ARDS-caused mortality (Meyer et al., 2017). 2-Chloropalmitic acid is derived from 2-chloropalmitaldehyde produced by the action of HOCl targeting the vinyl ether bond of plasmalogens (Albert et al., 2001;Thukkani et al., 2002;Anbukumar et al., 2010). Accordingly, we performed a detailed study of plasma plasmalogens in septic humans. The plasma specimens of patients in this study are from septic patients collected following 7 days in the ICU. The average age of these patients is 59.8 years. Interestingly, data shown in Figure 1A show that levels of plasma plasmenylcholine (pPC) molecular species were either unchanged or increased in septic patients compared to control subjects. Since the control cohort age was younger than the sepsis group (Table 1), we also compared changes in plasma pPC levels between the sepsis cohort and an age-restricted subgroup of the control subjects to test a cohort that was more closely aligned in age with the sepsis cohort (Figure 1D). A similar pattern of either increased or unchanged levels of pPC was observed in the septic patients compared to the age-restricted controls.

FIGURE 1 | Loss of plasma plasmenylethanolamine (pPE) and lysoplasmenylcholine (pLPC) in human sepsis. Plasma was collected from 31 healthy humans (control) and 63 ICU patients with sepsis following 7 days in the ICU. Lipids were extracted and plasmalogen levels were quantitated as described in "Materials and Methods." Plasma plasmenylcholine (pPC) (A,D), pLPC (B,E), and pPE (C,F) are compared between the control and sepsis cohorts (A-C) and the age-restricted control and sepsis cohorts (D-F). *, **, ***, and **** indicate p < 0.05, 0.01, 0.001, and 0.0001, respectively, for comparisons between cohorts. Mean and standard deviation values are indicated for each molecular species and condition.

TABLE 1 (note) | a The acute physiology and chronic health examination (APACHE) III score is displayed as median (interquartile range) due to a skewed distribution.
The two pPC molecular species elevated in sepsis were 16:0-18:1 pPC and 18:0-20:4 pPC (x:y-x:y, where x = number of carbons and y = number of double bonds in the aliphatic groups at the sn-1 and sn-2 positions, respectively). In contrast, significant decreases were observed with plasma 16:0 and 18:0 lysoplasmenylcholine (pLPC) in septic subjects in comparisons to both the unrestricted control group (Figure 1B), as well as the age-restricted control group (Figure 1E). Furthermore, all plasma plasmenylethanolamine (pPE) molecular species in our targeted analyses were significantly decreased in the septic patient cohort in comparison to both the unrestricted control group (Figure 1C), as well as the age-restricted control group (Figure 1F). In contrast to pPE, plasma levels of diacyl phosphatidylethanolamine (PE), as well as phosphatidylcholine (PC), were increased in the sepsis cohort in comparisons to both the unrestricted and age-restricted cohorts (Figures 2A-D).
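The x:y-x:y shorthand can be parsed mechanically when tabulating molecular species; a small illustrative helper (not code used in the study):

```python
def parse_species(name: str):
    """Parse a species name like '16:0-20:4' into
    ((carbons, double_bonds) for sn-1, (carbons, double_bonds) for sn-2)."""
    sn1, sn2 = name.split("-")
    to_pair = lambda s: tuple(int(v) for v in s.split(":"))
    return to_pair(sn1), to_pair(sn2)

print(parse_species("16:0-20:4"))  # ((16, 0), (20, 4)): sn-2 carries arachidonate
```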
Alterations in Plasmalogen and Diacyl Phospholipids in Rodent Sepsis
To gain further insights into alterations in plasmalogens, as well as diacyl phospholipids, during sepsis, we examined both plasma and tissue changes in these phospholipids in the cecal slurry (CS) rodent model of sepsis. Previous studies have demonstrated that, under the CS infection conditions followed by antibiotic treatment 8 h post infection employed in these studies, rats survive at least 20 h and have increased plasma 2-chlorofatty acid levels in comparison to vehicle-treated rats (Pike et al., 2020). pPC was identified as the most abundant plasmalogen class in both control and sepsis rat plasma compared to pPE (Figures 3A,C). Plasma plasmalogen loss was observed in CS-treated rats compared to vehicle-injected rats. Plasma 16:0-18:2, 18:0-18:2, and 18:0-18:1 pPC levels were decreased in septic rats 20 h post infection (Figure 3A). Similar to human sepsis, both 16:0 and 18:0 pLPC levels were decreased in septic rats in comparison to control vehicle-treated rats (Figure 3B). In contrast to human sepsis, the predominant species of plasma pPE were not significantly decreased in rat sepsis; however, less abundant species such as 16:0-18:2, 18:0-18:2, and 18:0-18:1 pPE did significantly decrease (Figure 3C). For the diacyl species, sepsis resulted in a decrease of only 16:0-20:4 PC in rat plasma (Figure 3D). In stark contrast to the drop in plasma pPE levels, all diacyl PE levels were significantly increased (Figure 3E).
Previously in this rodent model we identified the kidney and liver as primary sites of organ failure based on loss of permeability barrier function as assessed by Evans blue extravasation (Pike et al., 2020). Additionally, both liver and kidney levels of 2-chlorofatty acids were previously shown to be increased in this sepsis model (Pike et al., 2020). 2-Chlorofatty acids are produced as a result of neutrophil-derived HOCl targeting plasmalogens (Thukkani et al., 2002;Anbukumar et al., 2010). Accordingly, we examined plasmalogen levels in the kidney and liver of CS infected rats. In contrast to plasma, pPE is the predominant plasmalogen class in both rat kidney and liver compared to pPC (Figures 4, 5). Multiple pPE molecular species in the rat kidney were significantly decreased in septic rats, including the predominant 16:0-20:4 and 18:0-20:4 pPE species (Figure 4C). Renal 16:0 pLPC was also significantly decreased in sepsis (Figure 4B). Meanwhile, some less predominant renal pPC levels were increased (Figure 4A). In contrast to changes in rat plasma and kidney plasmalogens, as well as in human plasma, several liver plasmalogens increased during rat sepsis. All pPC species significantly increased, including the predominant 16:0-20:4 pPC and 18:0-20:4 pPC species, in livers of CS elicited septic rats (Figure 5A). 16:0-20:4 pPE and 18:0-20:4 pPE, among others, also were significantly increased in livers from septic rats compared to control rats (Figure 5C). Further in contrast to changes in the plasma and kidney, there was no significant difference in pLPC levels in livers from septic rats compared to those of control rats (Figure 5B). Diacyl species were measured in the kidney and liver as well. In the kidney, multiple species of diacyl PC and PE were significantly decreased (Figures 4D,E).
Plasmalogens in SARS-CoV-2 Infected K18 Mice
Since plasmalogens have been shown to decrease in the plasma of humans with severe COVID-19 (Schwarz et al., 2021;Snider et al., 2021) and SARS-CoV-2 infection leads to a form of sepsis-associated ARDS, we investigated the role of airway infection with SARS-CoV-2 in K18-hACE2 transgenic mice. The human keratin 18 promoter (K18) in K18 mice directs human ACE2 expression in the epithelium, which is important as SARS-CoV-2 infections tend to begin in airway epithelia. Three days following nasal inoculation with SARS-CoV-2, a robust viral burden was observed in the lung (Figure 6A), which is similar to findings by others (Zheng et al., 2021). The associated cytokine storm of SARS-CoV-2 infection was confirmed by increases in interleukin-1β (IL-1β), interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α) mRNA expression in lung tissue (Figure 6B). These cytokine mRNAs were not detected in mock-infected lung (data not shown). 16:0-20:4 pPC and 18:0-20:4 pPC levels in the lung were selectively decreased in SARS-CoV-2 infected K18 mice (Figure 6C). Additionally, both 16:0-20:4 pPE and 18:0-20:4 pPE, as well as 18:0-22:6 pPE, were decreased in the lung of SARS-CoV-2 infected mice (Figure 6D).

FIGURE 3 | Loss of plasma plasmalogens and increases in diacyl PC and PE in septic rats. Rats were injected with cecal slurry (sepsis) (n = 5) or vehicle (control) (n = 6) and were subsequently treated with fluid replacement and ceftriaxone 8 h following cecal slurry injection as described in "Materials and Methods." Plasma was collected 20 h following cecal slurry or vehicle treatment, and lipids were extracted. Plasmalogen levels were quantitated as described in "Materials and Methods." Plasma pPC, pLPC, pPE, PC, and PE are shown in (A-E), respectively. *, **, ***, and **** indicate p < 0.05, 0.01, 0.001, and 0.0001, respectively, for comparisons between control and septic rats. Mean and standard deviation values are indicated for each molecular species and condition.
As in rat tissues, pPE levels were higher than those of pPC in the mouse lung. We also assessed the major lung lipid, 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC), in the lungs, which is the major phospholipid component of surfactant. Lung DPPC levels were not altered in SARS-CoV-2 infected mice (Figure 6E). Plasma plasmalogen levels were only modestly decreased in SARS-CoV-2 infected mice (Figure 6F).
DISCUSSION
Plasmalogens are a lipid subclass characterized by a vinyl ether-linked aliphatic group attached to the sn-1 position of glycerol, a fatty acid esterified at the sn-2 position and, in general, either phosphoethanolamine or phosphocholine at the sn-3 position. The sn-2 fatty acid of plasmalogens is enriched with arachidonic acid in many mammalian tissues, and thus one role of plasmalogens has been described as a storage depot for arachidonic acid that is released during inflammation (Chilton and Connell, 1988;Ford and Gross, 1989;Braverman and Moser, 2012). The sn-1 vinyl ether is a target for reactive oxygen species, leading to the release of free fatty aldehydes that subsequently can be metabolized to free fatty acids (Khaselev and Murphy, 1999;Stadelmann-Ingrand et al., 2001). The reaction of reactive oxygen species with the vinyl ether is a terminal event for ROS and thus is considered an antioxidant activity. Multiple studies have shown plasmalogens protect tissues and cells from reactive oxygen species and oxidative stress. Cells deficient in plasmalogens are susceptible to free radical-mediated toxicity (Zoeller et al., 1988). Furthermore, supplementing cells with precursors to plasmalogens has been shown to protect cells from reactive oxygen species, including during hypoxic damage to endothelial cells (Zoeller et al., 1999). Collectively, the abundance of arachidonic acid esterified to plasmalogens that can be mobilized for eicosanoid production and the susceptibility of the vinyl ether to oxidative stress suggest plasmalogens may have important roles in infection and inflammation. Plasma plasmalogen depletion has also been demonstrated in humans with Parkinson's disease, Alzheimer's disease, lupus and endotoxemia (Dragonas et al., 2009;Fabelo et al., 2011;Ifuku et al., 2012;Hu et al., 2016;Su et al., 2019). In the present study we provide further support for the involvement of plasmalogens in inflammation by providing molecular detail on changes in plasmalogen levels both in plasma and in organs during sepsis as well as SARS-CoV-2 infection. Previous studies showed the 16:0 dimethyl acetal derivative of plasmalogens containing a sixteen-carbon vinyl ether aliphatic group bound to the glycerol backbone is decreased 55% in plasma of twenty geriatric septic patients compared to age-matched healthy subjects (Brosche et al., 2013). In this previous study, data for 18:0 dimethyl acetals were not reported, and changes in 16:0 dimethyl acetal were from patient plasma collected within 24 h of severe sepsis diagnosis.

FIGURE 4 | Alterations in kidney diacyl and plasmalogen phospholipids during rat sepsis. Rats were injected with cecal slurry (sepsis) (n = 5) or vehicle (control) (n = 6) as described in Figure 3. Kidneys were collected 20 h following cecal slurry or vehicle treatment, and lipids were extracted. Plasmalogen levels were quantitated as described in "Materials and Methods." Kidney pPC, pLPC, pPE, PC, and PE are shown in (A-E), respectively. *, **, ***, and **** indicate p < 0.05, 0.01, 0.001, and 0.0001, respectively, for comparisons between control and septic rats. Mean and standard deviation values are indicated for each molecular species and condition.
Human plasma pPC levels are ~8-10-fold greater than pPE levels, and pPC is highly enriched in molecular species containing a sixteen-carbon vinyl ether aliphatic group bound to the glycerol backbone, suggesting the plasma plasmalogens that decreased in geriatric sepsis patients (Brosche et al., 2013) are from pPC pools. In contrast to this previous study, our findings from the MESSI cohort were from patient plasma collected 7 days following ICU admission for sepsis. This difference in time of plasma collection prevents direct comparisons to the previously reported study (Brosche et al., 2013). However, in the present studies pPE molecular species containing 16:0 vinyl ether groups, as well as 16:0 pLPC, were decreased in the human sepsis cohort. Plasma pPE species containing 18:0 vinyl ether groups were also significantly decreased in septic subjects investigated in our study. Future studies should be directed at determining details of plasmalogen loss at 24 h and at examining longitudinal changes in plasmalogen loss. It will also be interesting to compare changes in human plasmalogen molecular species at 24 h to the changes we observed in the rat plasma plasmalogen molecular species 20 h post CS injection. Interestingly, with rat sepsis, plasma plasmalogen loss at 20 h included several pPC and pPE species as well as pLPC. A summary of levels of plasmalogen and diacyl species shows a general downward trend in plasmalogen levels in sepsis, excluding livers of septic rats (Figure 7). In particular, this summary highlights the many differences in changes elicited during sepsis between plasmalogen and diacyl phospholipid levels depending on the tissue and particular phospholipid class. One of the more striking observations is the loss of pPE in plasma in contrast to increases in diacyl PE during sepsis in both humans and rats. The mechanisms responsible for plasma pLPC and pPE loss during sepsis are not known, but several mechanisms seem likely. One mechanism is that loss of plasmalogen is due to oxidative stress during sepsis. We have previously shown plasma 2-chlorofatty acid levels are elevated in human sepsis (Meyer et al., 2017;Amunugama et al., 2021b). Furthermore, in this rat sepsis model there are increased levels of 2-chlorofatty acids (Pike et al., 2020), which are derived from plasmalogens (Albert et al., 2001;Thukkani et al., 2002;Amunugama et al., 2021b). During sepsis, the tissue plasmalogen pool or the specific plasmalogen molecular species targeted by HOCl has not been determined. In this respect it could be speculated that the impressive loss of plasma pLPC, which is overall a small pool of the total plasmalogen, could be responsible for the nanomolar levels of 2-chlorofatty acid observed during sepsis. It is also possible that the loss of plasmalogens is due to the activation of phospholipases. It has been suggested that phospholipase A2-mediated release of arachidonic acid from plasmalogens is important in the production of oxylipids in COVID-19 (Schwarz et al., 2021;Snider et al., 2021).

FIGURE 5 | Alterations in liver diacyl and plasmalogen phospholipids during rat sepsis. Rats were injected with cecal slurry (sepsis) (n = 5) or vehicle (control) (n = 6) as described in Figure 3. Liver was collected 20 h following cecal slurry or vehicle treatment, and lipids were extracted. Plasmalogen levels were quantitated as described in "Materials and Methods." Liver pPC, pLPC, pPE, PC, and PE are shown in (A-E), respectively. *, **, ***, and **** indicate p < 0.05, 0.01, 0.001, and 0.0001, respectively, for comparisons between control and septic rats. Mean and standard deviation values are indicated for each molecular species and condition.

The phospholipase A2 mechanisms may be directly responsible for pPE loss. It is also possible pLPC loss is due to either accelerated use as an acceptor by acyltransferases, leading to conserved levels of pPC despite putative oxidative loss, or tissue uptake during sepsis. Another possibility is that pPE and pLPC decrease as a result of reduced release from the liver and vascular endothelium. In human sepsis, HDL-cholesterol decreases (Vavrova et al., 2016;Tanaka et al., 2019), which may also be due to decreased secretion from the liver. Decreased plasma plasmalogens and increased liver plasmalogens during sepsis are similar to plasmalogen changes in H-Lrpprc mice, a mouse model of the monogenic form of the mitochondrial disease, Leigh syndrome (Ruiz et al., 2019). In H-Lrpprc mice, hepatic Far1 and Agps are also elevated, suggesting decreased plasma plasmalogen levels mediate a feedback system to increase liver plasmalogen biosynthesis. Such a feedback system may also be responsible for elevated plasmalogen levels in livers during sepsis. It will be interesting in future studies to examine Agps and Far1 as well as differences in the levels of the plasmalogen precursors, alkyl ether lipids, in the livers from septic and control rats.
The possibility that pLPC is a circulating precursor to enrich plasmalogens in endothelium is intriguing. Plasmalogen enhancement in isolated cell studies protects cells from oxidative stress (Zoeller et al., 1999). Additionally, several studies have investigated plasmalogen precursors as a potential treatment in inflammatory diseases (Bozelli and Epand, 2021;Paul et al., 2021). Enhancing plasmalogen levels is difficult since dietary consumption of plasmalogens is reduced due to the acidic environment of the gastrointestinal tract. Using acid-stable precursors such as alkyl ether lipids will raise plasmalogen levels over time following desaturation of the alkyl ether bond to the vinyl ether. However, under acute conditions such as sepsis, the conversion of an alkyl ether to plasmalogens likely will be very slow. On the other hand, circulating pLPC already has the vinyl ether bond and lysolipids are rapidly incorporated into cells. It will be important in the future to determine the source of circulating pLPC under physiological conditions as well as during sepsis. It could be envisaged that pLPC is a product of lipoprotein-associated pPC hydrolysis by either secretory phospholipase A 2 or lipoprotein lipase. During sepsis pLPC levels potentially are dependent on a combination of oxidation of pPC or pLPC and pPC hydrolysis. Finally, the role of pLPC during sepsis needs to be further considered as a biomarker of outcomes. Similarly, the role of other plasmalogens, as well as the relationship of plasma plasmalogen levels with changes in plasma 2-chlorofatty acid levels, need to be considered as outcome predictors. The relationship of plasmalogen and chlorinated lipid levels may also allow distinction of changes in these lipids with greater specificity to infection compared to other disease states associated with only decreased plasma plasmalogen levels with the exception of lupus (Dragonas et al., 2009;Fabelo et al., 2011;Ifuku et al., 2012;Mahieu et al., 2014;Hu et al., 2016;Paul et al., 2019;Su et al., 2019).
The studies herein show plasmalogen loss during sepsis. However, there are several limitations to these studies. In the human studies we analyzed differences between septic humans and healthy control humans. Our healthy cohort average age was thirty-eight while the sepsis group was sixty. To overcome this limitation, we selected the oldest individuals (n = 7) in the healthy group and assessed differences in this control subset compared to the larger group of septic subjects (Figures 1D-F, 2; Table 1). These additional analyses indicated plasma pLPC and pPE levels were reduced in the sepsis cohort when compared to this age-aligned control subgroup. Another limitation is that we have no data on the sex of individuals in our healthy cohort, while our sepsis cohort was comprised of 40% females. Our rat studies focused on changes occurring only in male rats and 20 h following cecal slurry injection. Thus, rat specimens and human specimens were collected at different times, and sex differences were not a parameter in the rat studies. It should also be appreciated that plasma levels of plasmalogens were considerably different in healthy controls due to the inherent differences in plasmalogen levels in man versus rat. Nevertheless, both human and rat sepsis led to decreases in plasma plasmalogen levels, and the rat studies afforded the opportunity to investigate changes in plasmalogen levels in the liver and kidney during sepsis. There were also limitations to the SARS-CoV-2 infection studies when comparisons are made to the rat and human sepsis studies. The SARS-CoV-2 infection studies involved a viral infection elicited by airway inoculation of transgenic mice expressing the hACE2 receptor in all epithelial cells. Humans do not express ACE2 in all epithelial cells. Furthermore, these studies were performed only in female mice due to availability of genotyped mice for this study. Future studies are needed to consider sex as a parameter in both SARS-CoV-2 infected mice and rat cecal slurry sepsis. In contrast to the unknown onset time of human sepsis and the known time of CS injection, mouse infections with SARS-CoV-2 leading to pulmonary inflammation require time for viral replication to elicit injury, which is typically 3-5 days. While our human and rat sepsis studies involved systemic infection, SARS-CoV-2 infection of K18 mice was initially localized primarily to the respiratory tree. Infection led to robust increases in the expression of pro-inflammatory cytokines. The loss of plasmalogen in the lung during SARS-CoV-2 infection is likely the result of oxidative stress. We did not observe a loss in DPPC in the lung of infected mice. The chemical makeup of plasmalogens compared to DPPC provides a contrast in susceptibility to oxidative stress. The plasmalogen vinyl ether bond is a target for oxidation, while the saturated fatty acids of DPPC are very stable under oxidative stress. Similar to findings with severe COVID-19 patients (Schwarz et al., 2021;Snider et al., 2021), we detected decreases in plasma plasmalogens in infected K18 mice. This is the first demonstration of the loss of plasmalogens at a molecular species level in human sepsis. Furthermore, we show pLPC loss in both human and rodent sepsis. It is possible that plasma pLPC is a critical lipid to maintain endothelial plasmalogen levels under oxidative stress associated with sepsis.
The demonstration of plasmalogen loss during SARS-CoV-2 infection further highlights the nature of plasmalogen loss during the oxidative stress associated with infectious disease. The role of plasmalogens as biomarkers of outcomes in sepsis and COVID-19 needs to be explored, as does the potential protective role of plasmalogens during infectious disease.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the University of Pennsylvania institutional review board and Saint Louis University institutional review board. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by the Institutional Animal Care and Use Committee at Saint Louis University.
AUTHOR CONTRIBUTIONS
DP performed experimental studies and data analysis and prepared the manuscript. RM performed experimental studies and data analysis and contributed to final manuscript preparation. EG performed experimental studies and data analysis and contributed to final manuscript preparation. CA performed experimental studies and data analysis and contributed to final manuscript preparation. DH contributed to specimen collection and final manuscript preparation. MS contributed to clinical study data collection, statistical analyses, and final manuscript preparation. NM contributed to clinical study data collection, statistical analyses, and final manuscript preparation. AP performed data analysis and contributed to final manuscript preparation. DF was responsible for oversight of all aspects of the studies, manuscript preparation, and the final manuscript.
FUNDING
This study was supported (in part) by research funding from the National Institutes of Health R01 GM-115553 and S10OD025246 to DF. Clinical samples and patient phenotyping were funded by NIH HL137006 and HL137915 to NM. | 2022-06-08T15:12:00.397Z | 2022-06-06T00:00:00.000 | {
"year": 2022,
"sha1": "b98e5ef20944e9457604dd89ba16d3b7ff96862c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2022.912880/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df5ea566be2f6a44a699734e23ab2b9e4f5fee6f",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": []
} |
211010969 | pes2o/s2orc | v3-fos-license | A power Schur complement Low-Rank correction preconditioner for general sparse linear systems
An effective power-based parallel preconditioner is proposed for general large sparse linear systems. The preconditioner combines a power series expansion method with low-rank correction techniques, where the Sherman-Morrison-Woodbury formula is utilized. A matrix splitting of the Schur complement is proposed to expand the power series. The number of terms used in the power series expansion controls the approximation accuracy of the preconditioner to the inverse of the Schur complement. To construct the preconditioner, graph partitioning is invoked to reorder the original coefficient matrix, leading to a special block two-by-two matrix whose two off-diagonal submatrices are block diagonal. The interface variables are obtained by solving a linear system whose coefficient matrix is the Schur complement. For the interior variables, one only needs to solve a block diagonal linear system, which can be performed efficiently in parallel. Various numerical examples are provided to illustrate the efficiency of the proposed preconditioner.
1. Introduction. Consider the solution of the following linear system
$$Ax = b, \qquad (1.1)$$
where $A \in \mathbb{R}^{n\times n}$ is a large sparse matrix and $b \in \mathbb{R}^n$ is a given vector. Preconditioned Krylov subspace methods are often used for solving such systems, see, e.g., [20]. Among the most popular general-purpose preconditioners are the Incomplete LU (ILU) techniques [12,19]. However, ILU often fails, especially in situations when the matrix is highly indefinite [17,23]. In addition, due to their sequential nature, ILU preconditioners will result in poor performance on massively parallel high-performance computers. Algebraic multigrid (AMG) methods constitute another class of popular techniques for solving problems arising from some discretized elliptic PDEs. Often, AMG also fails for indefinite problems. Finally, sparse approximate inverse preconditioners [3,6,10,13] were developed to overcome these shortcomings but were later abandoned by practitioners due to their high memory demand.
Recently, a new class of approximate inverse preconditioners based on low-rank approximations has been proposed. They include the Multilevel Low-Rank (MLR) preconditioner [15], the Schur complement low-rank (SLR) preconditioner [16], the Multilevel Schur complement Low-Rank (MSLR) preconditioner [22] and the Generalized Multilevel Schur complement Low-Rank (GMSLR) preconditioner [8]. These preconditioners approximate the Schur complement or its inverse by exploiting various low-rank corrections, and because they are essentially approximate inverse methods they tend to perform rather well on indefinite linear systems. Similar ideas have also been exploited in [9]. A related class of methods is the class of rank structured matrix methods, which include the HODLR-matrix [1], the $\mathcal{H}$-matrix [2,4], the $\mathcal{H}^2$-matrix [11] and hierarchically semiseparable (HSS) matrices [5,18,24]. These methods partition the coefficient matrix A into several smaller blocks and approximate certain off-diagonal blocks by low-rank matrices. These techniques have recently been applied to precondition sparse linear systems, resulting in some rank structured sparse preconditioners. We refer the reader to [25,26,27,28] for details.
In this paper, we present a method that combines low-rank approximation methods with a simple Neumann polynomial expansion technique [20, Section 12.3.1] aimed at improving robustness. We call the resulting method the Power-Schur complement Low-Rank (PSLR) preconditioner. A straightforward way to apply the Neumann polynomial preconditioning technique to the Schur complement S is to approximate $(\omega S)^{-1}$ by an m-term polynomial expansion [20, Section 12.3.1]
$$(\omega S)^{-1} \approx \left(I + N + N^2 + \cdots + N^m\right)D^{-1}, \qquad (1.2)$$
where $\omega$ is a scaling parameter, D is the (block) diagonal of S and $N = I - \omega D^{-1}S$. However, scheme (1.2) has a number of disadvantages. For example, it is difficult to choose an optimal value for the parameter $\omega$. In addition, since the matrix series in (1.2) converges only when $\rho(N) < 1$, the approximation accuracy will improve as m increases only under this condition, which may not be satisfied for a general matrix. Moreover, even if $\rho(N) < 1$, (1.2) is only a rough approximation to $S^{-1}$ when m is small, and using a large m may become computationally expensive. The PSLR preconditioner seamlessly combines the power series expansion with a few low-rank correction techniques and can overcome these shortcomings. We summarize below the main advantages of the PSLR preconditioner over existing low-rank approximate inverse preconditioners.
1. Improved robustness. When $\rho(N) > 1$, the classical Neumann series defined by (1.2) diverges and the approximation accuracy deteriorates as m increases. However, low-rank correction techniques can be invoked to address this issue. More specifically, we exploit low-rank correction techniques as a form of deflation to move those eigenvalues of N with modulus larger than 1 closer to 0. The goal is to make the series (1.2) converge for the "deflated" Schur complement.
2. Enhanced decay property. The performance of each of the three previously developed methods, SLR, MSLR and GMSLR, depends on the eigenvalue decay property associated with the Schur complement inverse $S^{-1}$. If the decay rate is slow, these preconditioners are not effective. On the other hand, the PSLR preconditioner can control the eigenvalue decay rate of the matrix to be approximated by adjusting the number of expansion terms m in (1.2), and this can significantly improve performance.
3. High parallelism. The low-rank correction terms used in the PSLR preconditioner can be computed by solving several linear systems with coefficient matrices that are block diagonal. This results in a much more efficient treatment than with the MSLR and GMSLR preconditioners, since the ILU factorizations and the resulting triangular solves can be applied efficiently in parallel. In addition, most of the important matrix-vector products of PSLR involve block diagonal matrices or dense matrices, leading to a high degree of parallelism in both the construction and the application stage.
4. Suitability for general matrices. PSLR is quite effective in handling general sparse problems. Unlike SLR and MSLR, it is not restricted to symmetric systems. Numerical experiments in Section 4 illustrate that the PSLR preconditioner outperforms the other low-rank approximation based preconditioners on various tests.
The paper is organized as follows. Section 2 is a brief review of graph partitioning, which will be used to reorder the original coefficient matrix A. Section 3 shows how to build the PSLR preconditioner by exploiting low-rank approximations and a power series expansion associated with the inverse of a certain Schur complement S.
A spectral analysis for the corresponding preconditioned matrix is also developed. Section 4 reports on numerical experiments to illustrate the efficiency and robustness of the PSLR preconditioner. Concluding remarks are stated in Section 5.
2. Background: graph partitioning. Building the PSLR preconditioner begins with a reordering of the coefficient matrix A with the help of a graph partitioner [16,20]. Specifically, in this paper, we invoke any vertex-based (aka 'edge separation') partitioner to reorder A. As there is no ambiguity, we will still use A and b to denote the reordered matrix and right-hand side, respectively.
Let s be the number of subdomains used in the partitioning. When the variables are labeled by subdomains and the interface variables are labeled last, the permuted linear system of (1.1) can be rewritten as
$$\begin{pmatrix} B & E \\ F & C \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}, \qquad (2.1)$$
where $B \in \mathbb{R}^{p\times p}$, $E \in \mathbb{R}^{p\times q}$ and $F \in \mathbb{R}^{q\times p}$ with $p + q = n$. The submatrices B and E have the block diagonal structures
$$B = \operatorname{diag}(B_1, \ldots, B_s), \qquad E = \operatorname{diag}(E_1, \ldots, E_s),$$
F has the same block structure as that of $E^T$, and $C = (C_{ij})_{i,j=1}^{s}$ with diagonal blocks $C_i = C_{ii}$. For each subdomain i, $B_i$ denotes the matrix corresponding to the interior variables and $C_i$ represents the matrix associated with local interface variables; the matrices $E_i$ and $F_i$ denote the couplings to local interface variables and the couplings from local interface variables, respectively. A matrix $C_{ij}$ is a nonzero matrix if and only if some interface variables of subdomain i are coupled with some interface variables of subdomain j.
After it is reordered, the solution to Equation (2.1) can be found by solving the two intermediate problems
$$Sy = g - FB^{-1}f, \qquad Bx = f - Ey, \qquad (2.2)$$
where $S = C - FB^{-1}E$ is the Schur complement of the coefficient matrix in (2.1).
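To make the two-step procedure concrete, the following sketch (our own illustration, not the paper's code) solves the reordered system via (2.2) with SciPy. For simplicity it forms S explicitly; a practical implementation would avoid forming S and instead precondition it, which is the subject of Section 3.

```python
import numpy as np
import scipy.sparse.linalg as spla

def schur_solve(B, E, F, C, f, g):
    """Solve [[B, E], [F, C]] [x; y] = [f; g] via the two steps in (2.2)."""
    solve_B = spla.factorized(B.tocsc())        # factor B once; B is block diagonal
    BinvE = solve_B(E.toarray())                # B^{-1} E, kept dense in this demo
    S = C.toarray() - F @ BinvE                 # Schur complement S = C - F B^{-1} E
    y = np.linalg.solve(S, g - F @ solve_B(f))  # first equation in (2.2)
    x = solve_B(f - E @ y)                      # second equation in (2.2)
    return x, y
```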
Since B and E are block diagonal, the second equation in (2.2) can be solved efficiently once the vector y becomes available. Many efforts have been devoted to developing preconditioners for solving linear systems associated with S in the first equation. Algebraic Recursive Multilevel Solvers (ARMS) form a class of multilevel ILU-type preconditioners [20,21] that consist of dropping small entries of S before applying an ILU factorization to it. The more recently developed SLR preconditioner [16] approximates $S^{-1}$ by the sum of $C^{-1}$ and a low-rank correction term. Here the low-rank correction term is computed by exploiting the eigenvalue decay property of $S^{-1} - C^{-1}$. A relative of SLR is the Multilevel Schur Low-Rank (MSLR) preconditioner [22], which approximates $S^{-1}$ by applying the same idea as in SLR recursively in order to address the scalability issue. Finally, GMSLR [8] was developed as a generalization of MSLR to nonsymmetric systems.
3. The PSLR preconditioner. In this section, we first derive a power series expansion of $S^{-1}$, and then discuss low-rank correction techniques whose goal is to improve its approximation accuracy.
3.1. Power series expansion of the inverse of the Schur complement.
The proposed power series expansion is applied to a splitting form of S rather than S itself. Specifically, we first write the Schur complement S as the difference of two matrices:
$$S = C_0 - E_s, \qquad (3.1)$$
where
$$C_0 = \operatorname{diag}(C_1, \ldots, C_s), \qquad E_s = (C_0 - C) + FB^{-1}E. \qquad (3.2)$$
Then we have
$$S^{-1} = \left(I - C_0^{-1}E_s\right)^{-1}C_0^{-1}.$$
Next, we simply apply an (m+1)-term power series expansion of $(I - C_0^{-1}E_s)^{-1}$ to obtain the following approximation to $S^{-1}$:
$$S^{-1} \approx M_m^{-1} := \sum_{j=0}^{m}\left(C_0^{-1}E_s\right)^{j}C_0^{-1}. \qquad (3.3)$$
One immediate advantage of using (3.3) is that the application of $M_m^{-1}$ to a vector only involves linear system solutions associated with $C_0$ and B, as well as matrix-vector multiplications associated with E and F. The block diagonal structures of $C_0$, B, E and F make these operations extremely efficient.
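The following sketch illustrates how (3.3) can be applied to a vector using only solves with the block diagonal matrices B and $C_0$ and products with C, E and F. The function names and the accumulation loop are our own; they illustrate the operation count, not the paper's implementation.

```python
import scipy.sparse.linalg as spla

def make_power_series_apply(B, E, F, C, C0, m):
    """Return a function computing M_m^{-1} v = sum_{j=0}^m (C0^{-1} E_s)^j C0^{-1} v."""
    solve_B = spla.factorized(B.tocsc())    # block diagonal: cheap, parallelizable
    solve_C0 = spla.factorized(C0.tocsc())  # block diagonal part of C

    def apply_Es(v):                        # E_s v = (C0 - C) v + F B^{-1} (E v), per (3.2)
        return (C0 - C) @ v + F @ solve_B(E @ v)

    def apply(v):
        t = solve_C0(v)                     # j = 0 term: C0^{-1} v
        out = t.copy()
        for _ in range(m):                  # accumulate (C0^{-1} E_s)^j C0^{-1} v
            t = solve_C0(apply_Es(t))
            out += t
        return out

    return apply
```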
Using results with standard norms, it is straightforward to prove the following proposition, which analyzes the approximation accuracy of (3.3).
Proposition 3.1. If the spectral radius of $C_0^{-1}E_s$ satisfies $\rho(C_0^{-1}E_s) < 1$, then
$$S^{-1} = M_m^{-1} + \Delta_m,$$
where the error matrix $\Delta_m = \left(C_0^{-1}E_s\right)^{m+1}S^{-1}$ tends to zero as m increases.
A large class of matrices satisfy the condition $\rho(C_0^{-1}E_s) < 1$ required in Proposition 3.1. For example, we show in the next lemma that $\rho(C_0^{-1}E_s) < 1$ holds whenever A is symmetric positive definite (SPD) and its (2,2)-block C is diagonally dominant.
Lemma 3.2. If A is SPD, then $\lambda(C_0^{-1}E_s) < 1$. Moreover, if the (2,2)-block C of A is diagonally dominant, then
$$-1 < \lambda(C_0^{-1}E_s) < 1.$$
Here $\lambda(\cdot)$ denotes any eigenvalue of a matrix.
Proof. Since A is SPD, S and $C_0$ are also SPD, and $C_0^{-1}E_s = I - C_0^{-1}S$ is similar to $I - C_0^{-1/2}SC_0^{-1/2}$. Since $C_0^{-1/2}SC_0^{-1/2}$ is SPD, its eigenvalues are positive, so that
$$\lambda(C_0^{-1}E_s) = 1 - \lambda\left(C_0^{-1/2}SC_0^{-1/2}\right) < 1, \qquad (3.9)$$
and this shows that $\lambda(C_0^{-1}E_s) < 1$. Now we prove the second part of this lemma. Let $C_g = C - C_0$, which is the matrix C stripped of its diagonal blocks, and note that $C_0 - C_g = 2C_0 - C$. Then we have
$$2I - C_0^{-1}S = C_0^{-1}\left(2C_0 - S\right) = C_0^{-1}\Phi, \qquad \Phi := (C_0 - C_g) + FB^{-1}E,$$
which is similar to $C_0^{-1/2}\Phi C_0^{-1/2}$. Since C is a diagonally dominant matrix, the matrix $C_0 - C_g$ is also diagonally dominant. This, together with the positive semidefiniteness of $FB^{-1}E$, results in the symmetric positive definiteness of $\Phi$. Hence, the eigenvalues of $2I - C_0^{-1}S$ are all positive, leading to
$$\lambda(C_0^{-1}S) < 2.$$
This along with (3.9) yields the desired result: $-1 < \lambda(C_0^{-1}E_s) < 1$.
As an example, we depict the eigenvalues of $C_0^{-1}E_s$ in Figure 3.1 for a 3D discretized Laplacian matrix A on a $20^3$ grid with the number of subdomains set to s = 5. It is easy to see that the absolute values of all the eigenvalues of $C_0^{-1}E_s$ are smaller than 1.
3.2. Low-rank approximations of $S^{-1}$. The power series expansion of $S^{-1}$ in Section 3.1 only provides a rough approximation to $S^{-1}$, especially when m is small and/or $\rho(C_0^{-1}E_s)$ is only slightly smaller than 1. In this section, we will consider some low-rank correction techniques to improve the accuracy of this approximation. In addition, we will also consider the case when $\rho(C_0^{-1}E_s) > 1$. Based on the identity
$$(I - X)\left(\sum_{j=0}^{m}X^{j}\right) = I - X^{m+1}, \qquad X := C_0^{-1}E_s, \qquad (3.12)$$
we get
$$(I - X)^{-1} = \left(I - X^{m+1}\right)^{-1}\sum_{j=0}^{m}X^{j},$$
which leads to
$$S^{-1} = \left(I - E_{rr}(m)\right)^{-1}M_m^{-1}, \qquad E_{rr}(m) := \left(C_0^{-1}E_s\right)^{m+1}. \qquad (3.15)$$
Here, we assume $I - E_{rr}(m)$ is nonsingular. Equation (3.15) provides another way to approximate $S^{-1}$. If an $r_k$-step Arnoldi procedure is performed on $E_{rr}(m)$, then $E_{rr}(m)$ can be approximated by
$$E_{rr}(m) \approx V_{r_k}H_{r_k}V_{r_k}^{T}, \qquad (3.16)$$
where $V_{r_k} \in \mathbb{R}^{q\times r_k}$ has orthonormal columns and $H_{r_k} = V_{r_k}^{T}E_{rr}(m)V_{r_k} \in \mathbb{R}^{r_k\times r_k}$ is an upper Hessenberg matrix whose eigenvalues can be used to approximate the largest eigenvalues of $E_{rr}(m)$. For a given m, it can be justified that the Frobenius norm $\|E_{rr}(m) - V_{r_k}H_{r_k}V_{r_k}^{T}\|_F$ decreases monotonically as $r_k$ increases [20]. As a result, $V_{r_k}H_{r_k}V_{r_k}^{T}$ approximates $E_{rr}(m)$ more accurately as $r_k$ increases. Combining (3.15) with (3.16) gives rise to
$$S^{-1} \approx \left(I - V_{r_k}H_{r_k}V_{r_k}^{T}\right)^{-1}M_m^{-1}. \qquad (3.17)$$
In the above process, we utilize the Sherman-Morrison-Woodbury formula to derive the expression $\left(I - V_{r_k}H_{r_k}V_{r_k}^{T}\right)^{-1} = I + V_{r_k}G_{r_k}V_{r_k}^{T}$ with $G_{r_k} = (I - H_{r_k})^{-1} - I$. Thus, the final approximation to $S^{-1}$ takes the form
$$S_{app}^{-1} = \left(I + V_{r_k}G_{r_k}V_{r_k}^{T}\right)M_m^{-1}. \qquad (3.18)$$
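A sketch of how this correction could be computed and applied is given below. The Arnoldi routine and the helper names (`apply_X`, `solve_series`) are our own hypothetical choices; a production code would use the ILU-based block solves described later rather than random-start dense Arnoldi on an explicit operator.

```python
import numpy as np

def arnoldi(apply_op, n, rk, rng=np.random.default_rng(0)):
    """rk-step Arnoldi on the operator v -> apply_op(v); returns V (orthonormal) and H."""
    V = np.zeros((n, rk + 1)); H = np.zeros((rk + 1, rk))
    v = rng.standard_normal(n); V[:, 0] = v / np.linalg.norm(v)
    for j in range(rk):
        w = apply_op(V[:, j])
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # lucky breakdown
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :rk], H[:rk, :rk]

def make_pslr_schur_apply(apply_X, solve_series, n, m, rk):
    """apply_X(v) = C0^{-1} E_s v; solve_series(v) = M_m^{-1} v (see the earlier sketch)."""
    def apply_Err(v):                        # Err(m) v = (C0^{-1} E_s)^{m+1} v, per (3.15)
        for _ in range(m + 1):
            v = apply_X(v)
        return v
    V, H = arnoldi(apply_Err, n, rk)
    G = np.linalg.inv(np.eye(H.shape[0]) - H) - np.eye(H.shape[0])  # G = (I-H)^{-1} - I
    def apply(v):                            # S_app^{-1} v = (I + V G V^T) M_m^{-1} v, (3.18)
        t = solve_series(v)
        return t + V @ (G @ (V.T @ t))
    return apply
```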
3.2.1. Approximation accuracy analysis.
In this section, we quantify the approximation accuracy of $S_{app}^{-1}$ in terms of m and $r_k$. The next theorem first shows the relation between the eigenvalue decay rate of $E_{rr}(m)$ and the number of terms m + 1 used in the power series expansion.
Theorem 3.3. For any matrix A, the matrix $E_{rr}(m)$ in (3.15) can be rewritten as
$$E_{rr}(m) = \left(E_{rr}(0)\right)^{m+1}.$$
Proof. Combining (3.1) with the definition of $E_{rr}(m)$ in (3.15), we have
$$E_{rr}(m) = \left(C_0^{-1}E_s\right)^{m+1} = \left(E_{rr}(0)\right)^{m+1}.$$
This completes the proof.
Theorem 3.3 shows that the eigenvalues of $E_{rr}(m)$ decay faster as m increases. In fact, the eigenvalue decay rate of $E_{rr}(m)$ is m + 1 times faster than that of $E_{rr}(0)$. We then consider two indefinite matrices. The first one is the 3D shifted discretized Laplacian matrix (Figure 3.4) and the second one is the nonsymmetric young1c matrix from the SuiteSparse collection [7] (Figure 3.5). The indefiniteness causes the spectral radius of $E_sC_0^{-1}$ to be greater than 1 in both tests. But as can be seen from Figures 3.4-3.5, only a few eigenvalues have modulus greater than 1. As a result, the majority of the eigenvalues still get clustered around the origin as m increases. Based on this property, we can show that the approximation accuracy of (3.18) improves as m increases under mild conditions. In contrast, the classical Neumann series expansion (3.3) diverges in this case. We first prove an upper bound on the relative approximation accuracy of $S_{app}^{-1}$ with respect to $S^{-1}$ in the next proposition.
Proposition 3.4. For any matrix norm $\|\cdot\|$, the approximation accuracy of $S_{app}^{-1}$ to $S^{-1}$ satisfies the following inequality:
$$\frac{\|S^{-1} - S_{app}^{-1}\|}{\|S^{-1}\|} \le \|X(m, r_k)\|\,\|Z(r_k)^{-1}\|, \qquad (3.20)$$
where
$$X(m, r_k) = E_{rr}(m) - V_{r_k}H_{r_k}V_{r_k}^{T}, \qquad Z(r_k) = I - V_{r_k}H_{r_k}V_{r_k}^{T}. \qquad (3.21)$$
Proof. From (3.15) and (3.17), we have $S_{app}^{-1} = Z(r_k)^{-1}M_m^{-1} = Z(r_k)^{-1}\left(I - E_{rr}(m)\right)S^{-1}$, so that
$$S^{-1} - S_{app}^{-1} = Z(r_k)^{-1}\left(Z(r_k) - I + E_{rr}(m)\right)S^{-1} = Z(r_k)^{-1}X(m, r_k)S^{-1}.$$
Using a matrix norm, this yields $\|S^{-1} - S_{app}^{-1}\| \le \|Z(r_k)^{-1}\|\,\|X(m, r_k)\|\,\|S^{-1}\|$, from which (3.20) follows.
Next, we provide two numerical experiments to illustrate Proposition 3.4. The Frobenius norm is employed for both tests, and we denote by $\Delta(m, r_k)$ the upper bound $\|X(m, r_k)\|_F\,\|Z(r_k)^{-1}\|_F$ in Proposition 3.4. The first test is a 3D Laplacian matrix and the second one is a shifted 3D Laplacian matrix. Both matrices have size $2{,}000 \times 2{,}000$. In the tests, we fix $r_k = 15$ and s = 5 and change the number of terms used in the power series expansion from m = 3 to m = 5. For the Laplacian matrix, we have $\Delta(3, 15) = 0.49$ when m = 3 and $\Delta(5, 15) = 0.15$ when m = 5. For the shifted Laplacian matrix, $E_{rr}(m)$ has 11 eigenvalues with modulus larger than 1. Since $r_k$ is larger than 11, when m = 3 and m = 5 we have $\Delta(3, 15) = 0.72$ and $\Delta(5, 15) = 0.665$, respectively. These two tests verify that the approximation becomes more accurate as m increases, as long as the rank $r_k$ is larger than the number of eigenvalues of $E_{rr}$ with modulus greater than 1. From the results of the above two specific problems, we can see that the upper bound $\Delta(m, r_k)$ is in general smaller for SPD matrices than for indefinite matrices.
3.2.2. Spectral analysis of the preconditioned Schur complement. The preconditioning effect of the proposed PSLR preconditioner depends directly on the eigenvalue distribution of $S_{app}^{-1}S$. When the eigenvalues of $S_{app}^{-1}S$ are clustered or close to one, one can expect fast convergence for Krylov subspace methods.
From (3.15) and (3.17), we have
$$S_{app}^{-1}S = Z(r_k)^{-1}\left(I - E_{rr}(m)\right) = I - Z(r_k)^{-1}X(m, r_k), \qquad (3.23)$$
where $Z(r_k)$ and $X(m, r_k)$ are the matrices defined by (3.21). Obviously, it follows from (3.23) that $S_{app}^{-1}S$ is similar to $I - X(m, r_k)Z(r_k)^{-1}$, which implies that
$$\lambda\left(S_{app}^{-1}S\right) = 1 - \lambda\left(Z(r_k)^{-1}X(m, r_k)\right).$$
When the eigenvalues of $X(m, r_k)$ (or $Z(r_k)^{-1}X(m, r_k)$) are close to zero, the eigenvalues of $S_{app}^{-1}S$ are clustered around 1. To illustrate the influence of the approximation accuracy of $S_{app}^{-1}$ on the eigenvalue distribution of $S_{app}^{-1}S$, we display the eigenvalues of $S_{app}^{-1}S$ for the same 3D Laplacian matrix presented in Section 3.2.1 with n = 2000, $r_k = 15$ and s = 5 in Figure 3.6. The numbers of terms used in the power series expansion are m = 3 and m = 5 for the two cases, respectively. As can be seen from Figure 3.6, the eigenvalues in the right subfigure are more clustered than those in the left subfigure. This further illustrates the fact that the approximation improves as m increases while the rank $r_k$ is fixed. For this specific problem, the PSLR preconditioned GMRES method converges in 8 and 17 iterations, respectively, when m is set to 5 and 3, where the iteration is stopped when the initial residual is reduced by a factor of $10^8$. As another illustration, Figure 3.7 depicts the eigenvalues of $S_{app}^{-1}S$ for the same shifted 3D Laplacian matrix presented in Section 3.2.1 with n = 2000, $r_k = 15$ and s = 5. For this test, the PSLR preconditioned GMRES method with m = 5 converges in 13 iterations, and the iteration count increases to 24 when m is reduced to 3, using the same stopping criterion as earlier.
3.3. Construction and application of the PSLR preconditioner.
This section provides a short description of the construction of the PSLR preconditioner and its application. Recall from (2.2) that the application of the PSLR preconditioner to a vector b follows the two steps
$$y = S_{app}^{-1}\left(g - FB^{-1}f\right), \qquad x = B^{-1}\left(f - Ey\right), \qquad (3.24)$$
where b is partitioned into $(f^T, g^T)^T$ according to the sizes of B and C. The scheme (3.24) requires three linear system solutions, two associated with B and one associated with $S_{app}$. Applying $S_{app}^{-1}$ to a vector based on (3.18) involves solving m + 1 linear systems associated with $C_0$. Since both B and $C_0$ are block diagonal, the construction of the PSLR preconditioner starts with the ILU factorization of these diagonal blocks. The computed ILU factors can then be used in the Arnoldi procedure to compute $V_{r_k}$ and $H_{r_k}$ associated with $S_{app}^{-1}$. The construction algorithm is summarized in Algorithm 1.
The computational cost of the PSLR construction process is dominated by the ILU factorizations and the triangular solves involved in applying the operator $E_{rr}(m)$ in the Arnoldi process. Since these operations can be performed independently among the different diagonal blocks in B and $C_0$, Algorithm 1 is highly parallelizable.
Algorithm 2 describes the application of the PSLR preconditioner to a vector b. Besides the linear system solutions associated with B and $C_0$, the remaining operations are matrix-vector multiplications with the sparse matrices E and F and the dense matrices $V_{r_k}$ and $G_{r_k}$. Since both E and F are in block diagonal form, this application algorithm is also highly parallelizable.
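As a usage illustration (again with our own hypothetical names, built on the sketches above), the application of Algorithm 2 can be wrapped as a SciPy LinearOperator and passed as the preconditioner M to GMRES:

```python
import numpy as np
import scipy.sparse.linalg as spla

def make_pslr_preconditioner(B, E, F, solve_Sapp, n, p):
    """PSLR apply per (3.24): y = S_app^{-1}(g - F B^{-1} f), then x = B^{-1}(f - E y)."""
    solve_B = spla.factorized(B.tocsc())
    def apply(b):
        f, g = b[:p], b[p:]
        y = solve_Sapp(g - F @ solve_B(f))
        x = solve_B(f - E @ y)
        return np.concatenate([x, y])
    return spla.LinearOperator((n, n), matvec=apply)

# usage sketch: x, info = spla.gmres(A, b, M=make_pslr_preconditioner(B, E, F, sapp, n, p))
```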
4. Numerical examples. In this section, we report numerical experiments that show the efficiency and robustness of the PSLR preconditioner. The test problems include symmetric and nonsymmetric cases. The PSLR preconditioner was implemented in C++ and compiled with the -O3 optimization option. All the experiments were run on a single node of the Mesabi Linux cluster at the Minnesota Supercomputing Institute, which has 64 GB of memory and two Intel 2.5 GHz Haswell processors with 12 cores each. The PartGraphKway routine from the METIS [14] package was used to partition the matrices. BLAS and LAPACK routines from the Intel Math Kernel Library (MKL) were used to enhance performance on multiple cores. Thread-level parallelism was realized with OpenMP. The preconditioner construction time consists of the incomplete LU factorizations of the matrices $B_i$, $C_i$, i = 1, 2, ..., s, and the computation of $V_{r_k}$ and $G_{r_k}$. In the actual computations, the right-hand side b was formed as b = Ax with x a random vector, and the initial guess was always taken as the zero vector in the Krylov subspace methods.
For the SPD problems, we compare the PSLR preconditioner with the MSLR preconditioner [22] and the incomplete Cholesky factorization preconditioner with threshold dropping (ICT), using the conjugate gradient (CG) method as the accelerator. For general problems, we compare PSLR with the GMSLR preconditioner and the incomplete LU factorization preconditioner with threshold dropping (ILUT), using GMRES [20] as the accelerator. BLAS and LAPACK routines from the Intel Math Kernel Library (MKL) were used in the incomplete factorizations and in the MSLR and GMSLR preconditioners. The MSLR and GMSLR preconditioners were also parallelized with OpenMP.
In the rest of this section, the following notation is used:
• its: the number of iterations of GMRES or CG needed to reduce the initial residual norm by a factor of $10^8$. An "F" indicates that GMRES or CG failed to converge within 500 iterations;
• o-t: wall clock time to reorder the matrix;
• p-t: wall clock time for the preconditioner construction;
• i-t: wall clock time for the iteration procedure. If GMRES or CG fails to converge within 500 iterations, this time is denoted by "-";
• t-t: total wall clock time, i.e., the sum of the preconditioner construction time and the iteration time;
• $r_k$: the rank used in the low-rank correction terms;
• m: the number of terms used in the power series expansion;
• fill (total): the total fill-factor, defined as nnz(prec)/nnz(A);
• fill (ILU): the fill-factor from the ILU decompositions, defined as nnz(ILU)/nnz(A);
• fill (Low-rank): the fill-factor from the low-rank correction terms, defined as nnz(LRC)/nnz(A).
Here nnz(X) denotes the number of nonzero entries of a matrix X. Moreover, nnz(prec) = nnz(ILU) + nnz(LRC).
Note that we employ the notation $nnz(V_{r_k})$ and $nnz(G_{r_k})$ for dense matrices. The term fill-factor, which is meant to reflect memory usage, mixes traditional fill-in (ILU) with the additional memory needed to store the (dense) low-rank correction matrices.
4.1. Test 1.
Consider the following symmetric problem:
$$-\Delta u - \beta u = f \ \text{in} \ \Omega. \qquad (4.1)$$
Here $\Omega = [0, 1]^3$, and the PDE was discretized by the 7-point stencil. The discretized operator is equivalent to shifting the discretized Laplacian by a shift of $h^2\beta I$ for a mesh spacing of h.
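For reference, a minimal construction of this test matrix is sketched below; it assumes homogeneous Dirichlet boundary conditions and lexicographic ordering, which the paper does not state explicitly.

```python
import scipy.sparse as sp

def shifted_laplacian_3d(nx, beta):
    """7-point discrete Laplacian on an nx^3 grid, shifted by h^2 * beta on the diagonal."""
    h = 1.0 / (nx + 1)                                   # Dirichlet mesh spacing (assumed)
    L1 = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(nx, nx), format='csr')
    I = sp.identity(nx, format='csr')
    L = (sp.kron(sp.kron(L1, I), I)                      # Kronecker sum of 1D stencils
         + sp.kron(sp.kron(I, L1), I)
         + sp.kron(sp.kron(I, I), L1))
    return L - (h ** 2) * beta * sp.identity(nx ** 3)    # indefinite for large enough beta
```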
4.1.1. Effect of s.
In this subsection, we look into the effect of the number s of subdomains on the effectiveness of the PSLR preconditioner. We solve (4.1) with a shift of 0.05 on a $50^3$ grid by the GMRES-PSLR method. The resulting coefficient matrix is indefinite. The number of terms used in the power series expansion is m = 3, and the rank for the low-rank correction terms is fixed at 15. Table 4.1 reports the fill-factor, iteration counts and CPU time for solving (4.1) with shift = 0.05 on a $50^3$ grid by the GMRES-PSLR method (m = 3); the rank in the low-rank correction terms is 15, and the dropping threshold in the incomplete LU factorizations is $10^{-2}$.
We can see from Table 4.1 that the fill-factor from the ILU decompositions decreases monotonically while the fill-factor from the low-rank correction terms increases as s increases from 5 to 55. This is because the size of each $B_i$ and $C_i$, i = 1, 2, ..., s, is smaller for a larger s, which reduces the storage and the computational cost of the ILU factorizations. A larger s also results in a larger Schur complement S, which implies that the matrix $V_{r_k}$ has more rows. That is why the fill-factor from the low-rank correction terms increases when s becomes larger. These experiments also illustrate the fact that the performance of the PSLR preconditioner does not vary much with the number of subdomains used.
4.1.2. Effect of m. The number of terms used in the power series expansion is also an important factor, as was previously discussed. We investigate this factor by solving the same problem as in Section 4.1.1, with the rank used in the low-rank correction part fixed at 15. The iteration counts and CPU times for different values of m are given in Table 4.2. As can be observed, the iteration count decreases from 171 to 78 as m increases from 0 to 5. This can be attributed to the improved clustering of the spectrum of the preconditioned Schur complement as the number of terms used in the power series expansion increases. Meanwhile, the time to construct the PSLR preconditioner increases slightly. Since the iteration count is reduced considerably when m increases from 0 to some moderate value and only slightly after that, we expect the iteration time to decrease first and then increase. This is verified by the numerical results in Table 4.2: the iteration time first goes down from 0.90 to 0.57 as m increases from 0 to 3, and then increases from 0.57 to 0.63 as m increases from 3 to 5. The total time has the same trend as the iteration time. The results in Table 4.2 are plotted in Figure 4.1, where we can see that m = 3 is optimal for this test in terms of CPU time. In general, there is a similar pattern, and m should not be taken too large for the sake of better overall performance.
4.1.3. Effect of $r_k$. In this subsection, we consider the effect of the rank used in the low-rank correction terms on the PSLR preconditioner. Here we consider the same test problem used in the previous two subsections but with different values of $r_k$. We observe from Table 4.3 that the iteration count decreases as $r_k$ increases from 0 to 75. The fill-factor from the ILU decompositions stays at 2.44 since we fix the number of subdomains. On the other hand, the fill-factor from the low-rank terms and the time to compute the low-rank correction terms increase as the rank becomes larger. In the meantime, the iteration time and the total CPU time decrease as $r_k$ increases from 0 to 15 and then increase. This indicates that there is no need to take a very large rank in practice. In addition, Table 4.4 shows the benefit of incorporating low-rank corrections in the PSLR preconditioner when solving highly indefinite linear systems. Note that the PSLR preconditioner reduces to the Neumann polynomial preconditioner when the rank $r_k$ is equal to zero. Results in Table 4.4 show that the low-rank correction technique can greatly improve the performance and robustness of the classical Neumann polynomial preconditioner even when the rank $r_k$ is smaller than the number of eigenvalues of $E_{rr}$ with modulus greater than 1. For example, the GMRES-PSLR combination (full GMRES is used) fails to converge when no low-rank correction is applied. Here, the shift for the $50^3$ grid is set to 0.14, in which case the shifted discretized operator has 78 negative eigenvalues.
4.1.4. Effect of the number of threads.
We now examine the effect of the number of threads on performance, when parallelization is achieved through OpenMP. Table 4.5 shows the total execution time as the number of threads increases from 4 to 24, when solving Problem (4.1) with shift = 0.05 on a $50^3$ grid. The rank here is taken as $r_k = 15$. As one can see from Table 4.5, the total wall clock time decreases as the number of threads increases. For this case, the total fill factor is 2.79 (2.24 for ILU and 0.55 for the low-rank part) and the iteration count is 86 (regardless of the number of threads). As expected, the execution time for GMRES-PSLR is reduced when more threads are used, due to parallelism. So, the number of threads used in our numerical experiments is taken as the number of cores, i.e., 24. Note that the nodes used for the experiment have 12 cores each, but due to hyperthreading up to 24 threads can be executed efficiently in parallel, as is shown by the experiment.
4.2. Test 2. We now test some general 3D Laplacian matrices to show the efficiency of the PSLR preconditioner. We solve (4.1) with β > 0, where the corresponding problems are symmetric indefinite. For these problems, the discretized Laplacian was shifted by $h^2\beta I$ for mesh size h. The numbers of negative eigenvalues are 20, 69, 133 for grids $32^3$, $64^3$ and $128^3$, respectively. Here, we set m = 3, $r_k = 15$ and s = 35 in the PSLR preconditioner. As we see from Table 4.6, the PSLR preconditioner outperforms the ILUT and GMSLR preconditioners for solving the resulting indefinite problems, because both the iteration count and the construction time are smaller.
4.3. Test 3. Consider the shifted convection-diffusion equation
$$-\Delta u + \gamma \cdot \nabla u - \beta u = f \ \text{in} \ \Omega, \qquad (4.2)$$
which is a nonsymmetric problem. This equation is discretized by the standard 7-point stencil in 3D, where $\Omega = [0, 1]^3$ and $\gamma \in \mathbb{R}^3$. We now present more tests to illustrate the efficiency of PSLR when solving shifted convection-diffusion equations. Here, γ is set to (0.1, 0.1, 0.1) and the shift is taken as 0.16, 0.08, 0.03 for grids $32^3$, $64^3$, $128^3$, respectively; we fixed m = 3, $r_k = 15$ and s = 35 in the PSLR preconditioner. As is seen from Table 4.7, the PSLR preconditioner outperforms the GMSLR and ILUT preconditioners. Again, GMRES does not converge with the GMSLR and ILUT preconditioners for the case when the shift is 0.03 and the mesh size is $128^3$.
4.4. Test 4. Finally, we test a set of general sparse matrices; Table 4.8 provides a brief description. Numerical results are presented in Table 4.9. Here, we fixed s = 35, m = 3 and $r_k = 50$ in the PSLR preconditioner for all the experiments. From this table, we can see that the GMRES-PSLR method converges for all the test problems without tuning its parameters. Moreover, the iteration time is much less than that of the MSLR, ICT, GMSLR and ILUT preconditioners. The GMRES accelerator failed to converge within 500 iterations when used in conjunction with the ICT and MSLR preconditioners for the CFD problem cfd1.
5. Conclusion.
We have presented an effective Schur complement-based parallel preconditioner for solving general large sparse linear systems. The method utilizes a standard Schur complement viewpoint and exploits a power series expansion along with a low-rank correction technique to approximate the inverse of the Schur complement. The main difference between PSLR and other Schur complement techniques proposed earlier is that PSLR relies on the power series expansion to reduce the rank needed to obtain a good approximation of the inverse of the Schur complement. The number m of terms used in the power series expansion and the rank used in the low-rank correction part control the approximation accuracy of the preconditioner. In practice, small values of these two parameters are sufficient to yield a reasonably good approximation to $S^{-1}$.
As was illustrated in the experiments, a big advantage of PSLR is its high level of parallelism. Another advantage is its robustness when solving indefinite linear systems. Finally, PSLR is fairly easy to build and apply and is quite general. All that is required at the outset is a problem that is partitioned into subdomains. In our future work, we will develop a general-purpose distributed memory version of our current code. | 2020-02-04T02:00:54.367Z | 2020-02-03T00:00:00.000 | {
"year": 2020,
"sha1": "8e9cb3ee28759e3d0220627a4c62839084009f51",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2002.00917",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8e9cb3ee28759e3d0220627a4c62839084009f51",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
244128377 | pes2o/s2orc | v3-fos-license | Tempoyak from Agam district of West Sumatera, Indonesia as a local probiotic super food candidate
Tempoyak is a traditional food made from durian that has the potential to be a probiotic superfood. Tempoyak is made by fermenting fresh durian under anaerobic conditions. Tempoyak naturally contains probiotic microorganisms called Lactic Acid Bacteria (LAB). LAB are among the most significant probiotic organisms and can help the healing process of various diseases such as diarrhea, constipation, irritable bowel syndrome, and infections. This study aims to determine the total colonies of LAB and the chemical properties, namely protein content, fat content, water content, pH, and TTA (Total Titratable Acidity), of tempoyak from Agam district. The method used is a descriptive method with laboratory analysis. The sample used in this study was fresh durian (Durio zibethinus L.) with three treatments. The first treatment used durian flesh only (TK1), the second treatment used durian flesh with chili (TK2), and the third treatment used durian flesh with salt (TK3). The results showed total LAB colonies of 39×10^7 – 98×10^7, the highest in tempoyak TK3; protein content of 2.84% – 3.90%, the highest in tempoyak TK1; fat content of 3.37% – 3.74%, the highest in tempoyak TK1; water content of 75.53% – 85.38%, the highest in tempoyak TK3; pH of 4.1–4.3, the highest in tempoyak TK1; and Total Titratable Acidity of 0.28% – 0.36%, the highest in tempoyak TK1.
Introduction
Tempoyak is a traditional food from various regions in Indonesia, such as Sumatra and Kalimantan, which is made by fermenting fresh durian under anaerobic conditions. Tempoyak is generally yellowish white and has a distinctive, sharp aroma. Tempoyak fermentation occurs spontaneously and generally takes around 4-7 days [1]. Tempoyak fermentation can be done with or without additional ingredients such as chili or salt. The addition of salt to tempoyak aims to draw water and nutrients out of the tissue of the fermented material, which are then used as a substrate for the growth of the bacteria involved in fermentation [2].
After completion of fermentation, tempoyak can be stored for 2 months to 1 year. This long shelf life results from the acid produced by lactic acid bacteria during the fermentation process, which suppresses the growth of pathogenic microbes [3].
Tempoyak may be an overlooked superfood. One reason tempoyak is considered a superfood is that fermentation makes food nutrients more bioavailable than in the raw fruit, so all the vitamins, minerals and phytonutrients that durian has to offer can become more bioavailable after fermentation. In previous studies, the nutritional content of tempoyak was found to be 89.97% water, 6.49% protein, and 3.04% fat [4]. In addition to increasing bioavailability, tempoyak also contains lactic acid bacteria. This presence of Lactic Acid Bacteria is what makes tempoyak a local probiotic superfood candidate.
Probiotics are living organisms that provide health benefits to their host when consumed in adequate amounts [5]. One group of probiotic organisms that provides significant benefits is the lactic acid bacteria (LAB). LAB can help the healing process of various diseases such as diarrhea, constipation, irritable bowel syndrome, and infections [6]. LAB also have various healing properties, including antioxidant, anti-allergic, and anti-anxiety effects. In addition, LAB can increase the bioavailability of vitamins and minerals [7,16].
While most studies limit LAB research to dairy products such as yogurt and kefir, or limit superfood research to non-fermented products such as moringa plant powder, durian fermentation offers a way to produce a food that is both a superfood and a probiotic.
The purpose of the research
The purpose of this study was to determine the total colonies of Lactic Acid Bacteria and the chemical properties, including the nutritional value, of tempoyak from Agam Regency, West Sumatra, Indonesia, as a local probiotic superfood candidate.
The material of the research
The materials used in making tempoyak were fresh durian, chilies, and salt. The materials used to count the total colonies of Lactic Acid Bacteria were de Mann Rogosa Sharpe (MRS) broth (Merck), MRS Agar (Merck), distilled water, and alcohol, while the materials used to determine the nutritional value of tempoyak were distilled water, methyl red indicator, H2SO4, 30% NaOH, 0.1 N NaOH, spirits, benzene, and phenolphthalein (pp). The equipment used in this study comprised label paper, porcelain plates, analytical scales, Kjeldahl flasks, pH meters, an electric oven, funnels, beaker glasses, Erlenmeyer flasks, a bunsen burner, Eppendorf tubes, micropipette tips, goiter pipettes, a hockey stick spreader, a magnetic stirrer and hot plate, test tubes, distillation flasks, an autoclave, volumetric flasks, a Soxhlet apparatus, grease paper, aluminum foil, an anaerobic jar, a Quebec colony counter, and laminar airflow.
Methods of the research.
The method used in this research is a descriptive method with laboratory analysis.
Total colonies of Lactic Acid Bacteria. Method [8], with the following work steps: All equipment to be used is first sterilized in an autoclave at 121ºC for 15 minutes at a pressure of 15 lb. 68.2 grams of de Mann Rogosa Sharpe (MRS) Agar (Merck) media is mixed with 10 mL of distilled water, homogenized using a magnetic stirrer on a hot plate at 100ºC, and then autoclaved. After the solution has cooled slightly (± 55ºC), it is poured into Petri dishes (± 8 mL each). A 1-gram sample is weighed using a sterile spoon and aluminum foil, dissolved in a test tube containing 9 mL of de Mann Rogosa Sharpe (MRS) broth, and vortexed until homogeneous (10^-1 dilution). 100 µL of this dilution is transferred to the first Eppendorf tube (10^-2 dilution), and so on until a 10^-4 dilution is obtained. 100 µL of the 10^-4 dilution is plated by the spread method on a Petri dish containing MRS agar media and leveled with a hockey stick spreader (carried out in laminar airflow near the bunsen). The inoculum is stored in an anaerobic jar and incubated for 48 hours at 37ºC (each Petri dish marked with its sample). The growing LAB colonies are counted using a Quebec colony counter.
Protein content of tempoyak. Kjeldahl method, with the following working procedures: Destruction stage. A total of 1 gram of dry sample is put into a Kjeldahl flask, and 1 gram of selenium catalyst and 25 ml of H2SO4 are added. The mixture is then digested over low heat in a fume hood and shaken from time to time to keep it homogeneous. Heating is continued until the solution is clear yellow.
Distillation stage. The solution in the Kjeldahl flask is diluted in a 250 ml volumetric flask with distilled water to the mark. A total of 25 ml of the sample solution is put into a distillation flask containing a boiling stone, and 25 ml of 30% NaOH premixed with 75 ml of distilled water is added through a funnel. The solution is then distilled until 2/3 of the liquid has been distilled; the distillate is captured in 25 ml of 0.05 N H2SO4 to which 3 drops of methyl red indicator have been added. The distillation outlet is rinsed into the collection flask.
Titration stage. The distillate is titrated with 0.1 N NaOH using a micro burette until the color changes (X ml). A blank titration is then carried out: 3 drops of methyl red indicator are added to 25 ml of 0.05 N H2SO4, which is titrated with 0.1 N NaOH (Y ml).
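The paper does not state the formula used to convert the titration volumes into protein content. As a hedged illustration, a common Kjeldahl computation for this back-titration setup is
$$\%N = \frac{(Y - X)\times N_{\mathrm{NaOH}}\times 14.007}{W \times 1000}\times d \times 100\%, \qquad \%\,\text{protein} = \%N \times 6.25,$$
where W is the sample weight in grams, d is the dilution factor (here 250/25 = 10), and 6.25 is the conventional nitrogen-to-protein conversion factor; the exact constants used in this study are an assumption on our part.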
Fat content of tempoyak.
Method [9]. A total of 1 gram of the sample is put into a paper sleeve covered with cotton, then dried in an electric oven at 105°C for 12 hours. It is then weighed hot and extracted with benzene for 4-6 hours until the benzene in the Soxhlet becomes clear. The benzene is then allowed to evaporate, and the sample is dried in an electric oven at 105-110°C for 4 hours to obtain a constant weight. Samples are weighed one by one while still hot. The difference in weight before and after extraction is the weight of the fat.
Water content of tempoyak.
Method [9]. Aluminum dishes are oven-dried at 110°C for 1 hour and then cooled in a desiccator. The dishes are weighed and filled with a 5-gram sample, then dried in an oven at 105°C for 8 hours, cooled in a desiccator, and weighed; this is repeated until the weight is constant.
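For clarity, the loss-on-drying result is typically converted to a percentage with the standard formula below; the paper does not state it explicitly, and expressing moisture on a wet basis is our assumption:
$$\text{moisture}(\%) = \frac{w_{\text{wet}} - w_{\text{dry}}}{w_{\text{wet}}}\times 100\%.$$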
The pH of tempoyak.
Tempoyak pH measurements were carried out with a pH meter. First, the pH meter is calibrated using a standard buffer solution of pH 7. The electrodes are then washed with distilled water and dried with tissue paper. pH measurement is carried out by diluting 1 g of tempoyak with 10 ml of distilled water in a container. The electrode is immersed in the solution until the reading is constant, and the number shown by the pH meter is the tempoyak pH.
TTA (Total Titratable Acidity).
Method [9]. A 5 g sample is weighed on an analytical balance and dissolved in 10 ml of distilled water, stirred with a stirring rod until homogeneous. A burette is filled with 0.1 N NaOH, and 2 ml of phenolphthalein (pp) indicator is added to the sample. The sample is titrated with 0.1 N NaOH until a color change (the equivalence point) occurs; the volume used for the titration is recorded and the final calculation is carried out. A description of the samples used can be seen in Table 1.
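The paper does not give its TTA formula. A common convention for LAB-fermented foods, which we assume here only for illustration, expresses TTA as percent lactic acid (equivalent weight 90):
$$\text{TTA}(\%) = \frac{V_{\mathrm{NaOH}}\times N_{\mathrm{NaOH}}\times 90}{W\times 1000}\times 100\%,$$
where $V_{\mathrm{NaOH}}$ is the titration volume in ml and W is the sample weight in grams.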
Total colonies of Lactic Acid Bacteria
The results for the total colonies of lactic acid bacteria from tempoyak, obtained by plating onto MRS agar media and anaerobic incubation for 24 hours at 37°C, can be seen in Table 2. The total colonies from tempoyak without additions (TK1) were 75 × 10^7 CFU/g, from tempoyak with added chili (TK2) 39 × 10^7 CFU/g, and from tempoyak with added salt (TK3) 98 × 10^7 CFU/g.
When compared with a previous study, which reported 6.0 × 10^6 – 3.8 × 10^7 CFU/g [10], the results of this study were much higher. They meet the FAO/WHO (2002) criteria, which require probiotic foods to contain LAB at 10^6 – 10^8 CFU/gram. The varying numbers of LAB colonies among the tempoyak isolates are due to the different nutritional conditions in which the bacteria grow and develop; differences in the growing environment of LAB produce highly varied LAB isolates [11].
Protein content of Tempoyak
The protein content of tempoyak can be seen in Table 3 below. The protein content of TK1 tempoyak was 3.90%, of TK2 tempoyak 3.39%, and of TK3 tempoyak 2.84%. The highest protein content was found in TK1, without added ingredients (3.90%), and the lowest in TK3, with added salt (2.84%). The protein content of tempoyak TK1 was lower than in a previous study, which reported 6.49% [4], while the lowest protein content was obtained for tempoyak TK3 with the addition of salt, at 1.42%. In general, salt reduces the protein content of food [12]. This happens because salt is a strong electrolyte that can dissolve protein; salt is able to break the bonds of water molecules and can change the properties of the protein.
Fat content of Tempoyak
The fat content of tempoyak can be seen in Table 4. The fat content was highest in TK1 at 3.74%, followed by TK3 at 3.41%, and finally TK2 at 3.37%. The fat content in this study was higher than in previous studies, where the fat content of tempoyak ranged from 1.03-3.04% [4]. Fat content can be influenced by LAB activity, which reduces fat: during the fermentation process, lipase enzymes naturally present in the food or produced by microbes growing on it degrade fat, breaking it down into volatile and non-volatile fatty acids that form the aroma and taste of tempoyak [4]. The water content of tempoyak (Table 5) ranged from 75.53% to 85.38%, with the highest value in TK3; the difference in water content between one sample and another is due to differences in treatment between the samples.
pH of Tempoyak
The pH of tempoyak can be seen in Table 6 below. The tempoyak pH range in this study was 4.1-4.3.
Tempoyak TK1 had the highest pH of 4.3, followed by TK2 at 4.2, and finally TK3 at 4.1; the pH values produced in this study are acidic. The duration of fermentation also affects the pH value of tempoyak. This is consistent with previous research, which obtained an acidic pH range of 3.8-4.1 [4].
"year": 2021,
"sha1": "1ade035968cf1e3dc077d09608fc4e8d1a69885e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/888/1/012048",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1ade035968cf1e3dc077d09608fc4e8d1a69885e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics"
]
} |
271033152 | pes2o/s2orc | v3-fos-license | Exploration of Innovation Strategy in the Deep Integration of Information Technology and Education and Teaching
The rapid development of information technology provides a new opportunity and impetus for the reform of education and teaching. The deep integration of information technology and education and teaching is the only way to promote the modernization of education. Starting from the connotation of the deep integration of information technology and education and teaching, this paper analyzes the problems in the current integration process and puts forward innovative strategies in terms of concepts, resources, models, and evaluation. Through measures such as building a smart teaching environment, enriching high-quality teaching resources, innovating teaching organization models, and establishing a multiple evaluation system, the deep integration of information technology and teaching can be realized, promoting the reform of teaching and learning methods, improving the quality of personnel training, and providing strong support for the modernization of education.
Introduction
Information technology is developing at an unprecedented speed, triggering profound changes in all fields of society. As a future-oriented undertaking, education must actively adapt to the development trend of information technology and promote the reform of education and teaching with information technology. The report of the Party's 20th National Congress pointed out that it is necessary to "speed up the modernization of education and do a good job of education that the people are satisfied with," which pointed out the direction for education reform and development. The 10-Year Development Plan for Education Informatization (2011-2020) of the Ministry of Education clearly states that the deep integration of information technology and education and teaching is the key support for promoting education reform and improving education quality. As the main venue for training high-level talents, colleges and universities should actively explore innovative ways of deeply integrating information technology with education and teaching, and train innovative talents who meet the needs of the development of the information society.
Connotation of deep integration of information technology and education and teaching
The deep integration of information technology and education and teaching refers to the process of making full use of modern information technology to optimize the teaching process, innovate the teaching mode, and improve teaching efficiency and quality across all aspects of the education concept, teaching content, teaching methods, and teaching evaluation. This integration is not simply the application of technology, but a process of profoundly reshaping teaching and learning through technological means [1].
Integration of ideas
Educators should establish the concept of lifelong learning and innovative development, master and use information technology, and actively adapt to the needs of informatized development. It is necessary to break through the traditional "teacher-centered" concept, build a new "student-centered" teaching ecology, stimulate students' learning interest and initiative, and cultivate students' independent learning ability and innovation ability.
Integration of resources
Information technology provides a variety of possibilities for presenting teaching content, and teaching resources extend from a single textbook to a broad network space. The application of new technologies such as big data, virtual reality, and artificial intelligence has made teaching resources more diverse and vivid. The rise of online courses such as MOOCs (massive open online courses) and micro-courses has broken the time and space restrictions of campuses, and students can share high-quality educational resources. Teachers should actively develop and utilize information-based teaching resources, create authentic situations, and stimulate learning interest.
Integration of processes
The application of information technology changes the role of classroom teachers from "knowledge imparters" to "learning guides," and students from "passive receivers" to "active inquirers." New teaching modes represented by project-based learning and problem-based learning emphasize the principal position of students and the cultivation of ability. New technologies such as the mobile Internet and virtual reality have expanded learning time and space, and new teaching organization methods such as blended online-offline teaching and ubiquitous learning have emerged, reshaping the process of teaching and learning [2].
Integration of evaluation
Information technology supports the reform of teaching evaluation and makes it possible to combine process evaluation with result evaluation. With the help of big data analysis, intelligent assessment, and other technologies, students' learning processes and effects can be recorded to achieve personalized evaluation and precise feedback [3]. The application of micro-teaching, electronic portfolios, and other tools has expanded the scope of teachers' teaching evaluation and students' learning. The wide application of formative assessment contributes to the continuous improvement of teaching and learning quality.
Insufficient depth of integration
At present, the application of information technology in many colleges and universities remains at a shallow level of integration, such as courseware making and multimedia teaching; the combination of technology and teaching is superficial and has failed to deeply affect core elements such as teaching content, teaching organization, and teaching evaluation. Teachers and students use information technology mainly for teaching assistance and classroom presentation, while its integration into teaching design, teaching management, and teaching services is insufficient, failing to optimize the teaching process as a whole and promote teaching reform.
Deficient supply of high-quality resources
Compared with the requirements of the rapid development of information technology, the construction of digital teaching resources in colleges and universities still lags behind. In some schools, the informatization of teaching resources is deficient, the presentation of resources is monotonous, and interactivity is poor. The total amount of high-quality online courses and virtual simulation experiments is limited, and the degree of sharing is low, which makes it difficult to meet students' needs for personalized learning [4]. The pertinence and practicality of resources need to be strengthened, as it is difficult for them to support teaching innovation effectively.
Inadequate teachers' information literacy
Teachers are the key force in the deep integration of information technology with education and teaching, but many teachers still need to strengthen their understanding and application of information technology. Influenced by traditional teaching concepts, some teachers are cautious about, or even resistant to, information technology and lack the awareness to use it actively. Although some teachers have certain information-based teaching skills, they lack experience in the deep integration of technology and teaching and cannot flexibly use technology to optimize teaching.
Imperfect system guarantee of integration
Universities lack top-level design and systematic planning in promoting the integration of information technology and education and teaching, and the construction of relevant systems is lagging. In teaching management, performance appraisal, teacher development, and other areas, there is a lack of incentive and restraint mechanisms to promote integration and encourage innovation. The lack of clear policy guidance and norms for new teaching modes such as online teaching and blended teaching has, to a certain extent, affected the enthusiasm of teachers and students to participate [5].
Updating the educational concept and creating an atmosphere of integration and innovation
The deep integration of information technology and education and teaching should start with updating ideas. Colleges and universities should firmly establish the "student-centered" education concept, respect students' individual differences and development needs, and create an open and interactive smart teaching environment. Teachers should be encouraged to establish the concept of lifelong learning, actively learn information technology, improve their ability to apply it, and integrate information technology innovation into the whole teaching process. It is necessary to strengthen publicity and guidance, popularize the concept of information-based teaching, and improve the initiative and creativity of teachers and students in using information technology to promote teaching reform [6]. A sound teaching incentive mechanism should be established, with policy, funding, and other support given to teaching innovation to mobilize teachers' enthusiasm for the in-depth application of information technology. Additionally, a platform for teachers' teaching discussion and experience sharing should be built to promote the dissemination and application of excellent cases.
Enriching high-quality resources and building an intelligent learning environment
We should give full play to the advantages of information technology and strengthen the construction of high-quality digital teaching resources. Teachers should be encouraged to participate actively in the research and development of online open courses, virtual simulation experiments, and other resources to improve their quality and richness. A mechanism for the co-construction and sharing of resources should be established to promote the exchange and application of high-quality resources, and the contextualized and adaptive design of resources should be strengthened to improve their pertinence and practicality. New technologies such as big data and artificial intelligence can be used to achieve intelligent resource delivery and personalized services [7]. New teaching spaces such as smart classrooms and innovation labs should be built to create immersive and interactive learning experiences. A ubiquitous learning support environment can provide teachers and students with convenient access to resources and services anytime, anywhere, and on demand.
Innovating the teaching mode and improving the quality of learning experience
Using information technology to promote the innovation of the teaching mode is a key measure of deep integration. It is necessary to optimize teaching design with students at the center, comprehensively use online learning, mobile learning, virtual reality, and other technical means, and build a hybrid teaching model that combines online and offline instruction. Flipped classrooms, project-based learning, and other new forms of teaching organization should be designed carefully to fully mobilize students' learning initiative and participation and to provide individualized learning support and guidance for different students [8]. A sound online learning management and support service system should be established to track learning situations in time and improve teaching. Innovative forms of teaching interaction, including brainstorming, group discussion, role-playing, and other diverse activities, strengthen the communication between teachers and students and enhance the enjoyment and effectiveness of the learning experience.
Establishing multiple evaluations and optimizing the quality of talent training
It is necessary to deepen the integration of information technology with education and teaching evaluation and establish a diversified teaching quality evaluation and monitoring system. Big data, learning analytics, and other technologies can be used to dynamically collect data on students' learning processes, identify the characteristics and patterns of learning behavior, and implement precise teaching interventions and quality diagnosis. We should improve the multiple evaluation system, which combines process evaluation with result evaluation, and guide students to focus on the improvement of their skills. Students are encouraged to record their personal development through learning portfolios, development records, and other means to achieve self-management and self-evaluation [9]. Teaching evaluation feedback and improvement mechanisms should be established, and teaching reform measures should be continuously optimized in response to the problems identified. The effectiveness of teachers' use of information technology to promote teaching innovation should be included in the performance appraisal system, and teachers should be guided to concentrate on teaching and educating people [10].
Strengthening training and empowerment to improve teachers' and students' information literacy
We should formulate a systematic information literacy training and improvement plan for teachers and students and carry out demand-oriented, classified, and hierarchical training. It is imperative to focus on teachers' actual teaching needs and improve their technology application and teaching design abilities; to develop training courses deeply integrated with subject teaching to enhance the effectiveness of training; and to encourage teachers to strengthen cooperation and exchange, pool superior resources, and jointly improve their ability to use information technology to innovate teaching [11]. We also need to strengthen the education of teachers' ethics and conduct, guide teachers to enhance their awareness of education, and improve their teaching ability. Elective courses for information literacy promotion should be set up for students and included in the talent training program to improve students' ability to use information technology. The ability to apply information technology should be taken as an important indicator in teacher admission, assessment, and promotion to stimulate teachers' enthusiasm for learning [12].
Improving the system construction and strengthening the long-term mechanism of integration
In order to realize the deep integration of information technology and education and teaching, it is necessary to carry out top-level design at the macro level and establish a clear medium- and long-term plan and roadmap. This requires the joint participation of the government, education departments, and academic institutions to clarify goals and tasks, highlight key areas, and formulate specific promotion measures [13]. For example, special funds can be set up to support innovative projects, encourage school-enterprise cooperation, and promote the research, development, and application of educational technology. At the same time, it is essential to establish and improve relevant supporting systems. This involves adjusting personnel training policies, such as attracting and training more versatile talents who understand technology and can teach; optimizing the funding investment mechanism to ensure that there are sufficient funds to support technology updates and teaching reform; and innovating the teaching management and performance appraisal systems to encourage teachers to actively participate in information-based teaching. For online teaching and smart classroom construction, clear management methods and normative standards should be formulated to ensure that these new teaching modes are carried out in an orderly and standardized environment [14]. This not only helps to improve the quality of teaching but also helps to protect students' learning rights and interests. In addition, improving the teacher incentive mechanism is key. Integrating information-based teaching performance into the evaluation system for teacher title evaluation and job promotion can effectively mobilize the enthusiasm of teachers, encouraging them to constantly learn and master new technologies and improve their teaching methods [15]. Finally, strengthening teaching process supervision and quality assessment is an important measure to ensure the effectiveness of integration. Establishing a regular supervision, inspection, and evaluation feedback mechanism can not only detect and solve problems in time but also continuously deepen educational integration, ensure the effectiveness of educational informatization, and see that its results are widely applied and promoted [16]. Through these measures, we can gradually build a long-term mechanism to promote the deep integration of information technology and education and teaching and provide a solid institutional guarantee for the modernization of education.
Strengthening collaborative innovation and promoting the integrated development of industry and education
Faced with the rapid iteration of information technology and the emergence of various new technologies and products, colleges and universities should actively promote collaborative innovation and the integrated development of industry and education, and drive the deep integration of information technology and professional education through this integration [17]. First of all, colleges and universities should consciously strengthen their cooperation with information technology enterprises and establish a close cooperation mechanism spanning production, study, and research. Enterprise experts can be invited to participate in teaching design and course construction so that students can get in touch with cutting-edge technologies and understand industry development trends [18]. Secondly, colleges and universities should encourage professional teachers to take temporary positions in enterprises to improve their practical teaching skills, and should establish and improve a management system for teachers' enterprise practice and incorporate it into the teacher training and assessment system. In addition, colleges and universities should provide students with good practical training opportunities, support students' participation in real enterprise project development through training bases, enterprise internships, and other means, and strengthen innovation and entrepreneurship abilities in practice. Finally, we need to actively build platforms connecting industry and education to promote the two-way flow of talents, projects, and resources [19]. Focusing on the needs of regional economic and social development, universities should actively carry out collaborative research and accelerate the transformation and application of scientific and technological achievements. The integration of industry and education can promote positive interaction between college education and teaching, technology, and industrial development, and enhance the ability to serve economic and social development [20].
Concluding remarks
To sum up, the deep integration of information technology and education and teaching is a complex and long-term systematic project, which needs to be coordinated and systematically promoted across multiple dimensions such as concept, resources, model, evaluation, training, and system. Colleges and universities should base themselves on their own situations, formulate plans scientifically, innovate systems and mechanisms, and actively explore distinctive and differentiated development paths while consolidating foundations and improving conditions. It is necessary to strengthen school-enterprise cooperation and inter-school collaboration, gather advantageous resources, and form a strong force to promote integrated development. We must adhere to problem-oriented and demand-oriented principles, focus on the requirements of talent training, and continue to deepen education and teaching reform. With scientific policies and systematic promotion, the integration of information technology and education and teaching can reach a higher level and make greater contributions to improving the quality of personnel training and serving economic and social development.
"year": 2024,
"sha1": "11c2d24703799fcc11865d6e8a005dc5cf073671",
"oa_license": null,
"oa_url": "https://doi.org/10.26689/jcer.v8i6.7107",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bb45fc85d8e17445455ee4773812d6c483099a2c",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": []
} |
240102353 | pes2o/s2orc | v3-fos-license | Bidet Toilet Use May Cause Anal Symptoms and Nosocomial Infection
Electric bidet toilets are widely used in Japan and are sanitary devices that are integral to daily life. Approximately half of the population washes the anus before or after defecation. Cleaning the anus after defecation using a bidet contributes to hand hygiene and local comfort, and it may be effective against constipation. However, excessive bidet use potentially causes anal pruritus and anal incontinence (AI). Physicians are advised to instruct patients with anal pruritus to avoid excessive cleaning of the anus and those with AI to discontinue bidet use. To estimate the inherent severity of AI, physicians should instruct a bidet user with AI to discontinue bidet use and assess the severity of AI later. Additionally, the nozzle surface and splay water of bidet toilets may be contaminated with fecal indicator bacteria, such as Escherichia coli and Pseudomonas aeruginosa, as well as antimicrobial-resistant bacteria, rendering them a potential vehicle for cross-infection. In the hospital setting, compromised patients must be cautious regarding the shared use of bidet toilets to prevent infection by antimicrobial-resistant bacteria. Specifically, they should be provided with bidet toilets reserved exclusively for them or may need to be instructed not to use a bidet.
Introduction
Electric bidet toilets are automatic devices that deliver a jet of water to clean the anus after defecation. Ever since their introduction to the Japanese market in 1967, Japanese-manufactured electric bidet toilets have steadily gained popularity. According to the Cabinet Office's Consumer Trend Survey [1], the diffusion rate of bidet toilets in households was 80.2% in March 2020, and the number of these units owned per 100 households was 114.5, which is more than one per household (Figure 1) [2]. Bidets have been developed to incorporate different functions to improve user comfort. Presently, users can select their preferred force, thickness (narrow or wide), and temperature of the water jet. Bidet toilets are installed not only in general households but also in public facilities such as commercial buildings, hotels, airports, and hospitals, and have emerged as a sanitary device that is integral to daily life in Japan.
Notwithstanding these advantages, anal symptoms associated with the inappropriate use of bidet toilets and nosocomial infections caused by using these units have been reported. Thus, physicians are advised to reemphasize the appropriate use of bidet toilets. To the best of our knowledge, no comprehensive report focusing on issues related to bidet toilet use has been published. In this review, we describe the advantages, current status, and issues regarding the use of bidet toilets.
Advantages of Using Bidet Toilets
One advantage of bidet use is that it contributes to hand hygiene. One study [3] conducted a simulation experiment to examine hand contamination from wiping the buttocks after the use and non-use of an electric bidet toilet with splay water. A model of the buttocks was smeared with an artificial liquid stool containing Serratia marcescens and wiped by the participants with toilet paper after the use or non-use of the splay water. The number of bacteria adhering to the hand was significantly lower when the splay water was used before wiping the artificial liquid stool. This finding corroborates the effectiveness of splay water in preventing defecation-related hand contamination.
A positive effect on toileting has also been highlighted. When women aged 75 years or older were asked to use bidet toilets in a nursing home, approximately 50% of them reported a sense of comfort and cleanliness after defecation [4]. The usefulness of bidet use against constipation or defecation difficulties has also been reported. Uchikawa et al. [5] discovered that the use of bidets induced defecation in 15/20 (75%) patients with spinal cord injury. In a study by Shigematsu et al. [6] of 18 patients who had undergone hysterectomy, 15 (83%) of the patients reported smooth bowel movement during the postoperative period when using the bidet toilets. In a study of pregnant women in Turkey, the group that washed their anus before defecation reported a significant improvement in constipation scores compared with the control group, despite not using electric bidet toilets [7].
Regarding the anorectal physiological benefits, Ryoo et al. [8] found that bidet use at low or medium water jet pressure, at a warm temperature, and with a wide-type water jet potentially reduces anal pressure, with an effect resembling that of a warm sitz bath. The authors suggested that the effect of relaxing anal sphincter pressure may be beneficial to patients with elevated anal pressures due to anorectal diseases, such as an anal fissure or hemorrhoids, and during postoperative periods after surgery for anal diseases. Watanabe et al. [9] reported that bidet use at a warm temperature for 10 min increased blood flow in the submucosa of the anus, which potentially contributes to postoperative wound healing.
Current Use of Bidet Toilets
Previously, we conducted a survey of electric bidet use among Japanese community-dwelling residents and found that 55% (2,724/4,952) of the respondents washed the anus either before or after defecation [10]. Additionally, at least 30% (828/2,724) of bidet users washed before defecation; 70% of these respondents reported "Because it aids defecation by stimulating the anus with a jet of water," and 20% reported "Because it aids defecation like an enema when water penetrates the rectum." In a survey of bidet use in 575 outpatients conducted by Yano et al. [11], 349 (61%) washed the anus at every defecation and 75 (13%) did so occasionally. Among the 424 bidet users, 392 (93%) reported that this was for anal cleanliness and 111 (26%) reported that this eased defecation. In a survey of college students, 34% (47/139) of the female students and 44% (43/98) of their male counterparts reported using the washing function of bidet toilets [12]. Overall, approximately half of the population washed the anus before or after defecation.
Anal pruritus
Excessive bidet use may cause itching of the anus [13]. Kurokawa et al. [14] reported that perianal dermatitis found in 932/3,541 (26%) patients was due to excessive bidet use. In our surveillance, the proportion of respondents who complained of anal pruritus was 14% (345/2,449), and multivariate analysis of the risk factors showed that the correlates for anal itching included the active use of bidet toilets, such as washing before defecation, and using relatively warm water for washing the anus [10]. The pathophysiology underlying anal pruritus is the shedding of sebum around the anus due to excessive bidet toilet use, leading to skin dryness [13].
Anal incontinence
In our surveillance, 6% (156/2,534) of the respondents experienced fecal incontinence at least once a month after using bidets [10]. In another study that investigated the relationship between AI (defined as incontinence to gas, mucus, or feces) and bidet use, 49 patients with AI who had habitually used bidets were asked to discontinue bidet use for a median of 4 weeks. Consequently, both the AI score (Figure 2) and the frequency of fecal incontinence were significantly lower at follow-up than at baseline [15]. Although the causes of AI are multifactorial, it is possible that when patients wash the anus using a bidet, water may penetrate the rectum, especially in those with a lax anal sphincter. Enemas induce the defecation reflex and increase bowel peristalsis with water streaming into the rectum, and splay water penetrating the rectum may similarly result in post-defecation AI symptoms.
Anal fissure
A previous study reported 10 cases of anterior anal fissures due to bidet toilet use for 1-5 min [16]. The author speculated that strong water pressure combined with a long washing duration may be the causative factor of the anterior fissures.
Bidet Toilet Use and Bacterial Contamination
Outbreaks of resistant bacteria due to contamination of the cleaning nozzles of bidet toilets in hospitals have been reported [17][18][19]. In each case, the outbreaks occurred among patients in hematology departments who used the bidets, and it is speculated that antimicrobial-resistant bacteria attached to the nozzles spread to other patients through the splay water. In the reported outbreaks of drug-resistant bacteria in hematology units, infection with metallo-β-lactamase (MBL)-producing Enterobacter cloacae was found in 16 patients (5 males and 11 females) [17] and infection with MBL-producing Pseudomonas aeruginosa was found in 24 patients (18 males and 6 females) [18].
Iyo et al. [20] investigated the actual status of bacterial contamination of water storage-type bidet toilets installed on university campuses and found that Escherichia coli and Pseudomonas aeruginosa were detected at frequencies of 2.4% (3/127) and 1.6% (2/127), respectively, in the splay water. It is inferred that fecal indicator bacteria attached to the nozzle surface and around the water discharge hole were mixed with the splay water and detected. The heterotrophic bacteria in the splay water proliferated significantly more than those in the tap water. This might have been because the heterotrophic bacteria in the tap water proliferated in the water storage tank and nozzle piping as the residual chlorine concentration decreased because of heating of the water storage tank [20,21]. We have also reported similar findings regarding the remarkable growth of heterotrophic bacteria in the splay water of bidet toilets [22].
Regarding the detection of antimicrobial-resistant bacteria recovered from water storage-type bidet toilets, a survey of 292 toilets installed at a university hospital in Japan detected methicillin-resistant Staphylococcus aureus and extended-spectrum β-lactamase (ESBL)-producing E. coli contamination on the nozzle surface [23]. We also conducted a similar study on 192 water storage-type bidet toilets installed in a district hospital and found that E. coli was detected on five (2.6%) of the nozzle surfaces and in the splay water of four (2.1%) of the bidet toilets, and ESBL-producing E. coli was recovered in one sample each (Table 1) [22].
Guidance
Excessive anal washing with bidets is considered the primary cause of the anal symptoms described above. In the present writer's clinic, anal cleaning is specifically restricted to less than 5 s with weak water pressure and a wide water jet, because a strong water pressure or a thin water jet may feel harder and more stimulating at the anus. Patients with AI are instructed to discontinue washing to prevent the splay water from penetrating the rectum. Conversely, when researchers estimate the severity of AI using the AI score, they should check whether a patient with AI is a bidet user or not. If the patient uses a bidet, instead of treating AI immediately, the patient should be instructed to discontinue bidet use for a certain period of time and the severity of AI should be assessed again later. To prevent infection by antimicrobial-resistant bacteria, not only patients with hematological malignancies but also compromised hosts, such as patients with severe inflammatory bowel disease, terminal cancer, and those receiving hemodialysis, must be cautious in the shared use of bidet toilets. Specifically, they should be provided with bidet toilets reserved exclusively for them or may need to be instructed not to use a bidet. Additionally, the appropriate cleaning method for bidet toilets (including the nozzle) and the service life of these units must be considered. Recently, an on-demand type of bidet toilet with a nozzle-cleaning mechanism that uses electrolyzed hypochlorite water has been devised to replace the water storage-type bidet toilets; however, in a nozzle-contamination test, E. coli in the splay water was sterilized, whereas P. aeruginosa was not completely sterilized [24].
Conclusion
The current status and issues regarding the use of bidet toilets were reviewed. The bidet toilet is the most widely used toilet in Japan, and approximately half the population washes the anus before or after defecation. Excessive bidet use should be considered an etiologic factor in patients undergoing a medical examination for anal pruritus or AI. In the hospital setting, compromised patients should be cautious regarding the shared use of bidet toilets to prevent infection by antimicrobial-resistant bacteria. Further studies are required to confirm the issues surrounding bidet toilet use.
Coronavirus disease 2019 has been prevalent since 2020, and the novel severe acute respiratory syndrome coronavirus 2 has reportedly been detected in the stool or urine of patients affected by this disease [25]. Consequently, the cleaning of bidet toilets (including the nozzles) that are installed in hospital rooms dedicated to this disease must be prioritized.
Conflicts of Interest
There are no conflicts of interest.
Author Contributions
Akira Tsunoda: the design of the research, acquisition of data, analysis and interpretation of data, drafting of the article, and final approval of the version to be published.
Approval by Institutional Review Board (IRB)
No approval from any IRB was required because this review article was based on previously published papers.
"year": 2021,
"sha1": "c0365233348126e635002e84554b0bc746624003",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jarc/5/4/5_2021-027/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e46c7e921a4bbe366177f26d51956645ec81d18",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
136153494 | pes2o/s2orc | v3-fos-license | Validation of X-ray radiography for characterization of gas bubbles in liquid metals
X-ray radiography has proved to be an efficient and powerful tool for the visualization of two-phase flows in non-transparent fluids, in particular in liquid metals. This paper presents a validation of X-ray radiography by comparing measurements in water with corresponding results obtained by optical methods. For that purpose, Ar bubbles were injected through a single orifice. The measurement results are compared in terms of bubble size, bubble shape, and velocity. Furthermore, visualization experiments were performed in the eutectic alloy GaInSn, where the image contrast between the liquid phase and the gas bubble is much stronger. Some obvious differences in the bubble dynamics in water and GaInSn are discussed.
Introduction
Liquid metal two-phase flows are an important part of many technical applications in metallurgy and continuous casting. Argon gas is injected during continuous casting in order to prevent clogging of the submerged entry nozzle and to separate undesired inclusions from the melt. Gas stirring is used in ladle metallurgy to homogenize the melt and to improve the cleanliness of the steel. On the other hand, the injection of gas has many side effects, such as the generation of highly turbulent, complex two-phase flows and the creation of additional defects in the final cast products. Effective control and optimization of this process requires a comprehensive understanding of the behavior of liquid metal two-phase flows. Many numerical and experimental studies of gas bubbles rising in water and transparent viscous liquids exist so far, covering a broad range of Reynolds and Eötvös numbers (see for instance [1][2][3][4][5][6][7][8][9]). Many studies follow the approach of extrapolating the bubble behavior in liquid metals from experiments in water. However, strong differences in material properties such as density, viscosity, and surface tension lead to discrepancies in essential non-dimensional parameters such as the bubble Reynolds number, the Weber number, or the Morton number. Therefore, direct experimental investigations of liquid metal two-phase flows become important and desirable. However, the value of experiments carried out in real liquid metal flows depends on the availability of trustworthy and efficient measurement techniques. Previous measurements in liquid metal bubbly flows were performed by means of ultrasound Doppler velocimetry (UDV) [10], by conductivity probes [11], by local Lorentz force velocimetry (LLFV) [12], or by neutron radiography [13,14]. Most visualization studies of liquid metal two-phase flows have relied on X-ray radiography [15][16][17][18][19][20]. X-ray radiography is a fully contactless method based on the absorption contrast between the liquid and gas phase. The weak point of this technique is the limitation of the sample thickness due to the high attenuation coefficients in liquid metals. Previous publications did not discuss in detail the accuracy of X-ray radiography for determining parameters like bubble size, bubble shape, and bubble velocity. The aim of this work is to demonstrate that the analysis of X-ray radiography images provides accurate results for bubble size and shape. For this purpose, we compare the results of the X-ray radiography with optical measurements performed in water. Further experiments were carried out in the eutectic GaInSn alloy.
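To give a feel for the scale of these discrepancies, the short sketch below evaluates the bubble Reynolds, Weber, Eötvös, and Morton numbers for water and GaInSn. It is only a rough comparison: the fluid properties are approximate literature values rather than values from this paper, and the bubble diameter and rise velocity are merely representative of the experiments described later.

```python
# Rough comparison of bubble dimensionless numbers in water vs. GaInSn.
# Fluid properties are approximate literature values (an assumption, not
# data from this paper); d and U are representative of the experiments.
import math

g = 9.81        # gravitational acceleration, m/s^2
d = 3.5e-3      # bubble diameter, m
U = 0.4         # bubble rise velocity, m/s

fluids = {
    # name: (density kg/m^3, dynamic viscosity Pa*s, surface tension N/m)
    "water":  (998.0, 1.0e-3, 0.072),
    "GaInSn": (6440.0, 2.4e-3, 0.53),
}

for name, (rho, mu, sigma) in fluids.items():
    Re = rho * U * d / mu                  # bubble Reynolds number
    We = rho * U ** 2 * d / sigma          # Weber number
    Eo = rho * g * d ** 2 / sigma          # Eotvos number (gas density neglected)
    Mo = g * mu ** 4 / (rho * sigma ** 3)  # Morton number (gas density neglected)
    print(f"{name}: Re = {Re:.0f}, We = {We:.2f}, Eo = {Eo:.2f}, Mo = {Mo:.1e}")
```

The Morton numbers differ by roughly two orders of magnitude, which illustrates why extrapolation from water experiments to liquid metals is problematic.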
Experimental setup
Bubble visualization experiments were carried out at the X-ray laboratory at HZDR. The scheme of the setup is shown in figure 1. A high-power X-ray source (ISOVOLT 450M1/25-55 from GE Sensing & Inspection Technologies GmbH), operating with a maximum voltage of 320 kV and a current of 14 mA, generates a divergent polychromatic X-ray beam. A scintillation screen (SecureX HB from Applied Scintillation Technologies) is attached to the surface of the container as shown in figure 1. The non-absorbed part of the X-ray beam impinges on this scintillation screen, where its intensity is converted into visible light. The further imaging is accomplished with a lens system (Thalheim-Spezial-Optik) and a high-speed video camera (Pco.edge from PCO) equipped with an sCMOS sensor. The images were captured at 100 frames per second (fps) with an exposure time of 3 milliseconds. The exposure time was optimized to achieve a good signal-to-noise ratio without causing bubble blurring due to the high rising velocities. The container is made of acrylic glass because the walls do not cause significant attenuation of the X-ray beam intensity. The container is a rectangular tank with a 12 mm gap and 144 mm width, which was filled with water or with the liquid GaInSn alloy up to a height of 144 mm. The eutectic GaInSn alloy is liquid at room temperature. Thermophysical properties of the alloy are reported in [21]. In our experiments the field of view was approximately 60 × 110 mm². The inert Ar gas was injected through a long-bevel stainless steel orifice of 1.1 mm outer diameter (from Sterican®) positioned in the middle of the bottom part of the container. The Ar gas flow rate was 50 cm³/min. A diffused light source was used for the optical measurements to minimize light reflection at the bubble interface.
Image data processing
The quantitative analysis of the bubble dimensions, positions, and velocities is performed by off-line data processing using Matlab scripts. The procedure of image processing is illustrated in figure 2, where an exemplary raw image is shown in figure 2a. Prior to the image analysis, a shading correction is done by subtracting a mean reference image measured at zero gas flow rate (figure 2b). As a next step, a Gaussian filter is applied to the images to reduce the noise (figure 2c). Further, a thresholding algorithm is applied to separate the individual bubbles and bubble clusters from the background. As a result, the images are converted to binary images where all pixels that belong to the bubbles are marked as 1, while the pixels marked as 0 correspond to the background (figure 2d). The parameters of the Gaussian filter applied to the optical measurements were chosen in such a way that the dimensions of the bright reflecting regions were reduced but no blurring of the bubble borders was caused (i.e., the Gaussian filter was applied using small values of the variance σ²). An appropriate threshold value was chosen which guarantees that all dark pixels in figure 2c were counted as part of the bubbles. In turn, the parameters for the Gaussian filter and for the thresholding for GaInSn were obtained from a calibration measurement of two glass balls with 5 and 10 mm diameters surrounded by GaInSn at zero gas flow rate. As a next step, the binary images were analyzed using the function 'regionprops' integrated into Matlab, which allows parameters like perimeter, area, centers of mass, etc. to be extracted directly. Figure 2e presents the raw image with the determined bubble perimeters and their centers of mass, and figure 2f shows the raw image with fitted ellipses and their main axes. The obtained parameters were converted from pixel to metric values using the image scaling. The equivalent bubble diameters are then calculated from the bubble projection area assuming a spherical bubble shape. Additionally, a 'Simple tracker' algorithm was implemented in the Matlab script. It allows the bubbles to be tracked and, hence, the bubble velocities along their trajectories to be calculated. All bubbles which move in close vicinity to each other or even overlap were excluded from further evaluation. The parameters derived from the X-ray radiography were compared to the parameters obtained from the optical measurements. Self-evidently, both measurements were performed in water applying the same process conditions. Figure 3 displays snapshots of bubble chains rising in water. The optically captured image is shown in figure 3a, while the X-ray image can be seen in figure 3d. The gas bubbles can be clearly identified in figure 3d even though the X-ray image demonstrates a rather weak contrast between the bubbles and the background. The subsequent bubble analysis is performed according to the algorithm described in the previous section. The parameters for the Gaussian filter applied to the X-ray measurements were taken over from the optical measurements. The only difference was the threshold value, which was chosen so that all the bright pixels in figure 3d were assigned to the bubbles. The corresponding Sauter mean bubble diameters were calculated from the bubble projections according to the formula d = 2√(S/π), where S is the area covered by the bubble in the image.
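For readers who wish to reproduce this chain, the following is a minimal Python/scikit-image sketch of the steps just described (shading correction, Gaussian smoothing, thresholding, region analysis, and the equivalent diameter d = 2√(S/π)). The fixed threshold and the pixel scaling are placeholders, the bright-bubble convention corresponds to the X-ray images, and the tracking step is omitted; the authors' original implementation used Matlab's regionprops.

```python
# Minimal sketch of the described processing chain, re-implemented with
# NumPy/scikit-image instead of the authors' Matlab scripts. Threshold and
# pixel scaling are placeholders; the 'Simple tracker' step is omitted.
import numpy as np
from skimage.filters import gaussian
from skimage.measure import label, regionprops

def analyze_frame(frame, reference, threshold, mm_per_px):
    """Return size and shape parameters of every bubble in one frame."""
    corrected = frame.astype(float) - reference   # shading correction
    smoothed = gaussian(corrected, sigma=1.0)     # small sigma: denoise without blurring borders
    binary = smoothed > threshold                 # bubbles -> 1, background -> 0 (bright bubbles)
    bubbles = []
    for region in regionprops(label(binary)):
        if region.area < 10:                      # discard noise specks
            continue
        area_mm2 = region.area * mm_per_px ** 2   # projection area S in mm^2
        bubbles.append({
            "d_eq_mm": 2.0 * np.sqrt(area_mm2 / np.pi),                  # d = 2*sqrt(S/pi)
            "center_mm": tuple(c * mm_per_px for c in region.centroid),
            "deformation": region.major_axis_length / region.minor_axis_length,
            "tilt_deg": np.degrees(region.orientation),                  # main-axis inclination
        })
    return bubbles
```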
The results for bubble diameter, bubble deformation, tilt angle, and velocity are shown in figure 4, where the blue and red data points correspond to the optical measurements and the X-ray radiography, respectively. The average bubble diameter derived from the optical measurements is ~3.4 ± 0.3 mm, while a value of 3.25 ± 0.4 mm was found for the X-ray measurements. These results are in very good agreement and correspond very well to the value calculated directly from the bubble detachment frequency, taking into account the gas flow rate (50 cm³/min) and a number of 37 bubbles being ejected from the orifice per second (d ≈ 3.5 mm). Figure 4a displays the evolution of the bubble size along the height. Remarkable differences can be observed near the nozzle and at a height of approximately 37.5 mm. Moreover, the bubble sizes obtained from the optical measurements in the upper part of the container are slightly larger than the ones obtained from the X-ray measurements. These differences could be explained by two reasons. First, the X-ray beam has a Gaussian shape with a maximum at a height of 60 mm. Therefore, the container is not illuminated homogeneously. Since the image thresholding is performed using a fixed value for the entire image, the less illuminated bubbles provide a weaker signal and the algorithm might determine a too small bubble size. This problem explains the differences in the bubble size near the nozzle and in the upper part of the container near the free surface. Second, the interface between bubble and liquid is directly detected by the optical method, whereas the X-ray radiography relies on the absorption contrast, which is mainly determined by the local bubble cross-section along the X-ray beam direction. Therefore, it can happen that two bubbles of the same volume but different shapes (see the two examples on the left-hand side of figure 5a) will provide different X-ray signals. The corresponding X-ray intensity can be estimated using the Beer-Lambert law I = I₀·exp(−µx), where I₀ is the primary beam intensity, µ is the X-ray absorption coefficient, and x is the thickness of the liquid. Calculations of the X-ray intensity were carried out taking into account the values I₀ = 100 and µ = 0.1, the latter being the water absorption coefficient for the X-ray energy of 320 keV. As expected, the results show that bubbles with the larger thickness parallel to the X-ray beam (case A in figure 5) deliver a stronger X-ray signal compared to those with the smaller thickness (case B). Thresholding both bubble signals using the same threshold value leads to a distinctly larger error in case B in comparison to case A. The consequence is that the total bubble size for case B will be underestimated. This effect might explain the differences in the bubble size at a height of approximately 37.5 mm. It is known from optical measurements that significant deformations of the bubble occur after detachment of the bubbles from the injector [22]. These fluctuations of the bubble shape lead to a flattening of the bubble in the vertical plane. The increased cross-section detected by the camera simulates a supposed, but not real, increase in the bubble size. The bubble deformation is defined as the ratio between the lengths of the major and minor axes of the fitted ellipses. Corresponding results are shown in figure 4b. The bubble shape undergoes strong deformations along the whole bubble trajectory and reaches a maximum at a height of 25 ± 5 mm. Another deformation peak is observed at a height of 48 ± 2 mm.
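Returning to the Beer-Lambert estimate above (cases A and B in figure 5a), a short numerical sketch makes the shape effect explicit. I₀ and µ are the values quoted in the text, the 12 mm path equals the container gap, and the two bubble thicknesses are illustrative assumptions; the paper does not state the units of µ, so a per-millimetre coefficient is assumed here.

```python
# Beer-Lambert estimate of why two equal-volume bubbles of different
# shape give different X-ray signals. Bubble thicknesses are illustrative.
import math

I0 = 100.0    # primary beam intensity (value quoted in the text)
mu = 0.1      # water absorption coefficient, assumed per mm
path = 12.0   # liquid thickness along the beam, mm (container gap)

def intensity(bubble_thickness_mm=0.0):
    # A gas bubble absorbs almost nothing, so it shortens the attenuating
    # water column by its thickness along the beam: I = I0*exp(-mu*x).
    return I0 * math.exp(-mu * (path - bubble_thickness_mm))

background = intensity()    # pure liquid
case_A = intensity(6.0)     # thick along the beam -> strong contrast
case_B = intensity(3.0)     # thin along the beam  -> weak contrast
print(background, case_A - background, case_B - background)
```

With these numbers, case A stands roughly 25 intensity units above the background while case B stands only about 11 above it, so a common threshold clips a larger fraction of the case B bubble.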
According to Bhaga [3], the bubble shape can be considered as an oblate ellipsoid with a wobbling surface. The bubble tilt angles (inclination angle of the major axis) and velocities are shown in figures 4c and 4d, respectively. The tilt angle reveals that the bubbles detach from the nozzle with the main axis inclined positively to the horizontal, which is governed by the bevel shape of the injection nozzle. Then, the zig-zag motion of the bubble starts fairly quickly. At a height of 29 mm the bubble orientation changes and the bubble tilt angle starts to decrease drastically until a turnaround point is reached at a height of 45 mm. Beyond a height of 68 mm the bubble tilt angle data are strongly scattered, but the zig-zag motion can be clearly identified. The data for the bubble velocity in figure 4d indicate a short acceleration phase just after detachment from the injection nozzle. The final velocity reaches values up to 400 mm/s, with a pronounced peak at a height of 31 mm before the zig-zag motion appears. The scatter of the velocity data increases with increasing distance from the gas injection point. Figure 5b illustrates the experimental X-ray signal from a horizontal bubble cross-section (black) at a height of ~25 mm above the bottom of the container. It becomes obvious that the X-ray signal in water is strongly affected by noise. The corresponding signal-to-noise ratio, calculated as SNR = (A_signal/A_noise)², where A_signal is the signal amplitude and A_noise is the average noise amplitude, amounts to 1.3. Such a low signal-to-noise ratio is attributed to strong scattering of the X-rays in water. However, despite the weak contrast between the bubble and the background and the related low signal-to-noise ratio, the algorithm applied for the analysis of the X-ray images provides fairly accurate results with only slight discrepancies in the bubble size and shape (see figure 4). In conclusion, it can be successfully applied for the analysis of bubbles rising in liquid metals, where the higher X-ray attenuation of the metallic melt provides a better contrast and a better signal-to-noise ratio (see the red line in figure 5b). Figure 5. a) Camera view and side view of two bubbles of equivalent volume; simulated X-ray intensity obtained for case A (black) and case B (red) for the largest bubble cross-section parallel to the X-ray beam. b) X-ray signal from a horizontal cross-section of a bubble with a diameter of 3 mm in water (black) and in GaInSn (red).
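A short sketch of how such an SNR value can be estimated from a line profile follows; the profile below is synthetic, whereas in practice it would be a row of the shading-corrected image through the bubble, as in figure 5b.

```python
# Estimating SNR = (A_signal/A_noise)^2 from a horizontal cross-section
# through a bubble. The profile here is synthetic (an assumption); in
# practice it is a row of the shading-corrected radiography image.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 200)          # horizontal position, mm
profile = 5.0 * np.exp(-(x / 1.5) ** 2)    # bubble signal of amplitude 5
profile += rng.normal(0.0, 1.0, x.size)    # background noise

a_noise = profile[np.abs(x) > 5.0].std()   # noise amplitude away from the bubble
a_signal = profile.max()                   # peak amplitude over the bubble
print((a_signal / a_noise) ** 2)           # signal-to-noise ratio
```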
Comparison GaInSn versus water
This section aims to demonstrate the suitability of the X-ray radiography in the eutectic GaInSn alloy and to show some differences in the bubble rising dynamics between water and GaInSn for a given Ar gas flow rate of 50 cm³/min. Figure 6 presents the X-ray radiography image analysis for bubbles in GaInSn according to the image processing steps described in section 2.2: a) raw image, b) image with the reference image subtracted, c) Gaussian-filtered image, d) binary image, e) raw image with bubble boundaries and bubble centers, f) raw image with fitted ellipses and their main axes. In comparison to the situation in water, the horizontal cross-section of a 3 mm bubble in GaInSn shows a much better signal-to-noise ratio of 14.7 (see figure 5b). Such a large value is rather convenient for further data processing. As a result, the analysis of the experiments in GaInSn should provide much more accurate values for bubble size and shape than in water, where the validation of the method was carried out. The experiments in GaInSn were performed using the same gas injector and the same gas flow rates as in water. The most striking difference is that only 6 bubbles per second are injected from the orifice in GaInSn at the same gas flow rate, while a bubble detachment rate of 37 bubbles/s was observed in water. The average bubble diameter is 6.35 ± 0.4 mm, which is twice as large as in water. The measured value is in very good agreement with the value calculated from the gas flow rate and bubble detachment frequency (d ≈ 6.42 mm); a numerical cross-check of both diameters is given below. The significant deviations with respect to bubble number and size can be attributed to the strong differences in the surface tension and the wetting behavior at the injection nozzle [14]. Bubbles undergo the maximum deformation just after detachment from the nozzle. The smaller scatter of the data points for the bubble deformation (see figures 4b and 7b) is also a direct result of the larger surface tension of GaInSn, which prevents significant surface wobbling. During the initial stage of their rise, the bubbles show a distinct deformation and can be described as ellipsoids, but on their further trajectory the bubbles approach an almost spherical shape: the deformation tends to become close to 1 at container heights above 30 mm. When leaving the nozzle, the bubble main axis is almost parallel to the bottom of the container. Strong variations start beyond a height of 25 mm. At positions below 10 mm the bubbles are still attached to the nozzle. This elongation of the bubble along the vertical direction explains the large tilt angles in figure 7c. The strong scattering of the tilt angle data in the upper part of the container is caused by the almost spherical bubble shape, where all diagonals are almost equivalent. The almost spherical bubble shape impedes the accurate determination of the main axis by the Matlab algorithm. The bubble velocity in the GaInSn melt reaches values up to 400 mm/s, with a pronounced peak at a height of 21 ± 1 mm.
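The equivalent diameters quoted from the detachment frequency can be verified with a one-line calculation: each bubble carries the volume Q/f, hence d = (6Q/(πf))^(1/3).

```python
# Cross-check of the equivalent diameters inferred from the bubble
# detachment frequency f and the gas flow rate Q: d = (6*Q/(pi*f))**(1/3).
import math

Q = 50.0 / 60.0                             # gas flow rate, cm^3/s (50 cm^3/min)
for fluid, f in [("water", 37.0), ("GaInSn", 6.0)]:
    d_mm = 10.0 * (6.0 * Q / (math.pi * f)) ** (1.0 / 3.0)
    print(f"{fluid}: d = {d_mm:.2f} mm")    # -> 3.50 mm and 6.42 mm
```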
In summary, the applicability of X-ray radiography for the quantitative measurement of the parameters of bubbles rising in a stagnant liquid was successfully validated by parallel optical measurements in water. Investigations in liquid metals benefit from a better signal-to-noise ratio due to the better X-ray contrast. The experiments showed that, for the chosen Ar gas flow rate, the total number of injected bubbles is much larger in water than in GaInSn. Likewise, the bubble deformation is larger in water than in GaInSn. The bubbles in GaInSn tend to have a more spherical shape, while the bubbles in water can be described as oblate ellipsoids with a wobbling surface. The main reason for these effects is the higher surface tension of GaInSn in comparison to water. The rising velocities of the gas bubbles are found to be rather similar in both water and GaInSn. The higher buoyancy in the liquid metal is obviously compensated by the increased drag force owing to the larger bubble size [10].
"year": 2017,
"sha1": "fcd9f190ba44cb8b7d39e0dc8211909c0ef6b74d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/228/1/012009",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3f5096c9d550a8e3f7ba2870f5838774efe53683",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
25240845 | pes2o/s2orc | v3-fos-license | Endometriosis Patients in the Postmenopausal Period: Pre- and Postmenopausal Factors Influencing Postmenopausal Health
Objective. To evaluate patients' health status and the course of endometriosis from the premenopausal to the postmenopausal period and evaluate influencing factors that may be relevant. Methods. Questionnaire completed by 35 postmenopausal women in whom endometriosis had been histologically confirmed premenopausally. Correlation and regression analyses were carried out to identify factors relevant to their postmenopausal health status. Results. Overall, there was clear improvement in typical endometriosis symptoms and sexual life. Clear associations (P < 0.005) were observed between premenopausal factors like physical limitations caused by the disease, impaired social contacts and psychological problems, and postmenopausal pain and impairment of sexual life. Three statistical models for assessing pain and impairment of sexual life in the postmenopausal period were calculated on the basis of clinical symptoms in the premenopausal period, with a very high degree of accuracy (P < 0.001; R² = 0.833/0.857/0.931). Conclusions. The results of the survey strongly suggest that physical fitness and freedom from physical restrictions, a good social environment, and psychological care in both the premenopausal and postmenopausal periods lead to marked improvements in the postmenopausal period with regard to pain, dyspareunia, and influence on sexual life in endometriosis patients.
Introduction
Endometriosis is one of the most common gynecological diseases, with an estimated current incidence of 40,000 new patients per year in Germany [1]. Worldwide, nearly 80 million women are affected by the disease. Data in the medical literature suggest that the prevalence of endometriosis is 10-15% in all women of reproductive age [1][2][3].
Several theories on the etiology and pathogenesis of endometriosis have been proposed, but a definitive explanation of the pathophysiological mechanism involved has not yet been found. Three basic theories are under discussion: the theory of cell transplantation [4], the theory of metaplasia [5], and the theory of the endometrial-subendometrial unit or "archimetra" [6]. Immunological, endocrinological, genetic, and inflammatory factors also appear to be essential elements in the pathogenesis of endometriosis [7][8][9][10][11][12][13][14]. However, estrogen dependence is considered to be central to the pathophysiological process and persistence of the lesions [11,15]. The latter concept has led to the widely held belief that endometriosis is a disease of premenopausal women that is "cured" by the menopause [16,17].
Cases have occasionally been reported in the literature describing endometriosis in the postmenopausal period in patients with or without hormone replacement therapy [18][19][20][21][22][23][24]. However, the current state of the data is inadequate to allow any assessment of this and the mechanisms underlying the entity have not been explained [25].
The aim of the present study was to evaluate health status and the course of endometriosis from the premenopausal to the postmenopausal periods. In addition, relevant factors influencing this were to be identified. Using a statistical model, an attempt was made to calculate the expected health status in the postmenopausal period on the basis of premenopausal clinical symptoms.
Methods
Institutional review board (IRB) approval was obtained (ref. number K-20-12). Data for 35 endometriosis patients who were postmenopausal at the time of responding to a questionnaire were collected and statistically analyzed in this epidemiological study. Before the menopause, all of the participants had undergone surgery for endometriosis, with the findings confirmed histologically.
The inclusion criteria were histologically confirmed: premenopausal endometriosis and age ≥55 years, with the last menstruation being at least 12 months previously. Patients with bilateral adnexectomy who were not receiving hormone replacement therapy were also included. Exclusion criteria were questionnaires that were not fully completed and patients under the age of 55 who had undergone hysterectomy premenopausally. The hysterectomy would have made it impossible to obtain a menstrual history, giving rise to bias in relation to menopausal status.
A total of 150 questionnaires were presented to two self-help groups (the Austrian Endometriosis Association and the German Endometriosis Association) and were also made available in our own outpatient gynecological department. Forty-one women decided to participate in the study anonymously. The anonymous questionnaire letter boxes were opened at the end of 6 months and the forms were checked for completeness. Six of the 41 questionnaires had not been fully completed or did not match the inclusion criteria. All of the questions were explicitly related to endometriosis. The patients were thus instructed to respond to the questions (for example, in relation to "psychological problems") exclusively in relation to endometriosis. As an alternative response, the patients were also given the option "due to a different cause." The questionnaire included a total of 147 questions, divided into three parts. Part 1 (29 questions) was concerned with the patient's general medical history (18 questions), including social and family history, and also surgical history (11 questions). Part 2 (54 questions) inquired into symptoms (21 questions) and complaints (33 questions) in the period before the menopause. Part 3 (64 questions) was concerned exclusively with questions about symptoms (24 questions) and complaints (40 questions) in the postmenopausal period. A visual analogue scale (best grade: 0, poorest grade: 10) was used for responses to questions about pain and impairment of sexual life.
Statistical Analysis.
The exact Wilcoxon test was used to compare the patients' general condition before and after the menopause. Spearman's rank correlation coefficients and point biserial correlation coefficients were calculated to assess the associations between premenopausal factors and the postmenopausal target variables; regression models were then calculated to estimate these target variables from the premenopausal clinical symptoms.
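Because the analysis scripts themselves are not reproduced in the paper, the following is only an illustrative sketch of how these tests and models can be run; the data, variable names, and model terms below are placeholders, not the study's actual dataset or specification.

```python
# Illustrative sketch of the analyses named above, run on placeholder
# data with SciPy/statsmodels; the real questionnaire variables are those
# listed in the paper's supplemental content.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import wilcoxon, spearmanr, pointbiserialr

rng = np.random.default_rng(1)
n = 35
pre_pain = rng.integers(0, 11, n).astype(float)        # VAS 0-10, premenopausal
post_pain = np.clip(pre_pain - rng.integers(0, 6, n), 0, 10)
phys_limit = rng.integers(0, 2, n)                     # binary: physical limitation yes/no

# Paired comparison of pre- and postmenopausal scores
print(wilcoxon(pre_pain, post_pain))

# Correlations with the postmenopausal target variable
print(spearmanr(pre_pain, post_pain))                  # ordinal vs ordinal
print(pointbiserialr(phys_limit, post_pain))           # binary vs continuous

# Regression model estimating postmenopausal pain from premenopausal factors
X = sm.add_constant(pd.DataFrame({"pre_pain": pre_pain, "phys_limit": phys_limit}))
model = sm.OLS(post_pain, X).fit()
print(model.rsquared)                                  # compare with the reported R^2
```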
Results
The group of patients consisted of 35 women aged 37-79 years. The patients' average age was 53.9 ± 9.78 years.
Their average age at the onset of menopause was 43.03 years. The median time from menopause to completing the questionnaire was 11 years (Table 1). Each participant had undergone a mean of 2.74 ± 1.69 gynecological operations due to endometriosis at the time of the questionnaire.
All of the patients (100%) stated that they were enjoying or had enjoyed their occupations, including nine (25.7%) who were retired at the time of the questionnaire.
To the question of how often the participants had been pregnant, nine (26%) responded that they had never been pregnant. Ten (28%) had been pregnant once, nine (26%) twice, and seven (20%) more than twice. Nine participants (26%) had not given birth to any children, 13 (37%) had given birth once, 11 (31%) had had two children, and two (6%) had given birth to more than two children.
There were no relevant differences from the normal population with regard to concomitant diseases. The large number of 21 patients with allergies (60%) was notable. Six participants (17.1%) stated that they were regular smokers at the time of the questionnaire, while 16 (45.7%) had been smokers in the past.
With regard to family history, nine patients (25.7%) stated that their mothers had had dysmenorrhea; three of the mothers had histologically confirmed endometriosis. Six patients (17.1%) reported that a sister had dysmenorrhea; five of the six sisters had histologically confirmed endometriosis. Eight patients (22.9%) stated that their daughters had dysmenorrhea; three of the daughters had histologically confirmed endometriosis.
General Health Status.
Eleven patients (31.4%) described their general state of health in the premenopausal period as "excellent" to "good," while 24 patients (68.6%) described it as "not so good" to "poor." In the postmenopausal period, 18 patients (51.4%) described their general state of health as "excellent" to "good," while 17 patients (48.6%) described it as "not so good" to "poor" (Figure 1). In the premenopausal period, psychological problems were reported by 51.4% of the patients and restriction of social contacts by 62.9%, and as many as 80% described occupational restrictions due to endometriosis. The corresponding postmenopausal figures were 20%, 17.1%, and 20% (Table 1). Other symptoms the patients experienced are summed up in Table 2.
Pain
Premenopausally, 21 patients (60%) had regularly taken analgesics; 10 (28.6%) had taken gonadotropin-releasing hormone (GnRH) analogues; and 15 patients (42.9%) had taken the contraceptive pill. The mean for the total period of drug intake was 40.9 months. Twenty-three of the patients (65.7%) stated that taking medication had not led to any improvement in symptoms, and side effects developed in 37.1%.
A total of 26 patients (74.3%) had undergone hysterectomy, 10 of them with bilateral adnexectomies. Three patients had a bilateral adnexectomy without hysterectomy. At least one laparoscopy was carried out on 32 patients and 16 patients had at least one laparotomy. Thus, 13 patients had at least one laparoscopy and a laparotomy. Table 3 shows the most notable associations between the general intensity of pain, pain intensity during sexual intercourse, and influence on sexual life in the postmenopausal period, on the one hand, and various premenopausal and postmenopausal variables, on the other hand. All of the parameters are listed in Supplemental Digital Content 1. Factors that correlated poorly with the target variables were concomitant diseases and bowel symptoms, family history, all forms of drug intake including the period of medication and alternative therapies, pregnancies, and parity. In addition, only a slight association was noted between the number, type, and method (surgical technique) of operations and postmenopausal target variables, with the exception of hysterectomy and adnexectomy or combinations of them (Table 3). There were no correlations worth mentioning between bladder symptoms in the premenopausal period and those in the postmenopausal period.
Discussion
It is interesting that concomitant diseases and bowel symptoms (Table 2), family history, all forms of medication including their duration and alternative therapies, pregnancy, and parity proved to be quite unimportant influencing factors relative to the postmenopausal target variables mentioned. The number, type, and method (surgical techniques) of operations carried out also hardly correlated at all with the target variables "general pain experienced, " "pain during sexual intercourse, " and "disturbance of sexual life, " with the exception of hysterectomy and adnexectomy and the combinations of them listed in Table 3. It is notable here that hysterectomy with the adnexa or bilateral adnexectomy led to marked deterioration of symptoms during the postmenopausal period. As Figure 1 shows, there was a clear improvement in the patients' general condition when the premenopausal and postmenopausal periods are compared. However, it should be pointed out that this parameter is probably composed of several factors. It appears that a poor general state of health during the premenopausal period markedly correlates with more severe general pain in the postmenopausal period (Table 3).
Clear improvement with regard to pain, dyspareunia, and influence on sexual life is seen in the postmenopausal period (Table 1). However, it is also notable here that general pain and its intensity in the premenopausal period are not significantly associated with any postmenopausal findings. By contrast, dyspareunia and influence on sexual life in the premenopausal period certainly correlate well with symptoms in the postmenopausal period (Table 3).
We would interpret these results as follows. As endometriosis is a long-term disease that usually has a course lasting several years, the symptoms and complaints in the premenopausal period can become chronic and can thus have an effect on postmenopausal life [26]. Dyspareunia and a negative effect on sexual life in the premenopausal period may thus perhaps be able to leave "inner scars" that have negative effects on the postmenopausal period. On this view, general pain that is not necessarily associated with sexual life appears to have a less marked effect on the period after the menopause.
The results with regard to psychological problems, impairment of social contacts, and impairment of everyday life and working life due to endometriosis in the premenopausal period are surprising. Effects of these are seen very clearly during the postmenopausal period in the deterioration in general pain experienced, pain during sexual intercourse, and disturbances of sexual life. There is also evidence in the literature in this connection showing that chronic pelvic pain (CPP) can have a negative influence on family life, sexual life, and social life [26]. The target variables in the postmenopausal period understandably also deteriorate from the psychological point of view (Table 3). The patients were asked to relate psychological problems, impairment of social contacts, and impairments of everyday life and working life only to the endometriosis. It might be questionable whether it is really possible for patients to assign such psychological factors to endometriosis in isolation and objectively. However, the data suggest that the patients were in fact able to do this, since a marked decline in these factors was observed in the postmenopausal period.
Similarly surprising were the results with regard to physical restrictions (Table 3). Almost all physical restrictions in the premenopausal and postmenopausal periods correlate very strongly with the postmenopausal target variables (general pain experienced, pain during sexual intercourse, and disturbances of sexual life). This clearly shows how important maintenance of physical fitness is even in the premenopausal period.
Stress plays a very important role in the clinical picture of endometriosis [27,28]. It has been shown in an animal model that stress leads to a deterioration in endometriosis [29]. It has been well demonstrated that stress is often linked to nicotine consumption [30]. This is also reflected in the present study, in which the proportion of smokers was notably high (46%) in comparison with that in the general population in Austria (19% of women over the age of 15) [31].
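To see how such a comparison might be quantified, the sketch below runs a one-sample binomial test of the cohort's smoking proportion against the Austrian reference rate. This is purely illustrative and not the authors' analysis; the cohort size of 35 is inferred from the surgical counts reported above ("26 patients (74.3%)").

```python
# Illustrative one-sample binomial test (not the authors' analysis): does a
# cohort smoking proportion of 46% exceed the Austrian reference rate of 19%?
from scipy.stats import binomtest

n_patients = 35                        # inferred from "26 patients (74.3%)"
n_smokers = round(0.46 * n_patients)   # ~16 smokers
result = binomtest(n_smokers, n_patients, p=0.19, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")
```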
There was also a high proportion of patients with allergies (60%). Comparable prevalence figures among women in the general population are 25% in Austria [32] and 29% in Switzerland [33]. Studies have shown that endometriosis patients suffer significantly more often from immunodeficiencies, asthma, and allergies [34]. This might also be attributable to increased stress caused by the endometriosis, leading to a deterioration in the immune system [35,36]. However, it has already been clearly shown that physical exercise and psychological care lead to a marked reduction in the level of stress in endometriosis patients [37].
On the basis of the present results, it can be concluded that physical fitness and an absence of physical symptoms, a good social environment, and psychological care not only during the premenopausal period but also in the postmenopausal period as well lead to marked improvement with regard to pain, dyspareunia, and effects on sexual life.
In Germany, there are already two rehabilitation centers for endometriosis patients that have been certified by the Endometriosis Research Foundation (Stiftung Endometriose-Forschung (SEF)) [38]. The centers focus on physical exercise, physiotherapy, and psychological care.
If the results of the present study are confirmed by further research, there will be an urgent need for accessible institutions of this type to be established in every country in the world. This is particularly the case in view of the fact that endometriosis has been identified as a high cost factor in studies investigating economic targets, which have indicated that a potential cause of this might be inadequate infrastructure [39][40][41][42][43].
The good predictability of the target parameters "general intensity of pain in the postmenopausal period," "pain intensity during sexual intercourse in the postmenopausal period," and "influence on sexual life in the postmenopausal period" relative to premenopausal factors might be due to endometriosis patients' very good ability to recall the symptoms and complaints that they had during the premenopausal period (Supplemental Digital Content 2).
Despite the high level of statistical accuracy of the models, the current state of knowledge and the questionable representativeness of the data discussed above do not make it currently possible to draw any detailed conclusions regarding the actual course of the disease. Therapeutic decisions should on no account be made on the basis of these models. Endometriosis in itself represents an extremely complex and polymorphous disease, and it is influenced by the patients' individual characters. The absolute focus should always be on individual consideration and treatment of each and every patient.
A major limitation of the present study is the relatively small number of cases included. This is due to the difficulty of accessing a relevant group of patients with treatments that may already lie decades in the past. In addition, the fact that evidently only a small proportion of candidate patients decided to contribute their information and complete the questionnaire substantially increases the chances that individuals with exceptionally positive or exceptionally negative experiences may be overrepresented (recall bias). Furthermore, the choice of a self-help group for the investigation increases the risk of a selection bias. However, irrespective of this limitation on the validity of the study, it would be valuable to take the very marked and partly surprising results of this survey as an interesting approach that needs to be investigated in further research.
"year": 2014,
"sha1": "a355555ddaa515d954593e0c642a16faaf753d49",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2014/746705.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a09486e30dcebc04f9557e2fa4e3a713daf368a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Typical Instrument for Assessing the Genre-Based Writing
Received Aug 05, 2020; Revised Sept 07, 2020; Accepted Sept 19, 2020

The genre-based writing English curriculum in Indonesia has been implemented for years in both secondary and higher education. Unfortunately, students' writing does not yet seem to be assessed in accordance with the nature of genre-based writing itself. Assessment plays an important role in the success of teaching and learning: a proper assessment instrument leads to valid and reliable measures of learning achievement, while an improper one does not. Genre-based writing, as a distinctive approach to teaching and learning writing, likewise needs an assessment instrument that meets its nature. This paper aims to provide a preliminary understanding of a typical instrument for assessing genre-based writing.
Even though the genre-based approach to teaching and learning writing has been implemented in schools and universities over the last ten years, there appear to be problems, constraints, and mismatches between the process of teaching and learning writing through the genre-based approach and the assessment model used to assess students' writing (Dirgeyasa, 2014a).
First, teachers and lecturers seem to have diverse understandings and competences regarding genre-based writing. Dirgeyasa (2014a) reports that one of the six constraints and problems in teaching genre-based writing at school and university is this lack of knowledge among teachers and lecturers. This condition leads to deviation and confusion in students' understanding of genre writing.
Second, the process of teaching and learning writing is not in line with the model of teaching and learning genre-based writing known as the genre-based teaching and learning cycle, as proposed by Vygotsky (1978), cited in Kroll (2003). Generally, teachers and lecturers still teach writing by a conventional method in which they explain the concept, give an example of the writing, and have students follow the model given. The students are not yet guided through the cyclic model of the genre-based approach to teaching and learning. In fact, this model supports or 'scaffolds' the learners through an interactive process of analysis, discussion, and joint and individual construction of text. In addition, this cyclic model of teaching and learning writing really helps students learn to write, moving from a very simple, assisted process to individual and independent work. Third, teachers and lecturers still seem to perceive genre-based writing as just a matter of text type or the typical writing itself. They do not pay much attention to the fact that genre-based writing also covers the teaching and learning model. Under this view, they focus on the study of text types rather than on how to teach and learn genre-based writing. It has been clearly stated that genre writing, as a new approach to teaching and learning, truly combines two things: the product of the writing itself and the way or technique or strategy by which the writing is produced (Ann, 2002). So, it is a kind of coin with two facets (Dirgeyasa, 2014b).
Fourth, observing and documenting the writing assessment models used by teachers and lecturers, it was found that they still assess students' writing through a synthetic (holistic) model in which students' writing is assessed globally, without considering and analyzing components of writing such as content, organization, grammatical patterns, and mechanics. Even when some teachers and lecturers use an analytical method of writing assessment, they still use instruments proposed by experts such as Glass (2005), Heaton (1989), and Brown (2004), which include components of writing such as content, organization, grammatical patterns, and mechanics. However, this is not in accordance with the features and characteristics of genre-based writing. Such a model may be appropriate for writing taught without a genre-based approach.
Finally, the current state of genre-based writing assessment among teachers and lecturers arises for several reasons, such as the unavailability of a genre-based assessment model, practical and pragmatic considerations, and teachers' and lecturers' lack of knowledge and understanding of genre-based writing assessment models.
Given these widespread and sporadic practices in assessing genre-based writing across schools and universities in the country, it is important to introduce a model of genre-based writing assessment. It is not only important and significant but also truly urgent to design and develop an appropriate model of genre-based writing assessment that meets each type of genre-based writing. This paper aims to provide a preliminary understanding of a typical instrument for assessing genre-based writing.
Discussion
The Genre-Based Writing Approach
Writing text types and their names can differ according to the approach taken. For example, in traditional (non-genre) terms, writing can be classified into descriptive, argumentative, expository, and so on. Under a genre approach, writing text types vary by name, such as descriptive, narrative, review, and anecdote. Under a genre approach, a writing text becomes very typical and distinctive. The typicality of genre-based writing covers (a) textual structure or generic structure, (b) social function, and (c) linguistic features (Hyland, 2003; Knapp and Watkins, 2005). This means that every text or writing type differs in terms of its structure, linguistic features, reader, and purpose. Therefore, every type of writing has its own characteristics and features showing that one is really different from another (Knapp and Watkins, 2005; Pardiyono, 2007; Dirgeyasa, 2014b). These authors also state that the characteristics of genre-based writing consist of (a) a certain communicative purpose, (b) a certain rhetorical or generic structure, and (c) certain linguistic features. In addition, a genre also has its own styles and registers (Andrew, 2002; Knapp and Watkins, 2005; Pardiyono, 2007).
In its physical structure, each genre-based writing type has different textual elements, both in name and in number. One particular genre may be simple and another complex in terms of structure. For example, the descriptive genre consists of 'identification' and 'description' in terms of rhetorical structure, and its linguistic features tend to be the present tense and adjectives. On the other hand, the recount genre consists of (Orientation) ^ (Record of events) ^ (Re-orientation), and the procedure text has (Topic + statement of purpose) ^ (Sequence of steps to accomplish the job or activities stated in the topic) ^ (Closing, if necessary), and so forth. Table 1 below shows the generic structure, textual elements, and function of recount genre-based writing in detail.
2. The Preliminary Genre Writing Assessment Model
Theoretically and empirically, writing is better assessed by analytical scoring (the analytical method) rather than by synthetic (holistic) scoring. Hyland (2003) states that analytical scoring procedures require the reader to judge a text against a set of criteria seen as important to good writing. He further argues that analytical scoring more clearly defines the features to be assessed by separating, and sometimes weighting, individual components and is therefore more effective in discriminating weaker texts. By these criteria, analytical scoring really assists raters in producing closer and better-qualified assessments of writing work.
In practical implementation, a large number of terms and criteria are used in this model of assessment. Glass (2005) proposes six criteria and indicators for writing assessment: ideas and content, organization, sentence fluency, word choice, voice, and conventions. In line with the use of the analytical model, Brown (2004) states that classroom evaluation of learning is best served through analytical scoring, in which as many as six major elements of writing are scored, thus enabling students to home in on weaknesses and to capitalize on strengths. But, again, these are not really in line with, or relevant to, genre-based writing.
In line with genre-based writing, the analytical model provides more relevant and appropriate indicators and items to be assessed. Consequently, the assessment results would be more valid, reliable, and meaningful. To this end, the assessment instrument should be designed and developed in accordance with the principles, features, and characteristics of genre-based writing, in order to meet the needs of genre-based writing.
Therefore, the assessment instrument needs improving, modifying, and adjusting in accordance with each type of genre-based writing text. Naturally, the basic criteria and indicators for each type of genre-based writing text will differ from one another. For example, the details for a recount text are different from those for a procedure text, a narrative text, or an anecdote text. Table 2 below shows how the preliminary assessment model for recount genre-based writing is designed and developed.

Table 2. The preliminary assessment instrument for recount genre-based writing.
Each performance indicator below is rated on a five-point scale (5, 4, 3, 2, 1):
1. First paragraph introduces the topic clearly and grabs the reader's attention.
2. The content/idea of the text is in line with the topic/title.
3. Overall writing makes sense / has a clear message.
4. A series of events runs in chronological (time) order.
5. The background information covers who, what, where, and when.
6. The paragraphs run cohesively and coherently.
7. The text structure/generic structure meets the nature of the recount generic structure.
8. The structural patterns follow the conventions of the English language and are in line with the recount text.
9. The vocabulary and word choices, including temporal conjunctions, are clear and correctly and properly used.
10. It uses correct spelling, and the writing is legible.
11. The text mechanics are correctly and properly used.
Student's Score = (Total Score / 55) x 100 (Dirgeyasa, 2014b)

Table 2 above shows that the assessment instrument consists of performance indicators and a scale or grade. The scale values 5, 4, 3, 2, and 1 indicate the quality or value of each item. With this instrument model, recount genre-based writing can be assessed well, in a way that meets the characteristics and features of recount genre-based writing itself.
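To make the scoring rule concrete, the following sketch converts per-indicator ratings into the instrument's 0-100 scale; the function and variable names are illustrative and not part of the original instrument.

```python
# A minimal sketch of the rubric's scoring rule: eleven indicators, each rated
# 1-5, converted to a 0-100 scale via Total Score / 55 x 100.
RECOUNT_INDICATORS = 11
MAX_PER_INDICATOR = 5

def student_score(ratings):
    """Convert per-indicator ratings (each 1-5) to the instrument's 0-100 scale."""
    if len(ratings) != RECOUNT_INDICATORS:
        raise ValueError(f"expected {RECOUNT_INDICATORS} ratings, got {len(ratings)}")
    if any(not 1 <= r <= MAX_PER_INDICATOR for r in ratings):
        raise ValueError("each rating must be between 1 and 5")
    return sum(ratings) / (RECOUNT_INDICATORS * MAX_PER_INDICATOR) * 100

# Example: a student rated 4 on every indicator scores (44 / 55) x 100 = 80.0
print(student_score([4] * 11))
```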
This will, of course, be strongly different for other genre writing types, such as the narrative text. The instrument items and indicators must follow the characteristics and features of the narrative text itself. Table 3 below shows a preliminary assessment model for narrative genre-based writing.

Table 3. The preliminary assessment instrument for narrative genre-based writing. Each performance indicator is rated on a five-point scale (5, 4, 3, 2, 1):
1. First paragraph introduces the topic clearly, grabs the reader's attention, and provides information about the characters and setting.
2. The story relates a series of events that create an entertaining story with a problem and solution.
3. The content/idea of the text is in line with the topic/title.
4. Overall writing makes sense / has a clear message.
5. The story is finished, with the complication/problem resolved in detail.
6. The paragraphs run cohesively and coherently.
7. The text structure/generic structure meets the nature of the narrative generic structure.
8. The structural patterns follow the conventions of the English language and are in line with the narrative text.
9. The vocabulary and word choices, including temporal conjunctions and temporal circumstances, are clear and correctly and properly used.
10. It uses correct spelling, and the writing is legible.
11. The text mechanics are correctly and properly used.
Student's Score = (Total Score / 55) x 100
Conclusion
Assessing writing, particularly through a genre-based writing approach, is regarded as complex and difficult to do. Its complexity arises from patterns, forms, types, functions, goals, linguistic features, and so on. Due to this complexity, assessing writing can be time-consuming and is sometimes frustrating and annoying.
The genre-based writing assessment instrument must be designed and developed in accordance with the characteristics and features of genre-based writing itself, such as communicative purpose, generic structure, grammatical patterns, and vocabulary choices, in order to provide a high-quality assessment result. That is why every type of genre-based writing has its own typical assessment instrument in terms of items, content, and number of items.
In short, genre-based writing as a new approach to writing is distinctive not only in its typical writing forms and its teaching and learning process, as is generally known, but also in its assessment model. This clearly shows that the genre-based approach is fundamentally different from other approaches, such as the product approach, the content approach, and the structure approach.
"year": 2020,
"sha1": "d2ff87482630d4529c3b169a48c3b6ae6ed99eb1",
"oa_license": "CCBY",
"oa_url": "http://journal.ucyp.edu.my/index.php/ASHREJ/article/download/46/38",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9b1820303995d9f2c2f13885d0d6cf5110077133",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Molecular Capture of Mycobacterium tuberculosis Genomes Directly from Clinical Samples: A Potential Backup Approach for Epidemiological and Drug Susceptibility Inferences
The application of whole genome sequencing of Mycobacterium tuberculosis directly on clinical samples has been investigated as a means to avoid the time-consuming need for culture isolation that can lead to a potential prolonged suboptimal antibiotic treatment. We aimed to provide a proof-of-concept regarding the application of the molecular capture of M. tuberculosis genomes directly from positive sputum samples as an approach for epidemiological and drug susceptibility predictions. Smear-positive sputum samples (n = 100) were subjected to the SureSelectXT HS Target Enrichment protocol (Agilent Technologies, Santa Clara, CA, USA) and whole-genome sequencing analysis. A higher number of reads on target were obtained for higher smear grades samples (i.e., 3+ followed by 2+). Moreover, 37 out of 100 samples showed ≥90% of the reference genome covered with at least 10-fold depth of coverage (27, 9, and 1 samples were 3+, 2+, and 1+, respectively). Regarding drug-resistance/susceptibility prediction, for 42 samples, ≥90% of the >9000 hits that are surveyed by TB-profiler were detected. Our results demonstrated that M. tuberculosis genome capture and sequencing directly from clinical samples constitute a potential valid backup approach for phylogenetic inferences and resistance prediction, essentially in settings when culture is not routinely performed or for samples that fail to grow.
Introduction
Tuberculosis (TB) remains one of the most important infectious diseases globally [1]. The gold standard for the routine clinical diagnosis and drug susceptibility testing (DST) of Mycobacterium tuberculosis is culture-based, which requires months for visible growth, leading to potentially prolonged suboptimal antibiotic treatment [2]. Establishing a resistance profile from the initial TB diagnosis is a priority. Indeed, although there are several molecular assays already endorsed by the World Health Organization (WHO), they fall short on the targets and resistance-related regions or genes needed to assure a correct prediction of resistance [3]. The potential of Whole Genome Sequencing (WGS) as a diagnostic assay has been repeatedly demonstrated: it enables a comprehensive identification of all known resistant mutations for all TB drugs and can provide reliable contact tracing information [4][5][6][7][8][9]. For these reasons, WGS-based methodologies have been implemented as a routine for early positive culture identification, resistance prediction, and surveillance at the National Reference Tuberculosis Laboratory (NRL-TB) of the Portuguese National Institute of Health (NIH) [10,11]. This approach performs at a comparable cost to phenotypic assays while offering short turnaround times. However, even early positive cultures may represent some weeks of bacterial growth. In this context, generating WGS information directly from samples (bypassing the time-consuming culture) would constitute a tremendous achievement towards a rapid DST-informed diagnosis. It has, however, a major potential hurdle, as biological samples contain variable amounts of human cells (mixed with M. tuberculosis cells) that can account for up to 99.9% of the total DNA [12][13][14], which results in a high human-to-bacterial DNA content ratio with consequent low depth of coverage and poor identification of resistance [13]. One of the alternatives proposed, and already in use for routine diagnostic purposes, consists in sequencing only regions associated with drug resistance (DR). Targeted sequencing (TS) panels have shown effectiveness in obtaining resistance profiles directly from clinical samples, providing a signature of genetic markers associated with drug resistance, and promoting a personalized treatment [15,16]. TS also increases the sequencing depth, facilitating the identification of subpopulations of susceptible and resistant bacteria (heteroresistance), which can impact the early diagnosis of DR-TB [17]. However, a TS approach does not provide information regarding the identification of novel resistance markers and does not allow genomic epidemiology and transmission inferences. As such, the ability to directly sequence the complete genome of M. tuberculosis from clinical specimens of infected patients would be the logical step forward to deliver the full potential of WGS for TB control.
Several studies in multiple areas of infectious diseases have already described the use of specific protocols relying on custom designed RNA oligonucleotides spanning the entire microbial genome, which can recover by hybridization (i.e., "target enrichment") low copy numbers of DNA directly from clinical samples with sufficiently high sensitivity and specificity to enable efficient WGS [18][19][20][21]. However, mycobacterial cells may aggregate because of the high mucus content of respiratory samples, meaning that the volume and Acid-Fast Bacilli (AFB) count may not represent the total quantity of mycobacteria available [12,13,21]. Therefore, clinical samples require pre-processing for homogenization and enrichment purposes and depletion of non-Mycobacterium cells/DNA.
The main objective of this study is to provide a proof-of-concept regarding the application of the molecular capture of M. tuberculosis genome sequences directly from positive sputum samples collected from TB patients as a potential backup approach for epidemiological and drug susceptibility inferences.
Impact of Sample Characteristics (Smear Grade and Human/Bacteria Load) on Genome Capture Success
For the sake of clarity, all data regarding samples' characterization and WGS-associated data are summarized in Table S1. All 100 smear-positive sputum samples were subjected to the SureSelect XT HS Target Enrichment protocol (Agilent Technologies, Santa Clara, CA, USA; see Methods). Among these, 48 were 3+ smear samples, 33 were 2+ and 19 were 1+. For 30 samples, the final DNA amount fell in the range of 2.4-9.9 ng, slightly below the recommended minimum input of 10 ng.
The number of M. tuberculosis (mean = 3821.5 katG copies) and human (mean = 146,626.5 β-actin copies) cells per µL were determined for 81 samples. For the remaining 19 samples, the total volume of the extracted DNA was used in the target enrichment protocol. While higher numbers of human cells were expectedly associated with higher DNA inputs, the number of M. tuberculosis was higher for lower DNA inputs (Table S1). Moreover, comparing the three smear categories, lower smear grades correlated with lower M. tuberculosis loads and higher number of human cells, while 3+ samples showed the highest M. tuberculosis loads and lowest number of human cells (Figure 1).

Figure 1. Differences between smear groups were tested using the Mann-Whitney U test. * p-value < 0.05; ** p-value < 0.01. All considered metrics are higher for higher smear grades (2+ and 3+), with the exception of β-actin copies, which is higher for the 1+ samples, although not statistically significant.
Most samples (64/100) generated a final library with a molarity below 0.5 nM (minimum recommended for loading the sequencing apparatus Illumina NextSeq 550), leading to the need of concentrating the pooled libraries in a SpeedVac system. Of note, this was not due to low initial DNA input since the two variables were inversely proportional to each other (Table S1). Even with the addition of a concentrating step, that allowed a better normalization of libraries and an increase in the flow cells output, the generated number of reads per sample was highly variable and ranged between 306 and 30,635,472 paired-end reads (mean = 6,170,816.36; median = 2,655,720) ( Figure 1C).
As a means of assessing the success of the target enrichment, we determined the percentage of reads "on target" (reads that mapped against the M. tuberculosis H37Rv reference genome) and the percentage of reads classified by Kraken2 as Mycobacteriaceae family (or lower taxonomic levels). Expectedly, these two metrics were highly correlated and directly proportional (Table S1). Overall, we obtained promising results of target enrichment for a high number of samples, with 78/100 samples having at least 50% of reads "on target", and 44 of these having more than 90% of reads "on target". Moreover, the number of raw reads and reads on target was higher for higher smear grades samples (Figures 1 and 2).
Figure 2. Overview of the relation between genome coverage (bottom panel) and multiple factors (microscopy grade, No. of trimmed reads, No. of katG copies and No. of β-actin copies), ordered from the sample with the highest to the lowest % genome coverage. Samples with higher genome coverages correlate with higher smear grades (top panel), number of katG copies and, expectedly, number of reads generated. β-actin copies show the opposite pattern, as lower numbers of copies correlate with higher genome coverages. As for the percentage of reads "on target", it is highly variable as, for instance, a high genome coverage also depends on the number of reads, even if the capture/enrichment is successful.
However, the success in the capture of M. tuberculosis from biological samples does not indicate that enough reads were obtained for a given sample to be suitable for downstream analysis (i.e., requiring enough horizontal and vertical genome coverage). For example, only 37 out of 100 samples showed ≥90% of the reference genome covered with at least 10-fold depth of coverage (see Methods for details), which we routinely use in our laboratory as minimum requirements for genome-based surveillance of M. tuberculosis. Of these, 27, 9, and 1 were 3+, 2+, and 1+ smear grade samples, respectively, showing that success can be obtained for samples with low M. tuberculosis content (Figure 2).
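The breadth-at-depth criterion used above can be checked programmatically; the sketch below assumes a per-position depth array (e.g., parsed upstream from samtools depth output) and the 90%/10-fold thresholds quoted in the text.

```python
# Sketch of the validation criterion: a sample passes if >=90% of reference
# positions are covered at >=10-fold depth. Parsing of the depth file is
# assumed done upstream; thresholds are those quoted in the text.
import numpy as np

MIN_DEPTH = 10      # minimum fold coverage per position
MIN_BREADTH = 0.90  # minimum fraction of the reference covered at MIN_DEPTH

def passes_coverage(depths: np.ndarray) -> bool:
    breadth = np.count_nonzero(depths >= MIN_DEPTH) / depths.size
    return breadth >= MIN_BREADTH

# Toy example on a ~4.4 Mb genome with Poisson-distributed depths
rng = np.random.default_rng(0)
print(passes_coverage(rng.poisson(30, size=4_400_000)))  # True
```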
SNP-Based Core Genome Analysis
Aiming at understanding the usefulness of the M. tuberculosis genomes captured through the target enrichment approach for genomic surveillance purposes (e.g., to study phylogenetic relationships), we analysed the 37 samples for which we obtained ≥90% of the reference genome covered with at least 10-fold depth of coverage, following a single nucleotide polymorphism (SNP)-based core genome analysis (Figure S1). The generated minimum spanning tree was based on 3441 variable positions and allowed the detection of four genetic clusters. In four patients, each with two samples available for analysis, no SNP differences were found among the pairs of DNA samples, reinforcing the reproducibility and robustness of the methodology.
Drug-Resistance/Susceptibility Prediction
All samples were subjected to drug-resistance screening directly from reads using TB-profiler [22,23]. Since some of the genomic regions screened have homology with other bacterial species (e.g., rrs gene), the raw data were firstly filtered to exclude non-Mycobacterium reads and reduce false-positive hits in these regions. With this approach, 68 and 42 samples could be screened in ≥50% and ≥90% of the >9000 nucleotide positions ("hits") analysed by TB-profiler, respectively. Among the 42 samples for which ≥90% hits were covered, 24 had 100% hits covered. Moreover, it was possible to identify a multidrug resistant strain and confirm the fully susceptible profile associated with most samples (Table S1). As 18/42 samples lacked coverage in some positions, mostly associated with the rrs gene, we repeated the same analysis by reducing by half the depth of coverage required to validate a position. This allowed the validation of 100% of the positions in 32 samples, without affecting the mutations identified previously with the default cut-off depth of coverage of 10 reads.
Importantly, this analysis also allowed the identification of mutations in minor populations in 6/42 samples, or 12/100 when considering lower coverage samples (with proportions between 11% and 25% associated with resistance to isoniazid, pyrazinamide, ethambutol, and aminoglycosides; Table S1), that could have been missed if sequencing had been performed from a cultured strain and not directly from the clinical sample.
Discussion
In the present study, we provide a proof-of-concept on the value of molecular capture of M. tuberculosis genome sequences directly from positive sputum samples for resistance prediction, genomic surveillance, and potential evaluation of intra-host genetic diversity. The capture of large genomic sequences directly from clinical samples, such as the baits-based Agilent SureSelect procedure, has been widely used in genetic-related diagnoses such as hemoglobinopathy cases [24] and virus typing [25]. More recently, its role in the surveillance and diagnosis of M. tuberculosis has been explored [12][13][14][20][21] with uncertain efficiency and with results not easily comparable due to different methodologies and/or sample selection criteria. For instance, Nimmo et al. (2019) followed a similar enrichment protocol (SureSelect), but no information is available regarding AFB counts or M. tuberculosis quantification of the samples processed, which hampers the comparison of success rates [26]. Similarly, Doyle et al. (2018) used the SureSelect XT approach and reported high success rates, even among AFB 1+/scanty samples [27]. Nonetheless, in line with our results, the authors also observed the best results with higher smear grades, showing that AFB counts can be a good predictor of success [27]. Despite providing important clues about the usefulness of this approach in M. tuberculosis, these studies mostly focused on the drug susceptibility issue and used limited sample datasets. In our study, we provide a detailed characterization of 100 samples, covering not only the analytic parameters but also the upstream sample traits that can ultimately affect the outcome of the protocol. Furthermore, we tried to mimic a real diagnostic scenario where each sample was sequenced only once. For instance, some samples with a good percentage of reads "on target" but a low number of reads could have been recovered with new rounds of sequencing to increase genome coverage.
M. tuberculosis is particularly appropriate for the use of diagnostic WGS with enrichment since, unlike the majority of pathogenic organisms, it has a well-characterized clonal nature, with low levels of sequence variation, and does not undergo recombination or horizontal transfer [28]; thus, a stable set of oligonucleotide baits can be created/designed and sequencing data can be mapped against a reference genome. By overcoming the constraints of time-consuming laboratory procedures for the isolation of M. tuberculosis strains in culture, we were able to provide a methodology that allows not only the identification and prediction of genotypic resistance, but also the possibility to integrate this approach into "real-time" surveillance for rapid articulation with the public health authorities. Within a 5-day wet-lab procedure, after DNA isolation directly from sputum samples, we can retrieve all the information needed for routine diagnostic purposes, skipping the 1-3 week period required for culture isolation. Furthermore, a rough estimate of the total cost of this methodology showed, compared with WGS-based analysis of strains isolated in culture, an increase of about EUR 120-150. Thus, an estimated final cost would roughly be below EUR 200 per sample, depending on the desired coverage and on the sequencing equipment and flow cell that is used. However, it is important to note that other implementations that vary in both turnaround times and final costs are possible.
The decision regarding which samples should undergo additional enrichment with RNA baits and the robustness of the computational analysis is crucial for the success of the direct-WGS workflow. As expected, samples with higher AFB counts and consequently a higher number of katG copies (an indirect estimate of the number of M. tuberculosis cells) were more likely to provide confident results. For example, we obtained a higher number of reads on target for higher smear grade samples (i.e., 3+ followed by 2+) (Figure 1). Contrarily, the number of β-actin copies seems to be inversely correlated with the number of katG copies and, hence, the success of the procedure. Although on a speculative basis, we believe this can be related to the sample collection, as a patient with a higher M. tuberculosis load may yield samples enriched in mucus/bacteria rather than in human cells. This is also illustrated by the results obtained among the 37 out of 100 samples showing ≥90% of the reference genome covered with at least 10-fold depth of coverage, as 27, 9, and 1 samples were 3+, 2+, and 1+ smear grade samples, respectively (Figure 2). However, these data also show that samples with lower AFB/katG counts could also yield good results, as the bioinformatics WGS data analysis improvements (including removal of non-mycobacteria reads and establishment of "success" thresholds for analysis) allowed the successful inclusion of several samples with smear results <3+. The drug-resistance/susceptibility prediction was not an exception, as the 42 samples for which ≥90% of the >9000 hits surveyed by TB-profiler were covered included several 2+ samples.
Another relevant application of direct WGS is the study of M. tuberculosis genetic diversity in sputum samples, which might better reflect the within-patient bacterial populations. Unlike WGS performed from DNA isolated from pure cultures, which potentially leads to the loss of information on the existence of sub-populations, sequencing directly from clinical samples can provide information on the real scenario of the in vivo sub-populations that might co-exist during the infection period. Once more, this can be illustrated for 12 out of the 100 samples as we identified mutations in minor populations (ranging from 11% to 25% intra-patient frequency) associated with resistance to several antibiotics.
There are limitations to surpass before direct WGS approaches can be used to support the control of the TB pandemic. All these workflows are unaffordable in the majority of high/medium TB-burden countries, and sensitivity remains low compared with culturing. However, in terms of drug resistance and epidemiological surveillance, genome capture coupled with WGS enables the study of TB transmission dynamics and resistance in countries where culture and drug susceptibility testing are not routinely performed. It also constitutes a potential valid backup approach for non-viable samples. In this regard, it would have particular interest for samples predicted (by rapid molecular tests) to contain multi-drug resistance strains, for which it would be important to determine not only a more complete set of resistant hits, but also the phylogenetic context.
Samples Description
Smear-positive sputum samples (n = 100) retrieved from pulmonary TB patients and sent to the NRL-TB of the Portuguese NIH for routine diagnostic purposes were tested (Figure S2). Samples were decontaminated using N-acetyl-L-cysteine/NaOH (1% NaOH final concentration) and resuspended after centrifugation in 2 mL phosphate buffer (pH 6.8). After inoculation for phenotypic testing, all the remaining sputum specimens were kept frozen at −20 °C until further use. The set was composed of samples with different positive AFB scores of 1+, 2+, and 3+ (visually quantified according to WHO guidelines) in order to test the target enrichment protocol with different M. tuberculosis smear grades.
Phenotypic Resistance Profiles
All isolates were phenotypically tested for susceptibility to first-line drugs rifampicin
DNA Extraction
After heat killing of the bacteria (95 °C for 1 h), high-quality DNA samples were prepared using the QIAamp DNA Mini Kit (Qiagen, Düsseldorf, Germany) according to the manufacturer's protocol.
Generation of Standard Curves for Real-Time Quantitative PCR (qPCR)
To quantify the number of M. tuberculosis genomes in each sample, a plasmid standard curve was generated as previously described for other pathogens [19,29,30]. Primers for the conserved M. tuberculosis single-copy gene katG were designed based on constant regions (primers KatG-A TTACCGCTGGGCGTGTTC and KatG-B TCACGAAGAAGTCGTTGGTCAGT, using Primer Express software v3.0; Applied Biosystems, Waltham, MA, USA), according to the sequence of the MTB H37Rv strain (Genbank #AL123456). An amplified fragment (58 bp) of katG was cloned into the pCR®2.1 vector using the TOPO TA technology (Invitrogen, Waltham, MA, USA) according to the manufacturer's instructions. After transformation of DH5α E. coli with the cloned vector and subsequent overnight propagation, plasmid DNA was purified and transformation was confirmed by PCR and sequencing. The plasmid copy number was calculated according to the formula: No. plasmids/mL = (Avogadro's number × plasmid concentration (g/mL)) / (MW of 1 mol of plasmid (g)). The standard curve consisted of eight serial plasmid dilutions (~1 to 1 × 10^8 plasmid copies/µL). The number of human cells/sample was quantified by a similarly generated plasmid standard curve using an amplified fragment (73 bp) of a single-copy human gene (β-actin) cloned in a similar vector, according to Gomes et al. 2006 [29] (primers β-actin-3 GGTGCATCTCTGCCTTACAGATC and β-actin-4 ACAGCCTGGATAGCAACGTACAT).
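As a numerical illustration of the copy-number formula above, the sketch below assumes the standard average molecular weight of 650 g/mol per base pair of double-stranded DNA and an approximate vector-plus-insert length; both figures are illustrative assumptions of ours, not values stated in the text.

```python
# Sketch of: copies/mL = (Avogadro's number x plasmid conc. (g/mL)) / MW of 1 mol of plasmid (g).
# The 650 g/mol-per-bp average for dsDNA and the ~3.9 kb vector size are our
# own illustrative assumptions.
AVOGADRO = 6.022e23      # molecules per mole
BP_MW = 650.0            # g/mol per base pair of dsDNA (assumed average)

def plasmid_copies_per_ml(conc_g_per_ml: float, length_bp: int) -> float:
    mw_plasmid = length_bp * BP_MW          # g/mol for the whole plasmid
    return AVOGADRO * conc_g_per_ml / mw_plasmid

# e.g., a ~3.9 kb vector carrying the 58 bp katG fragment, at 1 ng/uL (= 1e-6 g/mL)
print(f"{plasmid_copies_per_ml(1e-6, 3900 + 58):.2e} copies/mL")
```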
qPCR for Quantification of MTB vs. Human Cells
The real-time quantification was performed using the LightCycler® 480 SYBR Green chemistry and optical plates (Roche Diagnostics, Basel, Switzerland). The qPCR reagents consisted of 2× SYBR Green I Master Mix, 400 nM of each primer, and 5 µL of DNA sample in a final volume of 25 µL. The thermocycling profile was: 10 min at 95 °C followed by 40 cycles of 15 s at 95 °C and 1 min at 60 °C. Specificity was checked by generating the dissociation melting curves. Absolute quantification of bacterial and human genomes was calculated in relation to the respective plasmid standard curve. The relative load of M. tuberculosis cells in each sample was determined as the ratio between the number of katG and β-actin copies.
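The relative load defined above reduces to a simple ratio once absolute copy numbers have been interpolated from the two standard curves; a minimal sketch:

```python
# Relative M. tuberculosis load = katG copies / beta-actin copies, using the
# mean values reported in the Results section as an example.
def relative_mtb_load(katg_copies: float, beta_actin_copies: float) -> float:
    return katg_copies / beta_actin_copies

print(round(relative_mtb_load(3821.5, 146626.5), 4))  # ~0.0261
```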
DNA Capture Directly from Clinical Samples: SureSelect XT HS Target Enrichment
In order to capture the M. tuberculosis DNA directly from clinical samples, complementary RNA oligonucleotide "baits", 120 bp in size, were designed to span the ~4.5 Mb of the M. tuberculosis genome. As such, the reference genome sequence of the MTB H37Rv strain (Genbank #AL123456) was in silico fragmented into 120 bp sequences twice, to ensure an overlap of 60 bp between sequences. Due to their GC-rich content, which can interfere with DNA capture, all M. tuberculosis genes of the PE, PPE, and PE-PGRS families were also independently fragmented into 120 bp sequences in order to increase capture sensitivity. All resulting sequences were BLASTn searched against the Human Genomic + Transcript database to exclude sequences homologous to the human genome. Overall, a total of 42,278 RNA probes were generated, and this custom bait library was then uploaded to the SureDesign software (https://earray.chem.agilent.com/suredesign, accessed on 4 August 2016) and synthesized by Agilent Technologies (Santa Clara, CA, USA). During synthesis, the 2198 sequences complementary to the PE, PPE, and PE-PGRS families were unbalanced 8:1 to potentiate capture.
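The in silico fragmentation described above amounts to tiling 120 bp windows every 60 bp; the sketch below reproduces that tiling logic only (FASTA parsing and the BLASTn filtering against the human database are outside its scope).

```python
# Sketch of the bait tiling: 120 bp windows with 60 bp overlap, plus one bait
# anchored at the 3' end if the regular tiling leaves the tail uncovered.
def tile_baits(genome: str, bait_len: int = 120, step: int = 60):
    last_start = len(genome) - bait_len
    for start in range(0, last_start + 1, step):
        yield genome[start:start + bait_len]
    if last_start % step != 0:               # tail not reached by regular tiling
        yield genome[-bait_len:]

baits = list(tile_baits("ACGT" * 1200))      # toy 4.8 kb sequence
print(len(baits), len(baits[0]))             # 79 baits, each 120 bp
```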
For the libraries preparation, the SureSelect XT HS Target Enrichment System for Illumina Paired-End Multiplexed Sequencing Library (Agilent Technologies, Santa Clara, CA, USA) procedure was used (version E1, April 2021). The "Preparation of high-quality gDNA from fresh biological samples" instructions were followed in Step 1. to ensure high quality gDNA yield (see DNA extraction in section above). In order to calibrate the input to 10-200 ng in 7 µL, the gDNA samples were quantified using Qubit HS kit (Invitrogen, Life Technologies, USA), and subsequently fragmented using the Agilent's SureSelect XT HS Low input Enzymatic Fragmentation kit ("Method 2: Enzymatic DNA Fragmentation" option described in Step 2), according to manufacturer's instructions. Library preparation was then resumed at Step 3, "Repair and dA-Tail the DNA Ends" and carried out with no further alterations. Distribution of library fragments was evaluated using the Fragment Analyzer (Agilent, Santa Clara, CA, USA) and the PROSize 3.0 analysis software. Library concentration was obtained following smear analysis of fragment sizes. Pools of 16-indexed libraries were sequenced in the Illumina NextSeq 550 instrument (Illumina, San Diego, CA, USA) according to the manufacturer's instructions and the sequencing run recommendations of the SureSelect XT HS Target Enrichment System protocol (Agilent Technologies, Santa Clara, CA, USA).
In Silico Drug Resistance Prediction
For in silico drug resistance prediction and to minimize the issues related to genes with homology in different bacterial species (such as rrs), the trimmed reads were filtered using the seqtk v1.3-r106 tool (https://github.com/lh3/seqtk) and only the reads classified as Mycobacteriaceae family (or at a lower taxonomic level) by Kraken2 were kept. The filtered reads were then analysed using TB-profiler v4.2.0 with minimum depths of coverage of 10 and 5 for comparison.
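One way to implement the described filtering is to collect the IDs of reads that Kraken2 classified under accepted taxids and hand that list to seqtk subseq. The sketch below assumes Kraken2's standard per-read output format; building the accepted-taxid set (Mycobacteriaceae and all descendants) is assumed done upstream.

```python
# Sketch: keep only reads Kraken2 classified under accepted taxids.
# Kraken2 per-read output columns: C/U, read ID, taxid, length, LCA mappings.
def write_kept_read_ids(kraken_out: str, accepted_taxids: set, ids_path: str) -> int:
    kept = 0
    with open(kraken_out) as fin, open(ids_path, "w") as fout:
        for line in fin:
            status, read_id, taxid = line.split("\t")[:3]
            if status == "C" and taxid in accepted_taxids:
                fout.write(read_id + "\n")
                kept += 1
    return kept

# Downstream (shell): seqtk subseq reads_1.fastq kept_ids.txt > filtered_1.fastq
```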
SNP-Based Core-Genome Analysis
For the SNP-based analysis, the trimmed reads from all samples were mapped against the MTB H37Rv strain genome using Snippy v4.5.1 (https://github.com/tseemann/snippy) and the variant positions were validated with a minimum depth of coverage of 10-fold, minimum proportion of 70% of reads showing the alternative allele, a minimum mapping quality of 30, and a minimum base quality of 20. The core single nucleotide variants (SNVs) were extracted using the Snippy core module in Snippy, and only the samples with at least 90% of the reference genome covered and at least 10-fold depth of coverage were validated for the final core-SNV phylogeny. To minimize bias in the phylogeny, core-SNV falling within known M. tuberculosis genomic regions with high GC content or repetitive elements, as well as known SNVs in resistance-associated positions, were excluded (compiled by Kohl and colleagues, available at https://github.com/ngs-fzb/MTBseq_source/tree/master/ var/res, accessed on 31 July 2018) using the "mask" parameter of Snippy core. A core-SNV minimum spanning tree of the validated samples was generated using Grapetree v1.5.0 (https://github.com/achtman-lab/GrapeTree) [34].
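The per-position validation rule quoted above can be summarized as a simple predicate; the field names below are illustrative, not Snippy's internals.

```python
# Sketch of the variant-validation thresholds used for the core-SNV analysis:
# >=10x depth, >=70% reads supporting the alternative allele, mapping quality
# >=30, and base quality >=20.
def variant_passes(depth: int, alt_reads: int, map_q: float, base_q: float) -> bool:
    return (depth >= 10
            and alt_reads / depth >= 0.70
            and map_q >= 30
            and base_q >= 20)

print(variant_passes(depth=25, alt_reads=22, map_q=60, base_q=35))  # True
```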
Funding:
The acquisition of WGS-associated equipment used in this study (including the Illumina NextSeq 2000) was funded by the HERA project (Grant/2021/PHF/23776) supported by the European Commission through the European Centre for Disease Control and Prevention and partially funded by the GenomePT project (POCI-01-0145-FEDER-022184), supported by COMPETE 2020-Operational Programme for Competitiveness and Internationalisation (POCI), Lisboa Portugal Regional Operational Programme (Lisboa2020), Algarve Portugal Regional Operational Programme (CRESC Algarve2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF), and by Fundação para a Ciência e a Tecnologia (FCT).
Informed Consent Statement: Not applicable.
Data Availability Statement: Sequence data (only reads mapping against M. tuberculosis H37Rv reference genome) generated in this study were deposited in the European Nucleotide Archive under the Bioproject PRJEB59106. The set of RNA baits sequences used in the current Target Enrichment protocol are available at https://doi.org/10.5281/zenodo.7550025.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "ec0d79812193ee083dd7127f3b3dcbefd4d969fb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms24032912",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "966bfa141fe1ca932fb531a69c164ffeff45938e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
OPTIMAL DESIGN FOR DYNAMICAL MODELING OF PEST POPULATIONS
(Communicated by Jia Li)
Abstract. We apply SE-optimal design methodology to investigate optimal data collection procedures as a first step in investigating information content in ecoinformatics data sets. To illustrate ideas we use a simple phenomenological citrus red mite population model for pest dynamics. First the optimal sampling distributions for a varying number of data points are determined. We then analyze these optimal distributions by comparing the standard errors of parameter estimates corresponding to each distribution. This allows us to investigate how many data are required to have confidence in model parameter estimates in order to employ dynamical modeling to infer population dynamics. Our results suggest that a field researcher should collect at least 12 data points at the optimal times. Data collected according to this procedure along with dynamical modeling will allow us to estimate population dynamics from presence/absence-based data sets through the development of a scaling relationship. These Likert-type data sets are commonly collected by agricultural pest management consultants and are increasingly being used in ecoinformatics studies. By applying mathematical modeling with the relationship scale from the new data, we can then explore important integrated pest management questions using past and future presence/absence data sets.

"ecoinformatics". Ecoinformatics studies address ecological questions using observational, preexisting data rather than experimental, researcher-generated data and often combine data sets from several sources into a larger data set [18,19]. These sources can include farmers, pest management consultants (PMC), federal and state repositories, among others [18].
There are several weaknesses in experimental approaches that can be complemented by ecoinformatics. For example, due to cost limitations, experiments are often on a smaller scale, both spatially and temporally, while ecoinformatics approaches often reflect the scale of the farming that is being studied. The goals of IPM include improving crop yield for farmers; however, information drawn from experiments may only be relevant to a limited range of farming conditions. Ecoinformatics can include farmer participation from the start, and the farmers may be more confident in the recommendations that are generated from analyzing their own data [18,19]. Although there are several benefits of applying ecoinformatics methods to IPM research, an important potential weakness to address is the information content of the data, which affects the accuracy of the resulting conclusions. Ecoinformatics data sets are often heterogeneous due to the variety of sources and sampling methods. In addition, pest densities and other variables of interest can be measured qualitatively rather than quantitatively (e.g., "trace", "low", "moderate", and "high" densities in Likert-type [16] data sets as opposed to population counts). There is, of course, a trade-off; collecting qualitative data is much more time efficient but can significantly reduce the information content in the data. For further discussions on the strengths and weaknesses of experimental and ecoinformatics data sets, and for a review on ecoinformatics in the context of agricultural entomology, see [18,19,12] and the references therein.
Our own efforts with such Likert-type data sets arose in dealing with qualitative data sets such as those in [17]. In order to use mathematical models to detect trends in this type of data, population counts as well as corresponding Likert-type data are needed to establish a scale between these two types of data. This scale can then be applied to existing qualitative data sets. The optimal design for quantitative data collection (sampling strategies including how often and how much data to be collected) maximizes the accuracy in estimating population dynamics and is the major focus of this brief note.
The ability to perform this data conversion is an important step for determining the information content in farmer-generated ecoinformatics data sets. There are numerous methods to investigate quality (information content) of a data set, among them the use of dynamical models and related sensitivity as well as statistical uncertainty quantification tools. In this context, information content of a data set refers to the quality of the data with respect to accurate estimation of model parameters with an acceptable statistical confidence associated with these parameters. The parameters are first determined by solving an inverse problem such that the model solution best fits the data. A data set with high quality contains sufficient information to produce statistically accurate (such as acceptable confidence intervals or some other associated measure of uncertainty) parameter estimates. With accurate parameter estimates, a model solution can then realistically capture population trends, which can help, for example, investigate the minimum pesticide amount needed to reduce pest populations below an economic threshold. Examples of previous works using dynamical models to investigate information content in ecological data include [1,2,3,7].
To illustrate the assessment of the quality of large ecoinformatics data sets, here we consider a subset of the data from [17] from a single PMC who collected repeated measures of citrus red mite (CRM), Panonychus citri, densities over multiple time points (e.g., longitudinal data -a necessity for applying dynamical systems to data). CRMs are citrus pests that extract cell sap from leaves and fruit, which causes yield loss and stippling that can reduce the grade of the fruit [13]. CRM populations gradually increase over the spring and then sharply decline during the hot summer months [11]. We wish to capture this growth trend using dynamical modeling, as this will allow us to evaluate the information content in the data. Investigating CRMs is a research pest management priority, specifically with respect to secondary outbreaks and the relationship between pest densities and loss of fruit quality/quantity.
The PMC-generated data did not contain quantitative pest counts. Specifically, the subset considered here only provided the CRM infestation proportion, defined as the proportion of leaves sampled that contain at least one CRM. That is,

$$\text{infestation proportion} = \frac{\text{infestation finding}}{\text{infestation sample}},$$

where infestation sample is the number of sample units (leaves) checked, and infestation finding is the number of sample units infested with one or more CRMs. This sampling method provides no quantitative information as to how many CRMs are present on each infested leaf. Thus, an increasing infestation proportion over the spring months provides only indirect information as to the dynamics of the CRM population. Therefore, we are not able to use infestation proportion alone to analyze the seasonal trend in the data. Previous work reports relationships between infestation proportion and total population, which potentially could be used to convert our infestation proportion data to population counts. The authors in [14] develop a sampling plan to predict the total CRM population from the proportion of leaves infested with at least one adult female on the lower surface of a leaf. However, this relationship was developed based on lemon plants in Riverside and Ventura Counties in California. Thus, this relationship may not be applicable to our data, collected on oranges and mandarins in the San Joaquin Valley. Therefore, we aim to apply optimal design methodology to determine when and how often to collect count data from similar fields in order to develop a relationship between CRM count and infestation proportion data, similar to that in [14]. That is, in this paper we aim to answer the following questions:

1. For a set time period and a fixed number of data points, when should data be collected?
2. With optimized data collection time points, how many data points are needed?
Once data are collected according to the optimal design formulation, we can determine a relationship between CRM infestation proportion and total population. With this relationship, we can convert the infestation proportion data to population counts and apply our dynamical model to investigate the quality of the ecoinformatics data set as well as examine other pertinent IPM questions.
In Section 2 we introduce a simple CRM population model (primarily to illustrate ideas, since a more sophisticated validated population model is not available) as well as the statistical model used in our optimal design formulation. The framework for this SE-optimal design is then given in Section 3, with the implementation of the constrained optimization given in Section 4. Section 5 discusses computing standard errors (SE) using asymptotic theory and Monte Carlo simulations. The results are presented in Section 6, and conclusions are discussed in Section 7.
2. Dynamical modeling of CRM populations. Mathematical models are used to represent biological systems and to investigate hypotheses regarding the underlying biological processes. While a mechanistic model hypothesizes the relationships between biologically interpretable variables and parameters, a phenomenological model solely aims to capture qualitative trends in the desired dynamics. We present a simple phenomenological CRM population model, since here we only aim to apply the model to optimal design methodology rather than to hypothesize specific mechanisms of population growth and death. That is, we use a model that reproduces the general seasonal trends reported in seasonal curves [11]; hence the model is not based on specific growth/death mechanisms from a previously developed and validated model. The simple mathematical model we use is

$$\frac{dx}{dt} = g(t)\,x\left(1 - \frac{x}{K}\right) - d(t)\,x, \qquad x(0) = x_0, \tag{1}$$

with scalar observation process

$$f(t, \theta) = x(t; \theta), \tag{2}$$

with parameters $\theta = (a, b, c, K) \in \mathbb{R}^p$, $f \in \mathbb{R}^m = \mathbb{R}$, and where $x$ represents the number of CRMs. The CRM population is assumed to grow logistically with time-dependent intrinsic growth rate $g(t)$ and carrying capacity $K$. The CRM population death rate is also assumed to be time-dependent, given by $d(t)$. The tuning parameters $a$, $b$, and $c$ adjust the shape of the intrinsic growth and death rate curves in this phenomenological model and hence do not have specific mechanistic-based meaning. The simple functions $g(t)$ and $d(t)$ were chosen so that the CRM dynamics in a 7-month season (January through July) generally reflect those reported in the biological literature [11,12] with a minimal number of parameters. Other simple functions commonly used in modeling, such as polynomials, can depend on a larger number of parameters, which generally require more data to estimate. The model solution for the nominal parameters $\theta_0 = (a_0, b_0, c_0, K_0) = (0.12, 0.015, 0.025, 250)$ is given in Figure 2b and represents what studies suggest might be the dynamics of a typical infestation period [11,12]. In order to account for the uncertainty we would expect in observational data, we consider the following statistical error model:

$$Y(t_j) = f(t_j, \theta_0) + \mathcal{E}_j, \tag{3}$$

where $Y(t_j)$ is a random variable, $\theta_0$ is the nominal parameter vector, and the $\mathcal{E}_j$ are assumed to be independent and identically distributed with mean 0 and variance $\sigma_0^2$. A realization of the statistical error model is given by

$$y_j = f(t_j, \theta_0) + \epsilon_j, \tag{4}$$

where $\epsilon_j$ is a specific realization of the random variable $\mathcal{E}_j$.
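For readers who wish to experiment with the design machinery developed below, a minimal Python sketch of a model of this form follows. The logistic-with-death structure and the nominal parameter values come from the text above; the specific shapes used for g(t) and d(t) are illustrative placeholders only, since the paper's exact parameterizations are not reproduced here. The fixed-step RK4 integrator deliberately tolerates complex-valued parameters so that the complex-step sensitivity computation of Section 5.1 can reuse it.

```python
import numpy as np

def growth_rate(t, a, b):
    # Placeholder shape: early-season growth that fades over time.
    return a * np.exp(-b * t)

def death_rate(t, c):
    # Placeholder shape: mortality rising toward the hot summer months.
    return c * t / (1.0 + 0.01 * t)

def rhs(t, x, theta):
    a, b, c, K = theta
    return growth_rate(t, a, b) * x * (1.0 - x / K) - death_rate(t, c) * x

def solve_model(theta, times, x0=1.0, dt=0.25):
    """Fixed-step RK4 solution of the scalar CRM model at the requested times.

    Accepts complex-valued theta, which complex-step differentiation relies on.
    """
    theta = np.asarray(theta)
    t_end = float(np.max(times))
    n_steps = int(np.ceil(t_end / dt)) + 1
    t_grid = np.linspace(0.0, t_end, n_steps)
    h = t_grid[1] - t_grid[0]
    x = np.empty(n_steps, dtype=complex if np.iscomplexobj(theta) else float)
    x[0] = x0
    for i in range(n_steps - 1):
        t, xi = t_grid[i], x[i]
        k1 = rhs(t, xi, theta)
        k2 = rhs(t + h / 2, xi + h / 2 * k1, theta)
        k3 = rhs(t + h / 2, xi + h / 2 * k2, theta)
        k4 = rhs(t + h, xi + h * k3, theta)
        x[i + 1] = xi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.interp(times, t_grid, x)

theta0 = (0.12, 0.015, 0.025, 250.0)   # nominal parameters from the text
season = np.arange(0.0, 211.0)         # January (day 0) through July (day 210)
trajectory = solve_model(theta0, season)
```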
3. SE-optimal design formulation. We aim to determine the sampling times of experiments in order to maximize the information content in the data collected. In order to explain the optimal design methodology, we begin by giving an intuitive explanation of information content (Subsection 3.1). With this, we then provide the motivation for the specific type of optimal design implemented here (Subsection 3.2).
3.1. Information content. In this context, information content refers to the quality of the data with regard to estimating model parameters. That is, data with high information content allow us to accurately estimate parameters as well as to attach high degrees of statistical confidence to these parameters. With this, one can hope to infer valuable information about the actual population trends. We first discuss the motivation behind the SE-optimal design formulation, that is, how areas of high information content are determined. For intuition, let us consider the effect that the parameters have on the model solution (i.e., the sensitivity of the model solution with respect to the parameters over time). Figures 1a-1d depict these sensitivities. From this, one can see that the sensitivity of the model solution to a given parameter varies over time. Data taken at times where the solution is more sensitive to a given parameter correspond to more accurate estimation of that parameter. For instance, consider the sensitivity with respect to the carrying capacity, K, given in Figure 1d. One can see that as the season progresses, the sensitivity of the solution with respect to K steadily increases until it reaches its maximum towards the end of the season. This makes sense, as one expects to have less information about the carrying capacity of the population early on, but to attain the most information about the carrying capacity when the population reaches its maximum (around day 150). After this peak, there is little additional information gained, which corresponds to the decrease in sensitivity observed at these later times.
Among possible optimal design formulations (D-optimal, E-optimal, SE-optimal, etc. [5,6,8]), it is common to base the design criterion on the Fisher Information Matrix (FIM), as this indirectly includes information about the sensitivities. Section 3.2 discusses the FIM as well as the specific criterion for the SE-optimal design formulation used here.
3.2. Optimal design criterion. Although sensitivities play a role in determining information content, the individual sensitivities do not solely determine the optimal sampling times. Rather, the criterion takes into account a combination of the effects of the sensitivities through the FIM. A derivation following [5,6] is given next that explains how minimizing a criterion dependent on the FIM determines the optimal sampling times.
Given data corresponding to a distribution of sampling times, P(t), one often evaluates how accurately a model solution fits these data via a weighted least squares cost functional. For instance, consider the error functional

$$J(\theta; P) = \int_0^T \frac{\left[y(t) - f(t, \theta)\right]^2}{\sigma^2(t)}\, dP(t). \tag{5}$$

Note that this error functional represents a more general case where the variance in the data can change over time (although in our problem the variance is assumed constant). The lower the value of J, the more closely the model fits the data. Our question is: what distribution of sampling times can produce the smallest J value? Since the goal is to determine the optimal sampling times prior to data collection, we wish to use (5) to develop a minimization criterion that is based on the mathematical model and is independent of the data. Recall that the statistical model is of the form

$$Y(t) = f(t, \theta_0) + \mathcal{E}(t). \tag{6}$$

Then, expanding $f(t, \theta)$ about the nominal parameter set $\theta_0$ using a Taylor series, we obtain

$$f(t, \theta) \approx f(t, \theta_0) + \nabla_\theta f(t, \theta_0)\,(\theta - \theta_0), \tag{7}$$

where $\nabla_\theta$ is given by $[\partial_{\theta_1}, \ldots, \partial_{\theta_p}]$. Note that $\nabla_\theta f$ is a $1 \times p$ matrix, which gives the sensitivity of the solution with respect to the parameters. Now let us substitute (6) and (7) into the functional given in (5), resulting in the modified functional

$$\tilde{J}(\theta; P) = \int_0^T \frac{\left[\mathcal{E}(t) - \nabla_\theta f(t, \theta_0)(\theta - \theta_0)\right]^2}{\sigma^2(t)}\, dP(t), \tag{8}$$

where $J \approx \tilde{J}$ in a neighborhood of $\theta_0$. Note that

$$\nabla_\theta \tilde{J}(\theta; P) = -2 \int_0^T \frac{\nabla_\theta f(t, \theta_0)^T \left[\mathcal{E}(t) - \nabla_\theta f(t, \theta_0)(\theta - \theta_0)\right]}{\sigma^2(t)}\, dP(t). \tag{9}$$

We observe that a minimum argument $\hat{\theta}$ of the cost functional (8) (tacitly assumed to occur in the interior of the set of possible values) implies that $\nabla_\theta \tilde{J}(\hat{\theta}; P) = 0$, or equivalently

$$\left[\int_0^T \frac{\nabla_\theta f(t, \theta_0)^T \nabla_\theta f(t, \theta_0)}{\sigma^2(t)}\, dP(t)\right](\hat{\theta} - \theta_0) = \int_0^T \frac{\nabla_\theta f(t, \theta_0)^T \mathcal{E}(t)}{\sigma^2(t)}\, dP(t). \tag{10}$$

We see that this equation contains the Generalized Fisher Information Matrix (GFIM), defined by

$$F(P, \theta_0) = \int_0^T \frac{1}{\sigma^2(t)}\, \nabla_\theta f(t, \theta_0)^T \nabla_\theta f(t, \theta_0)\, dP(t). \tag{11}$$

Since our optimal mesh is considered to be a discrete set of time points, we can now introduce a discretization of the sampling distribution P(t). Without loss of generality we can consider these distributions as probability measures on [0, T], where the set of all such measures is denoted $\mathcal{P}(0, T)$. Suppose for points $\tau = \{t_i\}_{i=1}^N$ we take

$$P_\tau(t) = \frac{1}{N} \sum_{i=1}^N \Delta_{t_i}(t), \tag{12}$$

where $\Delta_{t_i}$ is the Heaviside function (with the derivative being the Dirac delta function) with atom at $\{t_i\}$ (see the Appendix). That is,

$$\Delta_{t_i}(t) = \begin{cases} 0, & t < t_i, \\ 1, & t \ge t_i. \end{cases} \tag{13}$$

Considering the measure $P_\tau$ given above, we have the discrete version of (10) given by

$$\left[\frac{1}{N} \sum_{i=1}^N \frac{\nabla_\theta f(t_i, \theta_0)^T \nabla_\theta f(t_i, \theta_0)}{\sigma^2(t_i)}\right](\hat{\theta} - \theta_0) = \frac{1}{N} \sum_{i=1}^N \frac{\nabla_\theta f(t_i, \theta_0)^T \epsilon_i}{\sigma^2(t_i)}. \tag{14}$$

We observe that this contains the discrete form of the Fisher Information Matrix, given by

$$F(\tau, \theta_0) = \frac{1}{N} \sum_{i=1}^N \frac{1}{\sigma^2(t_i)}\, \nabla_\theta f(t_i, \theta_0)^T \nabla_\theta f(t_i, \theta_0), \tag{15}$$

which is tacitly assumed to be of full rank. Now consider that we want $\hat{\theta}$ to be as similar to $\theta_0$ as possible and solve for $(\hat{\theta} - \theta_0)$ in (14):

$$\hat{\theta} - \theta_0 = F(\tau, \theta_0)^{-1}\, b, \qquad b = \frac{1}{N} \sum_{i=1}^N \frac{\nabla_\theta f(t_i, \theta_0)^T \epsilon_i}{\sigma^2(t_i)}. \tag{16}$$

We see that b contains the observational random error terms $\epsilon_i$, on which we would not want to base our design. From (16) one can see why a minimization criterion for the optimal design formulation is based on $F^{-1}$. Now let us recall the optimal design problem. That is, we wish to determine the optimal $\hat{P}_\tau$ such that, for $\mathcal{J} : \mathbb{R}^{p \times p} \to \mathbb{R}^+$,

$$\hat{P}_\tau = \arg\min_{P \in \mathcal{P}(0, T)} \mathcal{J}\big(F(P, \theta_0)\big). \tag{17}$$

Specifically, for SE-optimal design, $\mathcal{J}_{SE}$ is given by

$$\mathcal{J}_{SE}(F) = \sum_{k=1}^p \frac{\left(F^{-1}(P, \theta_0)\right)_{kk}}{\theta_{0,k}^2}. \tag{18}$$

Minimizing this cost functional corresponds to minimizing the sum of the squared normalized standard errors, where standard errors are used to calculate confidence intervals for parameter estimates (see Section 5.1).
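For constant variance, the discrete criterion reduces to a few lines of code. A minimal sketch follows, with chi denoting the N x p sensitivity matrix evaluated on a candidate mesh; constant factors such as 1/N or the variance rescale J_SE without changing which mesh minimizes it for fixed N.

```python
import numpy as np

def fisher_information(chi, sigma2=1.0):
    """Discrete FIM for constant variance: F = chi^T chi / sigma^2."""
    return chi.T @ chi / sigma2

def j_se(F, theta0):
    """SE-optimal criterion: the sum of squared normalized standard errors,
    J_SE(F) = sum_k (F^{-1})_{kk} / theta0_k^2."""
    F_inv = np.linalg.inv(F)
    theta0 = np.asarray(theta0, dtype=float)
    return float(np.sum(np.diag(F_inv) / theta0**2))
```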
4. Constrained optimization and SE design implementation.
The SE-optimal design computational method utilizes a constrained optimization to determine the optimal mesh of time points,

$$\tau^* = \arg\min_{\tau \in \mathcal{T}} \mathcal{J}_{SE}\big(F(\tau, \theta_0)\big), \tag{19}$$

where $\mathcal{T}$ is the set of all time meshes such that $0 < t_2 < \cdots < t_{N-1} < T$. The algorithm used to implement this constrained optimization is MATLAB's fmincon.
Since the optimal mesh should contain 0 and T, which are assumed to be known, we optimize over N - 2 parameters. To enforce the linear time mesh constraint in fmincon, we use the linear system

$$A\,t \le b, \tag{20}$$

where A is an (N - 1) x (N - 2) matrix, t is an (N - 2) x 1 time vector, and b is an (N - 1) x 1 vector. For this implementation, (20) has the following form:

$$\begin{bmatrix} -1 & 0 & \cdots & 0 \\ 1 & -1 & & \vdots \\ & \ddots & \ddots & \\ \vdots & & 1 & -1 \\ 0 & \cdots & 0 & 1 \end{bmatrix} \begin{bmatrix} t_2 \\ t_3 \\ \vdots \\ t_{N-1} \end{bmatrix} \le \begin{bmatrix} -1 \\ -1 \\ \vdots \\ -1 \\ T-1 \end{bmatrix}.$$

This constraint forces the first optimized mesh point to be greater than or equal to 1, the final optimized mesh point to be less than or equal to T - 1, and all interior mesh points to be at least one day apart (since this is reasonable in the field). Furthermore, note that although we are dealing with discrete days, we do not force this in the optimization. Once the optimal mesh is determined, we round to the nearest whole number. This seems reasonable in practice since we are not concerned with what time of day sampling occurs. For this experiment, we consider grids with N = 6, 12, 18, 24, and 30. This corresponds to sampling 6 times in the sampling season (January through July) and considers how doubling the number of samples improves our ability to estimate parameters accurately. Since uniform sampling may be more feasible in practice, we compare the standard errors corresponding to the optimal grids to those of uniform grids. Figure 2a depicts the distribution of sampling times for the optimized grids. Figure 2b shows these distributions for N = 6 and N = 12 along the solution curve. We note that the optimized time meshes cluster in areas of high information content, based on the cost functional in (18). To provide intuition as to why this occurs, we plot in Figure 3a the cost value (J) for the optimal mesh, the uniform mesh, and two hypothetical meshes for N = 12. The hypothetical meshes were designed to have clustering similar but not identical to the optimal mesh. As expected, the optimized mesh produces the smallest cost value, while Mesh 2 (the most similar to the optimal mesh) has the second lowest cost. We observe that the uniform mesh has the largest cost. In the context of inverse problems, it is advantageous to have multiple samples in time periods with high information content.

5. Standard error methodology. We first implement the constrained optimization scheme using the SE design formulation to determine the optimal distribution of sampling points $\tau^* = \{t_j^*\}_{j=1}^N$ for fixed values of N. We then generate simulated data corresponding to these optimal meshes as well as to uniform meshes and compare standard errors for the different sampling distributions. The following section describes the method for computing asymptotic standard errors for scalar models such as the one given in model (1).
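A minimal Python stand-in for this fmincon step is sketched below; j_se_of_mesh is an assumed callable that composes the sensitivity, FIM, and J_SE helpers sketched earlier, and the dict-style inequality constraints encode A t <= b from (20).

```python
import numpy as np
from scipy.optimize import minimize

def build_linear_constraints(N, T):
    """A t <= b over the N-2 interior points (t_1 = 0, t_N = T are fixed):
    t_2 >= 1, consecutive points at least 1 day apart, t_{N-1} <= T - 1."""
    m = N - 2
    A = np.zeros((N - 1, m))
    b = np.zeros(N - 1)
    A[0, 0], b[0] = -1.0, -1.0                         # -t_2 <= -1
    for i in range(1, m):
        A[i, i - 1], A[i, i], b[i] = 1.0, -1.0, -1.0   # t_i - t_{i+1} <= -1
    A[-1, -1], b[-1] = 1.0, T - 1.0                    # t_{N-1} <= T - 1
    return A, b

def optimal_mesh(N, T, theta0, j_se_of_mesh):
    A, b = build_linear_constraints(N, T)
    interior0 = np.linspace(0.0, T, N)[1:-1]           # start from the uniform grid

    def cost(interior):
        mesh = np.concatenate(([0.0], interior, [T]))
        return j_se_of_mesh(mesh, theta0)

    res = minimize(cost, interior0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda t: b - A @ t}])
    # Round to whole days afterwards, as in the paper.
    return np.round(np.concatenate(([0.0], res.x, [T])))
```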
5.1. Asymptotic theory for computing standard errors. Consistent with the statistical error model given in equation (4), we estimate our parameters by solving an inverse problem with an ordinary least squares (OLS) formulation, following [9,10]. The OLS estimator is given by

$$\theta_{OLS} = \arg\min_\theta \sum_{j=1}^N \left[Y_j - f(t_j, \theta)\right]^2,$$

which is estimated by

$$\hat{\theta} = \arg\min_\theta \sum_{j=1}^N \left[y_j - f(t_j, \theta)\right]^2.$$

Since the dependence of our estimate on the OLS formulation is understood, the OLS subscript notation will be dropped. Next, we compute the sensitivity matrix

$$\chi_{jk}(\theta) = \frac{\partial f(t_j, \theta)}{\partial \theta_k},$$

which is done using the complex step method [4]. That is,

$$\frac{\partial f(t_j, \theta)}{\partial \theta_k} \approx \frac{\mathrm{Im}\, f(t_j, \theta + i h e_k)}{h},$$

where h is the size of the perturbation, $e_k$ is the k-th unit vector in $\mathbb{R}^p$, and i is the imaginary unit. Note that $\chi = \chi^N$ is an N x p matrix. The true, constant variance is given by

$$\sigma_0^2 = \mathrm{Var}(\mathcal{E}_j).$$

We can estimate this variance by

$$\hat{\sigma}^2 = \frac{1}{N - p} \sum_{j=1}^N \left[y_j - f(t_j, \hat{\theta})\right]^2.$$

The true covariance matrix is approximately given by

$$\Sigma_0^N \approx \sigma_0^2 \left[\chi^T(\theta_0)\, \chi(\theta_0)\right]^{-1},$$

and the true Fisher Information Matrix (FIM) is given by

$$F(\theta_0) = \frac{1}{\sigma_0^2}\, \chi^T(\theta_0)\, \chi(\theta_0).$$

When $\theta_0$ and $\sigma_0^2$ are unknown, the covariance matrix is estimated by

$$\hat{\Sigma}^N(\hat{\theta}) = \hat{\sigma}^2 \left[\chi^T(\hat{\theta})\, \chi(\hat{\theta})\right]^{-1},$$

for which the corresponding estimate of the FIM is

$$\hat{F}(\hat{\theta}) = \frac{1}{\hat{\sigma}^2}\, \chi^T(\hat{\theta})\, \chi(\hat{\theta}).$$

Then, the asymptotic standard errors are given by

$$SE_k(\theta_0) = \sqrt{\left(\Sigma_0^N\right)_{kk}}, \qquad k = 1, \ldots, p,$$

which are estimated by

$$SE_k(\hat{\theta}) = \sqrt{\left(\hat{\Sigma}^N(\hat{\theta})\right)_{kk}}, \qquad k = 1, \ldots, p.$$
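A sketch of the two computational pieces of this subsection, assuming a model function f(theta, times) that tolerates complex parameters (such as the RK4 solver sketched in Section 2); the 1/(N - p) variance estimator shown is one common bias-corrected convention.

```python
import numpy as np

def sensitivity_matrix(f, times, theta, h=1e-20):
    """Complex-step sensitivities: chi[j, k] = d f(t_j, theta) / d theta_k,
    computed as Im f(theta + i*h*e_k) / h. There is no subtractive
    cancellation, so h can be taken extremely small."""
    theta = np.asarray(theta, dtype=float)
    chi = np.empty((len(times), theta.size))
    for k in range(theta.size):
        perturbed = theta.astype(complex)
        perturbed[k] += 1j * h
        chi[:, k] = np.imag(f(perturbed, times)) / h
    return chi

def asymptotic_ses(f, times, data, theta_hat):
    """Estimated variance, covariance matrix, and standard errors at theta_hat."""
    N, p = len(times), len(theta_hat)
    residuals = np.asarray(data) - f(np.asarray(theta_hat, dtype=float), times)
    sigma2_hat = residuals @ residuals / (N - p)        # bias-corrected variance
    chi = sensitivity_matrix(f, times, theta_hat)
    cov_hat = sigma2_hat * np.linalg.inv(chi.T @ chi)   # estimated covariance
    return np.sqrt(np.diag(cov_hat)), cov_hat, sigma2_hat
```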
The confidence interval for parameter estimate $\hat{\theta}_k$ with a confidence level of 100(1 - $\alpha$)% is given by

$$\left[\hat{\theta}_k - t_{1-\alpha/2}\, SE_k(\hat{\theta}),\ \hat{\theta}_k + t_{1-\alpha/2}\, SE_k(\hat{\theta})\right],$$

where $\alpha \in [0, 1]$ and $t_{1-\alpha/2}$ is computed from the Student's t distribution with N - p degrees of freedom.
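In code, the interval takes one line via scipy's Student's t quantile:

```python
from scipy.stats import t as student_t

def confidence_interval(theta_k_hat, se_k, N, p, alpha=0.05):
    """100(1 - alpha)% interval: theta_k_hat +/- t_{1-alpha/2, N-p} * SE_k."""
    q = student_t.ppf(1.0 - alpha / 2.0, df=N - p)
    return theta_k_hat - q * se_k, theta_k_hat + q * se_k
```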
5.2. Monte Carlo methods for asymptotic standard errors. Monte Carlo (MC) trials can be used to examine the average asymptotic behavior of the standard errors. This accounts for the variability in residual errors in simulated data sets (as we have indicated earlier, no experimental quantitative data sets are available to test our results). For each Monte Carlo trial, data are simulated as

$$y_j = f(t_j^*, \theta_0) + \epsilon_j, \qquad j = 1, \ldots, N,$$

where $\theta_0$ is the nominal parameter set, N corresponds to the number of time points in the optimal mesh $\{t_j^*\}_{j=1}^N$, and $\epsilon_j$ is a realization of $\mathcal{E}_j \sim N(0, \sigma_0^2)$ for $\sigma_0 = 20$. For each trial, parameters are estimated and standard errors calculated using the OLS procedure described in Section 5.1. The average standard errors and parameter estimates are calculated over 1000 Monte Carlo trials. This provides the average performance of each optimal grid over 1000 noisy data sets.

6. Results. In Figure 4, the average standard errors are given for each parameter over 1000 Monte Carlo trials for both the optimized and uniform time meshes corresponding to N = 6, 12, 18, 24, and 30. Observe that for each N, the standard errors for the optimized grids are lower than those of the uniform grids, which is expected. Also note that as N increases, the standard errors for both the optimized and uniform grids decrease. It should be noted that although the optimized grids consistently perform better than the uniform grids, the standard errors for both might be considered acceptable, as they are all at least one order of magnitude smaller than their corresponding parameter values.
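The averaged standard errors discussed here can be reproduced with a Monte Carlo loop of the following shape, reusing the helpers sketched above; starting each fit from theta0 is a simplification for the sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def monte_carlo_average_ses(f, mesh, theta0, sigma0=20.0, trials=1000, seed=0):
    """Average asymptotic SEs over `trials` simulated data sets on `mesh`."""
    rng = np.random.default_rng(seed)
    theta0 = np.asarray(theta0, dtype=float)
    truth = f(theta0, mesh)
    all_ses = np.empty((trials, theta0.size))
    for m in range(trials):
        data = truth + rng.normal(0.0, sigma0, size=len(mesh))
        fit = least_squares(lambda th: f(th, mesh) - data, x0=theta0)
        ses, _, _ = asymptotic_ses(f, mesh, data, fit.x)
        all_ses[m] = ses
    return all_ses.mean(axis=0)
```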
In Figure 5, 95% confidence intervals are given using the average standard errors for each parameter corresponding to the optimal grids for N = 6, 12, 18, 24, and 30. The average parameter estimate is given by the dot at the center of each interval and is close to the nominal value. We observe that as N increases, the confidence intervals for each parameter become narrower, with the most substantial decrease in interval width occurring between N = 6 and N = 12. This suggests that as the number of mesh points on the optimal grid increases, we are able to estimate the parameters with increasing accuracy.
From Figures 4 and 5 we see that data collected according to the optimal grid design provide acceptable standard errors, which allow us to be confident in the parameter estimates. However, we see that there is not a substantial improvement in standard errors and confidence intervals for N > 12. This diminishing improvement as N increases is reasonable, since sampling times cluster around areas of high information content, leading to a limiting effect.

7. Discussion. We have determined an optimal design with regard to when observational CRM data should be collected. This optimal design criterion ensures that data are collected in such a way that parameters can be estimated more confidently. Population count data collected according to the optimal grids would permit the use of dynamical modeling to infer CRM population sizes over a growing season. More importantly, with simultaneously collected corresponding proportional data (collected at the same time and with regard to the same sample unit), a scaling relationship between the population size in counts and corresponding proportional data could be estimated. This could allow us to make use of current and future farmer-generated data sets consisting of only proportional data to develop and validate a suite of mechanistic mathematical models for use in investigating pest population dynamics using the broad ecoinformatics datasets. We first addressed the question: given a fixed number of data collection points, when are the optimal times to collect data? To do this, we used the SE-optimal design framework for fixed N = 6, 12, 18, 24, and 30 to obtain the optimal sampling grid. We observed that the optimal sampling time points tend to aggregate in areas of high information content, resulting in clustered time points. In addition, this clustering could be beneficial when collecting the data; a field researcher would only need to collect data for intermittent time periods compared to uniformly throughout the entire growing season. The next question we considered is: given these optimal meshes, how much data should a field researcher collect? We analyzed the performance of these meshes by comparing the standard errors of parameter estimates corresponding to each grid. The parameters were estimated using OLS methodology and MC simulations. As expected, a higher number of data points coincides with lower standard errors, with limiting improvement. In addition, the optimized grid performs better than uniform grids of the same size. We felt this was an important comparison, as uniform sampling is often the procedure for research data collection in the field.
In order to further determine how much data are adequate for dynamical modeling, we calculated confidence intervals for the estimated parameters. It is clearly seen that there is no significant decrease in confidence interval width for N > 12.
Since there are reasonable standard errors for all N examined, dynamical modeling could be beneficial with as few as 6 data points at optimal times. However, we recommend a minimum of 12 data points due to the significant decreases in the size of confidence intervals between N = 6 and N = 12. The days at which these samples should be taken are given as [0 33 34 35 88 89 140 141 177 178 179 210], where day 0 corresponds to January 1st, and day 210 corresponds to approximately July 31st in the examples considered here.
Answering the optimal sampling distribution questions (when and how much data to collect) is dependent, of course, upon the mathematical model chosen to represent the population dynamics. For example, it might be expected that growth/death rates may depend on density of the pests and hence a corresponding model (even a phenomenological one such as (1)) would require density dependent coefficients. Also, our phenomenological model solution represents only typical dynamics observed in a single growing season. To account for more realistic, time-varying, biological factors such as weather, predator-prey interactions, etc., a more mechanistic model would need to be developed and validated. Thus, we emphasize the importance of interdisciplinary collaboration to pursue all aspects of the efforts represented here.
Being able to infer population level dynamic information from proportional data collected by farmers would allow us to investigate important questions relating to ecoinformatics. Presence/absence sampling is more time efficient compared to counting individuals, which enables the collection of a larger volume of data (both spatially and temporally). This facilitates more timely pest management decisions. Once a scaling relationship between count and proportional data is estimated, large proportional data sets in combination with mathematical modeling can be used to investigate problems such as the minimal number of pesticide treatments needed while not reducing crop yield. In addition, a better understanding of crop vulnerability to pest damage over time could help define a window of crop sensitivity in the growing season. Furthermore, we could investigate the impact of pests on mandarin varieties, which make up a rapidly growing part of citrus production in the San Joaquin Valley, CA. To date, there have been few formal investigations into this impact, making it a meaningful problem to pursue in interdisciplinary efforts.
Appendix. In Section 3.2 the notion of a cost functional dependent on a distribution, P(t), is introduced in equation (5). We then discuss the Generalized Fisher Information Matrix, where a discretization of the distribution provides us with the discrete Fisher Information Matrix. In this appendix we provide the mathematical details for using the discretization of a distribution ($P_\tau$) via Heaviside functions to go from equation (11) to equation (15) (the GFIM to the FIM).
We begin by introducing the Heaviside function with atom at $\{t_i\}$ (Figure 6a), defined as

$$\Delta_{t_i}(t) = \begin{cases} 0, & t < t_i, \\ 1, & t \ge t_i, \end{cases}$$

with derivative given by the Dirac delta "function" (Figure 6b):

$$\frac{d}{dt}\Delta_{t_i}(t) = \delta_{t_i}(t).$$

Consider points $\tau = \{t_i\}_{i=1}^N \in [0, T]$, and define

$$P_\tau(t) = \frac{1}{N} \sum_{i=1}^N \Delta_{t_i}(t),$$

which is plotted in Figure 6c. The derivative of $P_\tau(t)$ (Figure 6d) is given by

$$\frac{dP_\tau}{dt}(t) = \frac{1}{N} \sum_{i=1}^N \delta_{t_i}(t).$$

Consider the following for some function f(t) and the discretized distribution of sampling times $P_\tau(t)$:

$$\int_0^T f(t)\, dP_\tau(t) = \frac{1}{N} \sum_{i=1}^N f(t_i).$$

With this, one can see that, beginning with the GFIM and introducing a distribution discretized as above, we have

$$F(P_\tau, \theta_0) = \int_0^T \frac{\nabla_\theta f(t, \theta_0)^T \nabla_\theta f(t, \theta_0)}{\sigma^2(t)}\, dP_\tau(t) = \frac{1}{N} \sum_{i=1}^N \frac{\nabla_\theta f(t_i, \theta_0)^T \nabla_\theta f(t_i, \theta_0)}{\sigma^2(t_i)} = F(\tau, \theta_0).$$
| 2018-11-15T08:55:05.777Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "9ab5ccdeb78f824bbcbe56c2e9be4628acac6797",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=42805be7-8668-4080-8686-6f38034f3bf2",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "45034b644d3967ad133bfe6d426034b462c4ec65",
"s2fieldsofstudy": [
"Computer Science",
"Environmental Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
243895716 | pes2o/s2orc | v3-fos-license | The Brown Marmorated Stink Bug ( Halyomorpha halys Stål.) Influences Pungent and Non-Pungent Capsicum Cultivars’ Pre- and Post-Harvest Quality
Halyomorpha halys is an important invasive pest that causes severe damage to fruits and vegetables. Peppers are susceptible to infestation by H. halys, resulting in yield losses. Plants respond to the insect infestation with a metabolic response. With this study, we attempted to determine the intensity of the metabolic response of infested peppers, how pungent and non-pungent peppers react to the infestation, and how H. halys affects the post-harvest quality of both cultivars. The shelf life of the infested peppers did not change compared to the control treatments. We observed a drastic decrease in metabolite levels after storage in all three treatments in both cultivars, especially capsaicinoids, with an approximate decrease of 30% in the pericarp and 95% in the placenta of the pungent 'Eris F1'. In some cases, the accumulation of metabolites was not limited to the fruit exposed to the H. halys infestation, but extended to the entire plant. We observed a 15-fold increase in capsaicinoid content in the infested fruits of cultivar 'Eris F1' and a 4-fold increase in the pericarp of cultivar 'Lombardo tago', which could lead to a possible further study on the defensive function of capsaicinoids and their use against H. halys.
Introduction
Peppers (Capsicum sp. L.), both pungent and non-pungent cultivars, are widely cultivated around the world [1]. They are cultivated for their versatile use in the culinary industry and for their benefits to human health, having a high vitamin C content and a low sucrose content, and the pungent cultivars for their capsaicinoid content [2,3]. There are 25 species in the genus Capsicum, of which five are widely cultivated; these are susceptible to pests and diseases [4]. Although pepper pests change with the location of production (geography, climate), there are several pests of peppers, some of the more important being thrips (Thysanoptera), aphids (Aphis sp.), the greenhouse whitefly (Trialeurodes vaporariorum L.), and the tobacco whitefly (Bemisia tabaci L.), which all cause severe damage to pepper plants and are a major problem for farmers across the world [5][6][7].
The brown marmorated stink bug (Halyomorpha halys Stål.) has recently become a major problem. With its large number of host plants, from ornamental plants to fruit and vegetable plants, it causes serious damage to agriculture in Europe and North America [8]. It damages fruit such as apples and blueberries, making them inedible for humans [9]. Several factors have helped this pest to spread so successfully across the world, including climate change, a lack of natural predators, and global trade [10]. The plant-host system, in our case the H. halys-pepper system, was previously studied by Mensah-Bonsu, et al. [11], who reported that the color of the peppers was not the main reason why H. halys infested peppers; rather, a mix of chemical and visual preferences was responsible. Red bell peppers contain carotenoids, which play important roles in insects: they are involved in providing coloration to various portions of their bodies and eggs, mating signals, vision, and diapause, and they serve as antioxidants [11,12]. Common damage signs on sweet bell peppers are pale-yellow cloudy areas on the fruit surface associated with whitish, spongy tissue beneath the affected area [13,14].
Damage to fruit caused by pests or diseases can decrease shelf life and can cause economic damage [15], which results in less food for human consumption. The post-harvest quality of peppers is optimally maintained with low temperatures and high humidity [16]. The optimal storage temperature of peppers is between 7 °C and 10 °C, as reported by Hameed, et al. [17] and Lama, et al. [18]. Peppers can be stored for up to 28 days in a cool storage room with sufficient humidity, from 90% to 95% [16]. H. halys may decrease storage time, since it pierces the fruit, causing small wounds that can lead to fungal infections [19]. A natural response of plants is to synthesize defensive substances such as phenolics, which act as repellents or deterrents, or are toxic to the pests [20]. Non-pungent and pungent peppers synthesize phenolic and capsaicinoid compounds, which can act as repellents for insects, affecting their reproductive cycle or interfering with feeding [21,22]. Among phenolics, flavones are one of the common defense molecules against insects [23]. To be classified as non-pungent, a pepper should contain from 0 to 0.045 g kg−1 fresh weight of capsaicinoids [24,25].
Here, we analyzed the damage of H. halys to peppers and their chemical response in terms of primary and secondary metabolites in the pericarp and placenta. We tried to answer the following questions. Do different fruit tissues react differently to biotic stress? Does H. halys have an effect on the post-harvest quality and shelf-life of pungent and non-pungent peppers, since the damage can be unseen by the farmers, resulting in yield loss occurring after storage? Does capsaicinoid content increase in biotic stressed pepper plants, which might indicate a potential defense function and could be further investigated in upcoming studies? Since there are only a few reports on H. halys damage to fruit and vegetables in terms of metabolites and no studies on their impact on the post-harvest quality of fruit and vegetables, our study fills a major void in this area of science and could clarify our view on this major and difficult-to-control pest.
Experiment Design
The experiment was carried out in Voklo, Slovenia (46°12′53.78″ N; 14°25′21.31″ E) at a local vegetable producer. The plants were grown from 20 May to 15 October 2020, following the Integrated Production Guidelines [26]. Two cultivars were taken into consideration for this experiment. The first cultivar was the non-pungent Capsicum annuum L. 'Lombardo tago', and the second was the pungent Capsicum annuum L. 'Eris F1'. Both cultivars were grown under a high tunnel system. The average temperature and relative humidity were measured in the area of the plants and in storage. The average temperature in the environment was 18.2 °C, with 81.3 ± 2.4% relative humidity.
In the H. halys treatment, the bags contained five H. halys adults each. To exclude the potential impact of the bag itself, we placed empty bags on healthy plants, designating the treatment Control 2. In the Control 1 treatment, there were healthy plants that were not subjected to H. halys infestation. H. halys adult bugs were caught in an apple orchard in Bilje, Slovenia. For easier control over the bugs, insect netting bags were made (the netting density was 0.8 mm × 0.8 mm) and were sealed with a rubber string. The bugs were in the netting for one month, from 15th August to 15th September. Two to three such bags were placed on each plant and tied tightly to prevent the bugs from escaping. Each bag contained 12 to 15 fruits, resulting in 72 to 90 pepper fruits per treatment. They were divided into equal repetitions for metabolite analysis (five repetitions for each treatment).
Pepper Picking
Peppers were picked when they reached at least 95% cultivar-specific color (in both cases red), were firm to the touch, and had a shiny pericarp. At the time of picking, 'Lombardo tago' peppers were 12 to 15 cm long and up to 2.5 cm in diameter, and 'Eris F1' peppers were 10 to 14 cm long and up to 1.5 cm in diameter.
Storage Conditions
Half of the picked peppers were stored in a storage room for one month to test whether H. halys impacts the storage life and metabolite composition of peppers. After storage, the peppers were checked sensorially (visual damage, odor) for any damage and prepared for metabolite analysis. The optimal storage temperature for peppers is between 7 °C and 10 °C, as reported by Hameed, Malik, Khan, Imran, Umar, and Riaz [17] and Lama, Alkalai-Tuvia, Chalupowicz and Fallik [18]. In our study, the average storage temperature was 8.1 °C with a relative humidity of 97.2%. The pericarp and placenta were used for metabolite analysis on the HPLC/MS system (MS/MS; LTQ XL; Thermo Scientific, Waltham, MA, USA and Vanquish; Thermo Scientific, Waltham, MA, USA).
Water loss of stored peppers was measured by weighing them before and after storage. From the fresh mass and dry mass data, we determined the percent of water loss in individual treatments.
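As an illustration, a minimal sketch of the calculation as we read it; the exact formulas are not spelled out in the text, so both definitions below are stated assumptions.

```python
def percent_mass_loss(mass_before_g, mass_after_g):
    # Assumed: water lost during storage, relative to the pre-storage mass.
    return 100.0 * (mass_before_g - mass_after_g) / mass_before_g

def percent_water_content(fresh_mass_g, dry_mass_g):
    # Assumed: water fraction of the tissue from fresh vs. lyophilized mass.
    return 100.0 * (fresh_mass_g - dry_mass_g) / fresh_mass_g
```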
Extraction of Sugars and Organic Acids
Lyophilized dried powder (0.05 g) was extracted with 2 mL of bi-distilled water at room temperature. Samples were shaken on an orbital shaker for 30 min and placed in a cooled centrifuge (Eppendorf, Centrifuge 5810 R, Hamburg, Germany), in which the samples were rotated at 12,000× g for 8 min. Samples were filtered through 0.25 µm cellulose filters (Chromafil A-25/25; Macherey-Nagel, Düren, Germany). The extraction procedure and the HPLC settings were based on Zamljen, et al. [27]. All data were calculated with the help of appropriate standards. The results are presented as g kg −1 or mg kg −1 dry weight, depending on the content of each metabolite. The pericarp and placenta were used for total sugar and organic acid analysis.
Extraction of Phenolics and Capsaicinoids
For each of the metabolite extractions, 0.05 g of dry powder was extracted with 80% methanol. Samples were then placed in a cooled ultrasonic bath (0 °C) for 1 h. The samples were centrifuged at 10,000× g for 6 min and filtered through a 0.25 µm polyamide filter (Chromafil AO-45/25, Macherey-Nagel, Düren, Germany).
Total phenolic contents were determined based on Singleton, et al. [28]. The total phenolic contents were presented in g kg −1 GAE (gallic acid equivalents). For individual phenolics and capsaicinoids, the HPLC/MS settings were based on Medic, et al. [29] and Zamljen, et al. [27], respectively. All individual phenolics and capsaicinoids were first determined by HPLC/MS based on Mikulic-Petkovsek, et al. [30] and Zamljen, Jakopič, Hudina, Veberič, and Slatnar [27]. All phenolics and capsaicinoids were calculated based on appropriate standards. Where no standard could be obtained, we calculated the data as equivalents of similar substances obtainable as standards. All luteolin glycosides and chrysoeriol 7-O-(2-apiosyl-6-acetyl) glucoside were calculated as luteolin-7-glucoside equivalents. Apigenin glycosides were calculated as apigenin-7-glucoside equivalents. Quercetin glycosides were calculated as quercetin-3-glucoside equivalents. Homocapsaicin was calculated as capsaicin and homodihydrocapsaicin as dihydrocapsaicin equivalents. The results of individual phenolics and capsaicinoids were expressed in g kg −1 or 10 −3 g kg −1 dry weight, depending on the content of each metabolite.
Statistical Analysis
Program R, version 2.7.1, Stanford, USA (package Rcmdr) (Team, R.D.C., 2008) was used for statistical analysis. First, the data were checked for normality, and equality of variances was assessed using Levene's test. F-values, p-values, and df were placed in the footnotes of each table. The df were the same in each treatment and table, since the number of treatments was the same every time (df = 2). Where significant treatment effects were found using multi-way analysis of variance (MANOVA), the post hoc Duncan test was performed, comparing all treatments to the Control 1 healthy plant treatment. The factors in the MANOVA were fruit part, storage, H. halys infestation, and cultivar. The cultivars were tested separately. The dependent variables were sugars, organic acids, phenolics, and capsaicinoids. In the results, we presented the Wilks' λ, F, and p values. The significance level was α ≤ 0.05. In tables where no statistical differences could be observed, no letters were written, for an easier overview of the data.
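The analysis was performed in R (Rcmdr). As an illustration only, a rough Python analogue is sketched below with scipy/statsmodels; Duncan's post hoc test has no standard statsmodels implementation, so Tukey's HSD is shown as a stand-in, and the data-frame columns are hypothetical.

```python
import pandas as pd
from scipy.stats import levene
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyze(df: pd.DataFrame):
    """df: hypothetical long-format frame with a 'treatment' column
    (Control1 / Control2 / Hhalys) and metabolite responses 'sugars',
    'acids', 'phenolics', 'capsaicinoids'."""
    # Equality-of-variances check (Levene's test) for one response.
    groups = [g["sugars"].values for _, g in df.groupby("treatment")]
    print(levene(*groups))

    # MANOVA across the metabolite responses; reports Wilks' lambda, F, p.
    fit = MANOVA.from_formula(
        "sugars + acids + phenolics + capsaicinoids ~ treatment", data=df)
    print(fit.mv_test())

    # Post hoc comparisons (Tukey HSD as a stand-in for Duncan's test).
    print(pairwise_tukeyhsd(df["sugars"], df["treatment"], alpha=0.05))
```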
Visual Appearance of Pepper Fruit and Water Loss
Visual inspection of the fruit from all three treatments did not reveal any visual changes. All fruit were visually marketable at both inspections, before and after 28 d of storage (Figure 1). A noticeable smell came from the H. halys-infested fruit at both inspections, before and after storage. This smell is common for H. halys, and it derives from the trans-2-octenal and trans-2-decenal aldehydes [31]. We could not see any marks on the fruit, because the stylet bundle with which the insect feeds is only 45 µm in diameter, resulting in microscopic puncture holes [32], and the red color of the peppers made them harder to spot.
The Control 1 treatment lost more water compared to the other two treatments during storage. In general, the placenta lost more water than the pericarp in the pungent 'Eris F1'. Among treatments, statistical differences were present in the pungent 'Eris F1' pericarp before and after storage, where the Control 1 treatment had approximately 3% higher contents compared to the H. halys and Control 2 treatments. Water loss is closely associated with cultivar, cell membrane ion leakage, lipoxygenase activity, and cuticular wax amount [33]. With controlled temperature, relative humidity, and, in certain cases, atmosphere, we can control most of these processes, especially the respiration rate, in the fruit or vegetable, reducing water loss and thus maintaining quality [34]. Water loss after storage was minimal in all treatments (Table 1).
The H. halys treatment increased the total sugar content in the pericarp and placenta compared to the empty-bag treatment (Control 2) in the pungent cultivar 'Eris F1' (Table 2). Sucrose was not statistically impacted by the H. halys attack in either tissue of 'Eris F1'. In the placenta of 'Eris F1' attacked by H. halys, fructose decreased by 33.3% compared to the healthy Control 1 plants, but increased by 57.5% compared to the Control 2 treatment. In the non-pungent 'Lombardo tago' pericarp, the H. halys-attacked fruit had 48.6 g kg−1 and 75.0 g kg−1 fewer total sugars than the Control 1 and Control 2 treatments, respectively. A greater effect of H. halys was observed in the placenta of 'Lombardo tago', in which it decreased the total sugar content by 33.2% compared to the Control 1 treatment. After storage, total sugar content decreased in the Control 1 treatment, whereas the sugar content of the H. halys and Control 2 treatments increased slightly in the pericarp of the 'Eris F1' cultivar; the same occurred with glucose in the pericarp and fructose in the placenta of 'Eris F1'.
In the 'Lombardo tago' pericarp, the H. halys treatment after storage contained approximately 66% more total sugars than in the Control 1 treatment and 46.3% more than in the Control 2 treatment.
Individual and Total Organic Acid Contents
Seven organic acids were determined in pungent and non-pungent peppers (Table 3). Oxalic acid was not detected in the pericarp of either cultivar.
In the pungent cultivar 'Eris F1', the H. halys presence was more noticeable in the placenta. Compared to the Control 1 treatment, H. halys reduced citric acid content by 21.4 g kg −1 in the placenta of fruit before storage. It increased fumaric acid content in fruit both before and after storage.
Halyomorpha halys had a significantly more noticeable effect on organic acids in the pericarp of the non-pungent 'Lombardo tago'. Before storage, H. halys increased quinic acid and fumaric acid contents by 15.1% and 161.7%, respectively, in the pericarp. After storage, malic acid and fumaric acid increased in the H. halys-attacked fruit. Quinic acid content increased during storage, and the Control 1 treatment had 19.2 g kg−1 more quinic acid after storage compared to the H. halys treatment. The placenta of the non-pungent 'Lombardo tago' showed no differences among organic acid contents, except for ascorbic acid.
Ascorbic acid content was not affected by H. halys in the pericarp of either the pungent or the non-pungent cultivar. On the other hand, H. halys had a significant impact on the ascorbic acid content of the placenta. In the pungent cultivar 'Eris F1', the Control 1 treatment had 75.0% higher ascorbic acid content than the H. halys treatment before storage and 64.9% higher after storage, although storage negatively impacted the ascorbic acid content of all treatments in both the placenta and the pericarp.
H. halys decreased ascorbic acid content in the placenta of the non-pungent 'Lombardo tago' before storage by 27.9%. After storage, the H. halys treatment had 27.0% less ascorbic acid in the placenta than the Control 1 treatment. H. halys had no effect on ascorbic acid content in the pericarp of either cultivar.

Table 3. Individual and total organic acid contents before and after storage (g kg−1; mean ± SE; n = 15) of two Capsicum cultivars (two fruit parts) infested with H. halys.
After storage, the contents of most phenolics decreased compared to pre-storage levels in the pericarp of the non-pungent 'Lombardo tago' cultivar. Total analyzed phenolics in the pericarp were 0.2 g kg−1 and 0.1 g kg−1 lower in the H. halys treatment than in the Control 1 and Control 2 treatments, respectively. Total phenolic contents increased by 2.4 g kg−1 in the H. halys-attacked fruit compared to the Control 1 treatment.
The healthy plants of the Control 1 treatment had approximately 15 times lower total capsaicinoids than the H. halys-infested plants and approximately 5 times lower total capsaicinoids than the Control 2 plants (Figure 3). Accumulation of total capsaicinoids in the placenta of 'Eris F1' before storage was significantly impacted by H. halys, with 14.0 g kg−1 and 25.3 g kg−1 higher capsaicinoid contents compared to the Control 1 and Control 2 treatments, respectively. After storage, all capsaicinoid contents dropped, although the H. halys treatment still had the highest total capsaicinoid content in the pericarp of the 'Eris F1' cultivar. In the placenta, the total capsaicinoid level dropped to 2.0 g kg−1 in the H. halys treatment, resulting in 2.1 g kg−1 and 2.7 g kg−1 lower concentrations compared to the Control 1 and Control 2 treatments, respectively.
Dihydrocapsaicin, nordihydrocapsaicin, homocapsaicin, and homodihydrocapsaicin were all higher in the H. halys treatment than in the Control 1 plants in the non-pungent 'Lombardo tago' cultivar pericarp. No differences were observed between the H. halys treatment and the Control 2 treatment. All capsaicinoids in the placenta were higher in the H. halys treatment than in the Control 1 and Control 2, except for nordihydrocapsaicin. The total capsaicinoid contents in the placenta of 'Lombardo tago' increased almost four times compared to both controls.
After storage, the non-pungent 'Lombardo tago' lost most of its pungency. The total capsaicinoids in the pericarp were higher in the Control 1 treatment and lower in the other two treatments (Figure 3 and Table S3). The placenta after storage lost most of its capsaicinoids, and no differences were observed among treatments.
Discussion
We tested the potential of the brown marmorated stink bug (Halyomorpha halys Stål.) to induce a plant response after feeding on and damaging pepper fruits, in terms of primary and secondary metabolites. In our experiment, we infested two cultivars of peppers, the pungent 'Eris F1' and the non-pungent 'Lombardo tago'. We analyzed the individual sugars, organic acids, phenolics, and capsaicinoids of fresh peppers and of peppers after 28 days of storage, to see whether the impact of H. halys persists after storage. We observed a change in individual sugar contents and total sugar content in the fresh fruit before and after storage in both cultivars. Where statistical differences were present, glucose was the most affected of the three sugars determined. Sugars are known to be signaling molecules in plants [35]. Morkunas and Ratajczak [35] also reported that plants with higher sugar contents are more resistant to pest and disease attacks. Glucose and sucrose both act as signaling molecules, of which sucrose affects gene expression [36]. Major differences were observed between the two cultivars in terms of the response to the insect attack. This is mainly due to genetic variation among species and cultivars of plants. An insect infestation or an infection by pathogenic fungi always impacts the sugar metabolism in plants, although this response varies greatly with the host-pest system [35]. In our case, we observed a significant decrease of total sugars in the H. halys-infested non-pungent 'Lombardo tago'. On the other hand, in the pungent 'Eris F1', the H. halys treatment increased sugar content only when compared to the Control 2 treatment, and no changes were observed compared to the Control 1 treatment, which may indicate that the sugar synthesis response occurred across the entire plant. A possible explanation of this occurrence could be the genetic variability within the cultivar, or that sugars could be translocated to the infested fruit [37]. Another reason may be that more sugars were consumed for energy, since the infested plants had to battle an insect attack [35,36], or they were used in other synthesis pathways, such as the shikimic or phenylpropanoid pathway [38]. Naturally, a certain percentage of sugars is also consumed by the pathogen or insect.
After storage, a general decrease in all metabolites was observed in both cultivars. Interestingly, a noticeable decrease of total sugars was observed in the Control 1 treatment and an increase in H. halys infested plants. A possible reason for this may be that due to H. halys feeding, the plant reacted with the synthesis of thicker cell walls to prevent damage. Cell walls and other cell parts are broken down during ripening or storage, which results in higher sugar contents [39].
Total organic acids in the placenta of 'Eris F1' decreased in the H. halys treatment compared to the Control 1 treatment. Fumaric acid was lower in the Control 1 treatment compared to the other two treatments. Fumaric acid plays multiple roles in a plant: it fuels cellular respiration or functions as an alternative carbon sink for photosynthate [40]. In total organic acid contents, major differences were observed between the two cultivars, which indicates that a plant's response to insect attack is also cultivar dependent. After storage, significant differences were observed in total organic acid content: the H. halys treatment had a higher total organic acid content than both control treatments. Similar results were reported by Zushi and Matsuzoe [41], where stressed tomato plants had higher organic acid contents after storage compared to the control. After storage, we observed an increase in malic acid in the H. halys-attacked fruit. Similar results were reported by Leiss, et al. [42]: malic acid content increased when western flower thrips (Frankliniella occidentalis) attacked chrysanthemum (Dendranthema grandiflora).
There was not an overall strong phenolic response in the pungent 'Eris F1' cultivar. Among the 16 determined individual phenolics, only luteolin-6,8-di-C-hexoside changed in the pericarp of the pungent 'Eris F1', and luteolin-7-O-(2-apiosyl-6-acetyl) hexoside and luteolin-7-O-(2-apiosyl-6-malonyl) hexoside in the placenta. After storage, three individual phenolics (luteolin-6-C-hexoside, luteolin-6,8-di-C-hexoside, and quercetin-3-O-rhamnoside) also showed significant differences among treatments, whereby both luteolins were highest in the H. halys treatment. On the other hand, the non-pungent 'Lombardo tago' had a much greater phenolic response to the H. halys attack. Before storage, flavonols and flavones increased in both fruit parts. H. halys increased the total phenolic content in the pericarp of 'Lombardo tago' and decreased the TPC in the placenta before storage. After storage, seven individual phenolics in the pericarp of 'Lombardo tago' increased in the H. halys treatment compared to the Control 1 treatment. The effect of H. halys was less noticeable in the placenta of 'Lombardo tago' after storage, with only two individual phenolics being impacted by its attack compared to the Control 1 treatment. An increase in phenolics is common, since they are among the main defense molecules against biotic and abiotic stressors [43], which was also demonstrated by our results. Flavonoids are well known to be defense molecules in plants [44]. In our study, we observed an increase in individual phenolics such as luteolins, which belong to the flavones. Flavones are known to be defense molecules, as previously noted by Soriano, Asenstorfer, Schmidt, and Riley [23], who reported an increased synthesis of flavones as a response to an invasion by parasitic nematodes in oats (Avena sativa). High levels of flavones are also present in the silks of maize (Zea mays), which increase the resistance to corn earworm [45]. Flavonoids can deter or attract insects. They can protect the plant from herbivores or sucking insects, since they can alter the palatability of the plants, reducing their nutritive value, and they can decrease digestibility or act as toxins [43]. A similar pattern was observed in our study, whereby the pepper plants (especially the non-pungent 'Lombardo tago') infested with H. halys reacted with an increased accumulation of flavones and flavanols, especially luteolins.
A possible reason for the less noticeable phenolic reaction of the pungent 'Eris F1' is its capability to synthesize high capsaicinoid levels. Capsaicinoids consist of a phenolic moiety (from the phenylpropanoid pathway) and a fatty acid moiety (from fatty acid metabolism) [46]. Phenolics can be used to synthesize capsaicinoids, which would explain why the effect of H. halys on phenolics was not as noticeable in the pungent 'Eris F1' as in the non-pungent 'Lombardo tago' cultivar.
H. halys increased total capsaicinoids and, in most cases, also individual capsaicinoids in the before-storage samples, compared to both control treatments. After storage, individual capsaicinoids dropped in both cultivars. Interestingly, the placenta of 'Eris F1' dropped from high capsaicinoid contents in the H. halys treatment before storage to low contents after storage. Capsaicinoids are known to be repellents for mammals and other animals [47]. Capsaicin acts as a Drosophila ovipositional repellent, and it influences their lifespan, climbing behavior, and digestive tracts [22]. Capsaicin has also been shown to be an onion fly ovipositional deterrent, impacting the insects' thermoregulation, which can cause a change in the behavior of the insect, especially with breeding or laying eggs [48][49][50].
Conclusions
Our study provides a detailed insight into the pre- and post-harvest quality of pungent and non-pungent peppers infested with the brown marmorated stink bug (H. halys). We were able to confirm that both cultivars responded to the H. halys infestation with an increase in metabolites, particularly phenolics and capsaicinoids, indicating that infestation had occurred even though visual damage was not observed. The observed increase in capsaicinoid synthesis in the infested fruit indicates that capsaicinoids may have a defensive function against H. halys and could be an interesting area of investigation for further experimentation. After storage, all metabolite levels decreased, although the H. halys treatments still had the highest metabolite levels, showing that the effect of H. halys feeding on pepper fruits persists even after the insects have long disappeared. | 2021-11-10T16:17:31.115Z | 2021-11-08T00:00:00.000 | {
"year": 2021,
"sha1": "26469be013be07641f8a4723eae83ab85c795d11",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/11/11/2252/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8b459421428fc1163a0580345f54e54ef691482d",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
139416033 | pes2o/s2orc | v3-fos-license | Copper-to-copper direct bonding on highly (111) oriented nano-twinned copper in no-vacuum ambient
In this study, we achieve Cu-to-Cu direct bonding by using (111)-oriented nano-twinned Cu in a N2 ambient, without vacuum conditions, and no additional thermal annealing treatment is needed. A well-bonded interface under a temperature gradient between 400 °C and 100 °C was identified by the evidence of grain growth across the bonding interface. In addition, a creep-assisted bonding mechanism for the Cu-Cu direct bonding is proposed. Moreover, a shear test was applied to the bonded joints to investigate the bonded joint strength and its fracture mode. The mean value of the joint strength is 176 MPa, which is significantly higher than that of conventional solder joints. In summary, Cu-to-Cu direct bonding by (111)-oriented nt-Cu in a no-vacuum ambient, as well as the grain evolution across the bonding interface, has been achieved and verified.
Results
Fabrication of test dies and Cu surface treatment. Figure 1 illustrates the fabrication procedures of the test dies used in this study. In the first step, thermal oxidation was applied to produce a thin silicon oxide layer on the surface of Si wafers. Then, a 100 nm thick Ti diffusion barrier and a 200 nm thick Cu seed layer were sputtered on the oxidized wafers by a physical vapor deposition (PVD) process. Subsequently, the processed wafers were subjected to two different process steps for top and bottom die fabrication. For the top die, photolithography and electroplating were applied to form a nt-Cu pillar bump array. For the bottom die, only an electroplating process was performed to form a nt-Cu structured thin film. Figure 2a,b show the as-deposited microstructures of the pillar bump and thin film with nt-Cu, respectively. A texture of columnar grains with lamellar nanotwins can be observed. Owing to the rough surface condition of the Cu, the surfaces of the Cu pillar bumps and thin films were flattened via a chemical-mechanical polishing (CMP) process. Figure 2c,d reveal the post-CMP Cu surface conditions. Figure 2e,f display the AFM scanning results. The measured root mean square roughness values (Rq) are 5.12 nm and 1.82 nm for the nt-Cu bumps and films, respectively. Before the bonding experiments, wet etching was performed to remove organic contaminants and the oxide layer. The test dies were rinsed with deionized water, followed by a short immersion in a mixed solution of citric acid and deionized water (in the ratio 133 g/100 ml) at 25 °C for about 30 s. Then, they were rinsed again with deionized water and dried by N2 purging before bonding.
Cu-to-Cu direct bonding under a temperature gradient. Thermal compression bonding (bonding pressure = 162 MPa) was performed in a N2 purging atmosphere with a temperature gradient between the top and bottom dies, as shown in Fig. 3a,b. The schematic in Fig. 3a depicts a temperature gradient between 450 °C (top die) and 100 °C (bottom die). However, the gradient is reversed in Fig. 3b, where the top die was at 100 °C and the bottom die at 400 °C. The reason for studying the reversed temperature gradient will be given later. Figure 4 shows the cross-sectional focused ion beam (FIB) images of the bonded microstructure for different bonding times of 5, 10, and 15 min under the temperature gradient depicted in Fig. 3a. Despite the fact that Cu oxidation cannot be prevented during bonding without vacuum, Fig. 4 shows well-bonded interfaces with only a few voids. Importantly, the bonding time is as short as 5 min. Figure 4a reveals that grains evolved from the pillar bump. This is because high temperature and plastic deformation can trigger the recrystallization process in the pillar bump, but not in the thin film. We speculate that the pillar bump stores sufficient strain energy and dislocations for the nucleation and growth of strain-free grains during recrystallization 19.
Yet, no recrystallization occurred in the thin film. One possible reason is that its temperature was lower than that of the bump during compression bonding, as shown in Fig. 3a. Therefore, we carried out experiments under a reversed temperature gradient, as depicted in Fig. 3b, so that the thin film had the higher temperature. The outcome was the same, independent of the direction of the temperature gradient. Figure 5 shows the FIB images of the bonded interfaces under the reversed temperature gradient for varying bonding times; the grain growth patterns are similar to those shown in Fig. 4. Alternatively, the other possible reason may be an uneven stress distribution during bonding. To verify this assumption, we performed a finite-element simulation with the bonding condition depicted in Fig. 3a. The finite element model in Fig. 6a reveals that a localized high-stress region occurs in the pillar bump, while a low-stress state simultaneously exists in the thin film right below the pillar bump. The pillar bump possesses higher stress than the Cu film, which could enhance the atomic diffusion for recrystallization, resulting in subsequent grain growth. On the other hand, the thin film, which is under a relatively low stress condition, is more stable against recrystallization. As a result, there is a significant difference in grain growth evolution between the bump and the thin film.
For a further investigation of the stress-assisted grain evolution, we made a comparative study between the 162 MPa and 81 MPa bonding stress conditions. For the stress condition of 162 MPa, most of the nt-structure along the bonding interface was transformed into a few randomly oriented grains or grains with annealing twins (Fig. 6b). For the bonding stress condition of 81 MPa, the nt-grains in the upper pillar bump were transformed into large grains, and some upper grains in the pillar bump grew into the lower thin film (Fig. 6c), similar to the high-stress condition. However, many nt-grains on the thin film side can still be observed, unlike the results observed for the case of 162 MPa. Thus, we believe that the lower stress state in the thin film slowed the progress of grain evolution.
Upon further annealing, grain growth across the bonding interface started from the pillar bump and proceeded into the thin film. For this, direct bonding must be performed first. Therefore, we discuss the mechanism of bonding before reporting the grain growth across the bonded interface.
Cu-to-Cu direct bonding mechanism. During thermal compression, stress-induced surface diffusion (creep) simultaneously occurred at the bonding interface. The surface-diffusion-induced creep is analogous to Nabarro-Herring creep by lattice diffusion and Coble creep by grain boundary diffusion 20,21 . In these creep models, the driving force for atomic flux is the stress potential gradient 22 . Hence, atoms or vacancies can migrate either within the grains or along the grain boundaries. In the present case, atomic diffusion occurs along the bonding interface. Under compression, a stress potential gradient develops between the contacted regions and the non-contacted regions along the interface, as depicted by the schematic diagrams in Fig. 7a. This induces creep by surface diffusion, which migrates atoms from the strained regions to the unstrained (void) regions, as shown in Fig. 7b. This creep would in turn produce new Cu-Cu atomic bonds across the interface. Figure 7c shows the result of the bonded region, including the contacted and non-contacted areas along the bonding interface.
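For orientation, the two diffusional creep limits mentioned above scale differently with grain size d: Nabarro-Herring (lattice diffusion) goes as 1/d², while Coble (grain-boundary diffusion) goes as 1/d³. The sketch below compares the two order-of-magnitude strain rates; only the applied stress is taken from the text, all other material parameters are illustrative assumptions, and dimensionless prefactors are omitted.

```python
k = 1.380649e-23   # Boltzmann constant, J/K
T = 673.0          # ~400 C in K
sigma = 162e6      # applied bonding stress, Pa (from the text)
omega = 1.18e-29   # atomic volume of Cu, m^3
d = 1.0e-6         # grain size, m (assumed)
delta = 5.0e-10    # grain-boundary width, m (assumed)
D_l = 1.0e-19      # lattice diffusivity at T, m^2/s (assumed)
D_gb = 1.0e-13     # grain-boundary diffusivity at T, m^2/s (assumed)

# Nabarro-Herring: strain rate ~ sigma * Omega * D_l / (d^2 * k * T)
nh = sigma * omega * D_l / (d**2 * k * T)
# Coble: strain rate ~ sigma * Omega * delta * D_gb / (d^3 * k * T)
coble = sigma * omega * delta * D_gb / (d**3 * k * T)

print(f"Nabarro-Herring ~ {nh:.1e} 1/s")
print(f"Coble           ~ {coble:.1e} 1/s  (boundary paths dominate at fine d)")
```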
Bonding interface characterization.
To validate the aforementioned unique grain growth behavior, a comparative study of nt-Cu and polycrystalline Cu was conducted. Both nt-Cu-to-nt-Cu and polycrystalline-Cu-to-polycrystalline-Cu joints were bonded under the temperature gradient between 450 °C and 100 °C for 15 min. Then, the bonded samples were subjected to a mechanical cross-sectioning process for microstructure observations. By comparing the cross-sectional images shown in Fig. 8a,b, different bonding scenarios were identified. The bonded nt-Cu has fewer voids at the bonding interface (Fig. 8a). In contrast, an unbonded region and numerous voids were found in the bonded polycrystalline Cu (Fig. 8b).
For a detailed structural analysis, two different morphologies were observed. Figure 8c shows the columnar grains in the pillar bump that have already agglomerated into a large grain, which connected with the lower de-twinned columnar grains in the thin film. At the bonding interface, very few voids can be found. For the bonded polycrystalline Cu, the non-twinned grains were joined together, but several small voids (smaller than 100 nm) were found along the bonding interface (Fig. 8d). In order to characterize the bonded interface between the grains, high-resolution transmission electron microscopy (TEM) images were acquired (Fig. 8e,f). For bonding of (111) nt-Cu bumps to (111) nt-Cu films, although the roughness values of the nt-Cu surfaces are larger than those of the polycrystalline Cu surfaces, the average void size/area for the nt-Cu joints is smaller than that of the polycrystalline Cu joints. Using the TEM images in Fig. 8a-d, we measured the area of interfacial voids; the measured average void area for the nt-Cu joints is 2600 nm 2 , whereas it is 5500 nm 2 for the polycrystalline Cu joints. There were 5 voids in the nt-Cu joint and 12 in the polycrystalline Cu joint. It was reported that the surface diffusion of Cu on (111) planes is faster by 3-4 orders of magnitude than on other major planes 17,18 . Therefore, the (111) surface facilitates the diffusion of Cu atoms, so that voids at the bonding interface can be filled by the Cu atoms. In addition, the nanotwins may be beneficial in reducing interfacial voids. It has been reported that nanotwins may serve as vacancy sinks that hinder the formation of Kirkendall voids during metallurgical reactions of Sn and Cu 23,24 . This is because there were many inherent twins in the electroplated Cu, and these defects may be able to absorb vacancies. A statistical study on the void distribution needs to be performed in the future.
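The contrast in interfacial voiding can be summarized from the reported counts and mean areas; a small bookkeeping sketch using only the values quoted above:

```python
# Reported interfacial void statistics (values from the text)
joints = {
    "nt-Cu":           {"count": 5,  "mean_area_nm2": 2600.0},
    "polycrystalline": {"count": 12, "mean_area_nm2": 5500.0},
}

for name, v in joints.items():
    total = v["count"] * v["mean_area_nm2"]
    print(f"{name:>15}: {v['count']:>2} voids, total area ~ {total:,.0f} nm^2")
# nt-Cu: ~13,000 nm^2 vs polycrystalline: ~66,000 nm^2 -> roughly 5x more voiding
```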
Grain growth across the bonding interface from triple junctions. For grain growth across the bonded interface, we found that it initiates at the triple junctions (TJs) of grain boundaries, as shown in Fig. 4a. As the bonding time is increased to 10 min, well-grown grains are found at the TJs along the bonding interface in Fig. 4b. Some of the grains in the thin film merged to form a long, inclined grain with boundaries approaching the bottom seed layer. When the bonding time was increased to 15 min, annealing twins were observed in Fig. 4c, where straight-sided crystals have lattices arranged in a symmetrical manner. In Fig. 4a-c, the microstructure within the rectangular area of the dashed line has been enlarged in the images on the right-hand side. Similar grain growth is observed in Fig. 5.
To study the grain growth behavior across the interface, we ensured a temperature gradient by maintaining the top die at 400 °C and the bottom die at 100 °C. Note that grain growth is quite rapid under the 450 °C/100 °C temperature gradient condition, so we lowered the bonding temperature to slow the grain evolution. Figure 9a shows a cross-sectional FIB image of a well-bonded interface, and Fig. 9b an electron backscattered diffraction (EBSD) image of the grain orientations and boundaries. As indicated by the dashed squares in Fig. 9a,b, grain growth occurs from the upper to the lower side of the joint. In Fig. 9c, the higher-magnification FIB image shows that grain growth is initiated at the TJs of grain boundaries. We investigated the bonding interface at the TJs through TEM observations. Figure 9d shows that the grains at the TJ consist of an upper grown grain and two lower nanotwinned columnar grains. Along the boundaries of the TJ, the atomic arrangement is disordered and loose (Fig. 9e); it differs from the dense packing of a face-centered cubic arrangement.
Discussion
We analyze herein the mechanism of boundary movement and energy change. In (111)-oriented nt-Cu, which has columnar grains, each grain consists of parallel twin lamellae with a high density of coherent twin boundaries (CTBs). All the columnar grains have a common tilt axis, so all the grain boundaries are tilt-type grain boundaries. Besides, the tilt-type columnar grain boundaries (CGBs) contain a high density of TJs (where CTBs meet CGBs). It has been reported that TJs have a significant influence on grain growth due to their energetic or kinetic (accelerating) effects 25 . In addition, high-angle tilt-type boundaries (HATBs) have been confirmed to have greater mobility than low-angle tilt-type boundaries 26 . Therefore, CGBs with HATBs and a high density of TJs would enhance the recrystallization process 27 . An analysis explaining why the movement of the bonding interface is preferentially initiated at the TJs and extends along CGBs with HATBs is presented below.
In view of energy reduction, the recrystallization process resulting in the formation of new strain-free grains will attempt to minimize the grain boundary energy, twin energy, and strain energy involved. This is why, while the upper grains grew into the lower grains in the thin film, as shown in Fig. 10a, the twin lamellae and their CTBs were consumed gradually. We express the energy change as the inequality

γ_GB(s_AB + s_DC) + γ_CTB·s_CTB + σ·V > γ_GB(s_AC + s_BD),

where γ and s represent the interfacial free energy and area of the boundaries, respectively. The GB term corresponds to the grain boundary energy. The subscripts AB and DC refer to the grain boundaries before grain growth, and AC and BD refer to the new grain boundaries formed after grain growth. In addition, CTB indicates a coherent twin boundary. Moreover, the strain energy and volume are denoted by σ and V, respectively. If this inequality is satisfied — that is, if the boundary and strain energy consumed exceeds the energy of the newly formed boundaries — then grain growth in the vertical direction can occur.
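As a rough plausibility check on this inequality, one can plug in literature-typical boundary energies for Cu: a random high-angle grain boundary costs roughly an order of magnitude more per unit area than a coherent twin boundary, so the balance is dominated by the GB area eliminated versus created plus the released strain energy. All numbers in the sketch below are assumed, order-of-magnitude values, not measurements from this study.

```python
# Plausibility check of the energy balance for vertical grain growth.
gamma_gb = 0.6                  # high-angle GB energy, J/m^2 (assumed)
gamma_ctb = 0.025               # coherent twin boundary energy, J/m^2 (assumed)
s_ab, s_dc = 1.0e-12, 1.0e-12   # GB areas consumed, m^2 (assumed)
s_ac, s_bd = 0.8e-12, 0.8e-12   # GB areas created, m^2 (assumed)
s_ctb = 5.0e-12                 # CTB area consumed, m^2 (assumed)
strain_energy_density = 1.0e6   # sigma, J/m^3 (assumed)
volume = 1.0e-18                # volume swept by the boundary, m^3 (assumed)

released = gamma_gb * (s_ab + s_dc) + gamma_ctb * s_ctb \
           + strain_energy_density * volume
created = gamma_gb * (s_ac + s_bd)
print(f"released {released:.2e} J vs created {created:.2e} J -> "
      f"growth {'favorable' if released > created else 'unfavorable'}")
```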
For comparison with the model of Fullman and Fisher 28 , annealing twin formation near the TJ during grain growth is a mechanism to reduce the free energy, as shown in Fig. 10b. The corresponding inequality takes the same form, with the coherent-twin-boundary term counted as an energy cost rather than an energy gain. However, it should be noted that our case is opposite to that of Fullman and Fisher: their objective is to form a twin, whereas ours is to eliminate it. Nevertheless, both cases must lead to a reduction in the total energy. Furthermore, our case involves strain energy besides twin energy and grain boundary energy. Indeed, further studies on the kinetics of growth will be needed.
Conclusions
In summary, vacuum-free Cu-to-Cu direct bonding using (111)-oriented nt-Cu has been achieved through thermal compression bonding for 5 min under a temperature gradient. To investigate the recrystallization occurring across the bonding interface, experiments with different bonding times and temperature gradients were performed. A surface diffusion creep-assisted bonding mechanism has been proposed to account for the observed direct bonding.
Methods
nt-Cu test using top/bottom dies.
In this study, highly (111)-oriented nt-Cu was electroplated on the top and bottom dies. The dimensions of the test samples are 5 mm × 5 mm for the top die and 20 mm × 20 mm for the bottom die. The pillar bumps in the array on the top die are 30 μm in diameter and 22 μm to 24 μm in height. The thickness of the thin film on the bottom die ranges from 2 μm to 4 μm. The bonding structure is designed to be amenable to a chip-to-wafer die stacking structure. Because the bottom die carries a blanket nt-Cu film, Cu-to-Cu direct bonding can be applied without an alignment process and can be performed with a simple tool.
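As a quick sanity check on the loads involved, the nominal compressive force carried by each bump follows from the applied pressure and the bump cross-section. The geometry and pressures below are from the text; the per-bump force itself is our own back-of-the-envelope figure, not a value reported in the paper.

```python
import math

bump_diameter_um = 30.0          # from the text
pressures_mpa = [162.0, 81.0]    # the two bonding stress conditions studied

area_m2 = math.pi * (bump_diameter_um * 1e-6 / 2) ** 2  # bump cross-section

for p in pressures_mpa:
    force_n = p * 1e6 * area_m2  # F = P * A
    print(f"{p:.0f} MPa -> {force_n * 1e3:.1f} mN per 30-um bump")
# 162 MPa -> ~114.5 mN; 81 MPa -> ~57.3 mN
```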
Chemical-mechanical planarization. The slurry used in this study is the commercial high-removal-rate chemical-mechanical-polishing slurry TSV-C1015-02 (Cabot Microelectronics). It contains 0.3 wt.% colloidal silica as abrasive grits, with an average grit size of around 70 nm in diameter; 3 wt.% of an oxidant, hydrogen peroxide (H2O2), is added while polishing. The pH of the slurry is within the range of 3 to 4. The removal rate is around 800 nm/min to 1000 nm/min, depending on the pressure and plate-rotation speed. The slurry also contains an inhibitor to prevent the undesired removal of the concave surface, achieving global planarization and sub-nanoscale roughness on a finished surface. The measured root mean square roughness values (Rq) are 5.12 nm and 1.82 nm for the nt-Cu bumps and films, respectively, whereas the measured Rq values are 2.53 nm and 0.69 nm for the polycrystalline Cu bumps and films, respectively.
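The pressure and speed dependence of the removal rate is usually captured by Preston's empirical relation, RR = Kp · P · v. A hedged sketch with an assumed Preston coefficient chosen so the prediction lands in the quoted 800-1000 nm/min window; none of the process parameters below are taken from the text.

```python
# Preston's empirical CMP model: removal rate = Kp * P * v
kp = 7.0e-13       # Preston coefficient, 1/Pa (assumed)
pressure = 2.0e4   # down-pressure, Pa (~3 psi, assumed)
velocity = 1.0     # pad-wafer relative speed, m/s (assumed)

rr_m_per_s = kp * pressure * velocity
rr_nm_per_min = rr_m_per_s * 1e9 * 60
print(f"Predicted removal rate: {rr_nm_per_min:.0f} nm/min")  # ~840 nm/min
```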
Pre-cleaning of Cu surface. Wet etching was applied to remove the organic contaminants and oxide layer before bonding. The test dies were rinsed with deionized water, followed by a short immersion in a mixed solution of citric acid and deionized water (133 g/100 ml) at 25 °C for about 30 s. Then, they were rinsed again with deionized water and dried using N2 purging before bonding.
Direct bonding with temperature gradient. Thermal compression bonding was applied in a N2 purging atmosphere with a temperature gradient between the top and bottom dies. For the grain growth study, different bonding times of 5, 10, and 15 min were considered for the temperature gradient between 450 °C (top die) and 100 °C (bottom die). To determine whether the grain growth behavior is affected by the direction of the temperature gradient, we reversed the temperature gradient so that the top die was at 100 °C and the bottom die at 400 °C during the thermal compression process for the same bonding times (5, 10, and 15 min). In addition, for the observation of anisotropic grain growth behavior at the bonding interface, we lowered the temperature gradient by keeping the top die at 400 °C and the bottom die at 100 °C for 20 min.
Examination of bonded interfaces. A FIB was employed to observe the bonded Cu-Cu interface and grain growth behavior. Electron backscattered diffraction was performed with a JSM-7800F PRIME field-emission scanning electron microscope equipped with a Nordlys Max3 EBSD detector. Aztec EBSD software was employed to analyze the orientation maps and crystallographic textures based on the Kikuchi patterns. The microstructures of the bonded interfaces were examined with a JEOL JEM-F200 scanning transmission electron microscope. The TEM examinations were performed at 200 kV, with a point-to-point resolution of 0.23 nm and a lattice resolution of 0.14 nm. The surface roughness values of the Cu films were measured using scanning probe microscopy (Veeco Dimension 3100).
Finite element simulation. Finite element analysis was carried out to simulate the thermomechanical behavior of the Cu-to-Cu direct bonding structure. Ansys software was adopted for the simulation, and a four-noded 182-type element was applied in the two-dimensional model. The total numbers of nodes and elements in this model were 59,431 and 57,928, respectively. | 2018-01-16T18:22:31.327Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "17dfcd7a0daaedd608a670e5620bbea2b6a68b69",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-32280-x.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "764e8264319f4e3253f0980020c28f218a07e4cd",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
225525580 | pes2o/s2orc | v3-fos-license | Analysis of Asian Mitochondrial DNA Haplogroups Associated With the Progression of Knee Osteoarthritis in Koreans
INTRODUCTION
Osteoarthritis (OA) is the most common degenerative disease related to the degradation of articular cartilage. Many studies have been performed to evaluate the relationship between OA and mitochondrial DNA (mtDNA). Changes in intracellular signals such as in mitochondrial respiratory chain activity in chondrocytes may be associated with degenerative changes in cartilage affected by low-grade chronic inflammation [1].
In a previous study, we examined whether mtDNA haplogroup B contributed to the development of knee OA in Koreans, measured as radiologic changes over approximately 8 years in a large-scale prospective cohort [2]. Identifying the relationship between mitochondrial haplogroups and the development of OA is important for identifying the risk factors of chronic disease [3][4][5][6][7][8].
However, considering the nature of OA, which is characterized by very long-term changes, it is also important to identify factors that can predict which patients with OA will experience rapid OA progression. Various cohort studies have reported that specific mtDNA haplogroups are also associated with the radiographic progression of OA [9][10][11]. They suggested that the haplogroups associated with progression differed from the mtDNA haplogroups involved in OA development. Therefore, to determine which mtDNA haplogroups are associated with OA progression in Koreans, we developed a new study design based on a previous study protocol [2]. The aim of this study was to investigate Asian mtDNA haplogroups associated with the progression of knee OA in participants in a prospective ongoing community-based cohort in Korea.
MATERIALS AND METHODS
Study design and participants
As described in our previous study [2], we used the Ansung cohort of an ongoing, prospective cohort study that is part of the Korean Genome and Epidemiology Study [12]. In the present study, mtDNA haplogroups related to the progression of OA were examined by modifying the experimental design of the previous study. Briefly, epidemiologic data and Kellgren-Lawrence (K/L) scores of the knee radiographs were obtained from the second follow-up (2005∼2006) and sixth follow-up (2013∼ 2014) of this cohort. The K/L scores were measured by an orthopedist (KKI) and radiologist (SY) at the second follow-up visit with excellent inter-observer correlation coefficients and by a radiologist (SY) at the sixth follow-up visit with excellent intra-observer correlation coefficients [13]. The institutional review boards of all involved institutions approved this study (approval no. HYUH 2015-12-022).
Overall, there were 5,018 participants, and we obtained DNA samples from 1,115 participants (Figure 1). We defined early OA as a K/L score of 1 or 2 in both knees at the second follow-up in order to identify progression, rather than as a criterion for the development of OA, as in previous studies of the association between mtDNA and OA development [11]. Among the participants, 405 met the definition of early OA at the second follow-up and were divided into two groups: K/L score change ≤1 in both knees (non-progression group, n=143) and K/L score change ≥2 in either knee or arthroplasty (progression group, n=166) at the sixth follow-up. All participants with missing values for the K/L score at the sixth follow-up were excluded (n=96).
Statistical analysis
Differences between the non-progression and progression groups at the sixth follow-up were investigated by Student's t-test and the Pearson chi-square test. Multiple logistic regression was used to determine the relative risk (RR) of mtDNA haplogroups for OA by adjusting for sex, age, and body mass index (BMI), because the incidence of knee OA is high in women and the elderly, and obesity is a risk factor for OA [15]. Smoking and metabolic syndrome were excluded from the adjusted model because these factors are correlated with sex and BMI, respectively. p-values < 0.05 indicated statistical significance. All statistical analyses were performed using PASW software version 18.0 (IBM Co., Armonk, NY, USA).
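For readers reproducing this kind of adjusted analysis outside PASW/SPSS, the model reduces to a logistic regression of progression status on haplogroup membership plus the covariates, with exponentiated coefficients read off as adjusted effect estimates. A minimal Python sketch; the file and column names are hypothetical, and this is not the authors' actual workflow:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis table: one row per participant, with 'progression'
# coded 1 for K/L change >= 2 (or arthroplasty) and 'hap_D4' flagging
# mtDNA haplogroup D4 membership.
df = pd.read_csv("ansung_oa.csv")  # hypothetical file name

model = smf.logit("progression ~ hap_D4 + age + C(sex) + bmi", data=df).fit()
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% CIs on the odds-ratio scale
```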
RESULTS
Baseline clinical characteristics
The clinical characteristics of the participants are described in Table 1. There were no significant differences in age between the non-progression and progression groups. The proportion of females was significantly higher in the progression group than in the non-progression group (88.6% vs. 76.9%, respectively). The rates of smoking, drinking, diabetes, and hypertension were not significantly different between groups. However, the BMI and rate of metabolic syndrome were significantly higher in the progression group than in the non-progression group (26.74±3.21 vs. 25.33±3.26 and 77.7% vs. 62.0%, respectively).
mtDNA haplogroups associated with non-progression and progression of OA
The haplogroup frequencies of the non-progression and progression groups and the RRs are shown in Table 2. Among the haplogroups, haplogroups B and D4 showed the highest frequencies (15.9% and 10.0%, respectively). In the multiple logistic regression analysis, no haplogroup showed a significant RR for the progression of OA in the unadjusted model, the model adjusted for age, sex, and BMI, or the model adjusted for age, sex, BMI, smoking, and metabolic syndrome. Among the haplogroups, the proportion of non-progression patients in haplogroup D4 was likely higher than that of patients showing progression; however, this difference between the two groups was also not significant.
DISCUSSION
Our previous study suggested that participants with haplogroup B had a higher risk of OA development [2]. In the present study, we observed no significant relationship between the haplogroups and OA progression. Haplogroup B, which was associated with the development of OA in our previous study [2], appeared to be related to OA progression but did not show a significant difference in the present study. Haplogroup D4 showed a low frequency in the progression group but the value was not significant.
Several studies in western countries have described the relationship between OA progression and mtDNA haplogroups. Soto-Hermida et al. [10] found that patients with haplogroup T had the lowest increase in K/L score (hazard ratio=0.499; 95% confidence interval [95% CI]: 0.261∼0.815) and in other radiographic indicators of progression such as joint space narrowing, osteophytes, and subchondral sclerosis. They also studied OA progression and mtDNA haplogroups in a Spanish cohort [11]. Patients in cluster TJ showed slower radiographic OA progression than patients in cluster KU (hazard ratio=1.711; 95% CI: 1.037∼2.823).
In a case-control study of Asians, Fang et al. [16] reported that haplogroup G increased the risk of OA occurrence (OA group 4.3% vs. control 1.4%, odds ratio [OR]=3.834; p=0.03) and patients with haplogroup G showed a higher severity of progression (K/L score 4) of knee OA (OR=10.870, p=0.007). Additionally, they showed that haplogroup D4/D4a was related to the higher-severity OA. Although the designs of their studies differed from those of our cohort study, the frequency of haplogroup D4 may be lower in the progression group than in the non-progression group. In East Asians, the frequent sequence variations in the Korean population were very similar to those in Japanese and Northern Chinese populations [17]. However, the frequency of the haplogroups related to disease may vary by country.
In previous studies investigating the progression of OA, a K/L score change ≥1 was defined as OA progression [10,11]. However, we defined a K/L score change ≥2 after approximately 8 years as progression among patients who had a K/L score of 1 or 2 in both knees at the second (baseline) follow-up. Even when using this strict definition of OA progression, no meaningful haplogroup was identified. This result is thought to be related to the small number of participants defined as having early OA. However, this is the only study in Korea to evaluate the association between knee OA progression and mtDNA haplogroups. Additional large-scale studies are necessary to identify the mtDNA haplogroups related to OA, which will improve early diagnosis and prevention in patients at a high risk of OA.

Several studies have suggested that haplogroup D4 is related to type 2 diabetes mellitus (DM). Liou et al. [18] suggested that haplogroup B4 was significantly associated with DM (OR 1.54 [95% CI 1.18∼2.02], p<0.001), whereas haplogroup D4 showed borderline resistance against type 2 DM (OR 0.68 [95% CI 0.49∼0.94], p=0.02) in a Chinese population in Taiwan [18]. However, Jiang et al. [19] suggested that haplogroup D4 is associated with an increased risk of developing type 2 DM (OR 1.47 [95% CI 1.22∼1.77], p<0.01) in a Uyghur population in China and that the 3010G>A variant is likely involved in the pathogenesis of type 2 DM. Fuku et al. [20] also suggested that haplogroup D4b in Korean men was associated with an increased risk of DM (OR 3.55 [95% CI 1.65∼8.34], p<0.01). Although the relationship between DM and haplogroup D4 shows variable results, considering that DM is associated with OA [21], haplogroup D4 may be associated with OA in Koreans.

Our study had several limitations. First, we investigated the progression of knee OA according to the K/L score change in knee radiographs after approximately 8 years. This follow-up period may not be sufficient to observe the progression of OA on knee radiographs. Long-term research designs using elaborate degenerative-change screening methods are required to acquire more participants. Second, knee OA is generally defined as a K/L score of 2 or more; however, in our study, participants with a K/L score ≥1 at baseline were defined as having OA, as described previously [11]. Although the definition we used led to a larger number of participants, the sample size was still too small to obtain meaningful results. Third, although the parameters were adjusted for OA progression, we also need to consider variables related to risk factors for OA, such as anatomic factors, bone density, and physical activity. Fourth, functional scores such as the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) are important for evaluating dysfunction in patients with OA. However, this information was not available for the Ansung cohort. The WOMAC score could complement the K/L-score-based definition of progression.
CONCLUSION
In conclusion, no mtDNA haplogroup was found to be associated with the progression of OA in Koreans. Although not significant, haplogroup D4 may be associated with slower progression of OA. Large-scale studies are needed to determine the relationship between mtDNA haplogroups and OA. | 2020-07-16T09:03:07.614Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "16150f8be74623253ab561ac1e6e4499d48e2ca4",
"oa_license": null,
"oa_url": "http://www.jrd.or.kr/journal/download_pdf.php?doi=10.4078/jrd.2020.27.3.168",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a12d0cdcecc4dab2b0d3560efbf49fc88e0a27a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
234361793 | pes2o/s2orc | v3-fos-license | Mortality among Hospitalized Dengue Patients with Comorbidities in Mexico, Brazil, and Colombia
Abstract. Dengue patients with comorbidities may be at higher risk of death. In this cross-sectional study, healthcare databases from Mexico (2008–2014), Brazil (2008–2015), and Colombia (2009–2017) were used to identify hospitalized dengue cases and their comorbidities. Case fatality rates (CFRs), relative risks, and odds ratios (ORs) for in-hospital mortality were determined. Overall, 678,836 hospitalized dengue cases were identified: 68,194 from Mexico, 532,821 from Brazil, and 77,821 from Colombia. Of these, 35%, 5%, and 18% were severe dengue, respectively. Severe dengue and age ≥ 46 years were associated with increased risk of in-hospital mortality. Comorbidities were identified in 8%, 1%, and 4% of cases in Mexico, Brazil, and Colombia, respectively. Comorbidities increased hospitalized dengue CFRs 3- to 17-fold; CFRs were higher with comorbidities regardless of dengue severity or age. The odds of in-hospital mortality were significantly higher in those with pulmonary disorders (11.6 [95% CI 7.4–18.2], 12.7 [95% CI 9.3–17.5], and 8.0 [95% CI 4.9–13.1] in Mexico, Brazil, and Colombia, respectively), ischemic heart disease (23.0 [95% CI 6.6–79.6], 5.9 [95% CI 1.4–24.6], and 7.0 [95% CI 1.9–25.5]), and renal disease/failure (8.3 [95% CI 4.8–14.2], 8.0 [95% CI 4.5–14.4], and 9.3 [95% CI 3.1–28.0]) across the three countries; the odds of in-hospital mortality from dengue with comorbidities were at least equivalent to or higher than those from severe dengue alone (4.5 [95% CI 3.4–6.1], 9.6 [95% CI 8.6–10.6], and 9.0 [95% CI 6.8–12.0]). In conclusion, the risk of death because of dengue increases with comorbidities independently of age and/or disease severity.
INTRODUCTION
Dengue incidence has increased 30-fold in the last 50 years, with geographic expansion to new countries and, more recently, from urban to rural settings. 1 The disease is currently endemic in more than 100 countries, with the Americas, Southeast Asia, and the Western Pacific the most affected regions. 2 The Americas had 14% (13 million infections) of apparent dengue infections worldwide in 2010, over half of which occurred in Brazil and Mexico. 3 In 2017, there were 89,893 notified dengue cases in Mexico, 252,054 in Brazil, and 26,279 in Colombia. 4 However, the true magnitude of the dengue burden is likely underestimated. 5 Early detection and access to medical care can reduce fatality rates to less than 1%. 2 Between 2014 and 2017, the annual dengue case fatality rate (CFR) ranged from 0.02% to 0.04% in Mexico, 0.04% to 0.07% in Brazil, and 0.06% to 0.16% in Colombia. 4 There is no specific treatment for dengue. In dengue endemic regions, preventative measures include vector control, avoidance of getting bitten, and vaccination. The recombinant, live, attenuated, tetravalent dengue vaccine (Dengvaxia ® ; CYD-TDV) 6 is indicated for the prevention of dengue disease in individuals confirmed to be dengue-seropositive aged 9-16 years or 9-45 years depending on specific country/ regional approval. 7,8 Individuals who are dengue-seronegative should not be vaccinated, as they are at increased risk of severe dengue following vaccination. Currently, the dengue vaccine is registered in 19 countries in Asia and Latin America, as well as in eligible parts of the European Union and the United States. 9 Underlying chronic disorders may have the potential to contribute to the severity of physiological responses to dengue infection or vice versa (i.e., the physiological responses to dengue infection may exacerbate some pre-existing comorbidities), resulting in a worse outcome. A number of small, retrospective, case-control, and case-review studies have identified some comorbidities as possible risk factors that might influence development of severe dengue and denguerelated mortality. [10][11][12][13][14][15] However, there have been few largescale studies assessing the impact of comorbidities on the CFR from dengue. The aims of this study were to examine dengue-related hospitalization and CFRs in Mexico, Brazil, and Colombia using health system databases, and to assess the impact of comorbidities on in-hospital dengue mortality. A greater understanding of the role of underlying comorbidities in the development of severe outcomes would help better target dengue vaccination strategies as well as clinical monitoring to ensure prompt, aggressive supportive therapy for those at high-risk, and thus lead to a reduction of denguerelated mortality.
METHODS
Data sources. Anonymized data from three health system databases were used in this study. The Mexican Subsistema Automatizado de Egresos Hospitalarios (SAEH) is the main hospital discharge database for all Ministry of Health hospitals in Mexico, representing 38.3% of total services provided in the country. 16 During the study period 2008-2014, the SAEH database included data on 19.2 million hospital admission records from 817 hospitals. 17 We previously used the same dataset in another analysis assessing the burden of dengue on hospital services in Mexico. 18 The Brazilian Hospital Information System of the Unified Health System (SIH/SUS) covers 70-80% of hospital admissions in Brazil. During the study period 2008-2015 the SIH/SUS database included data on 92 million hospital admission records from 5,983 hospitals. 19 The Colombian Registro Individual de Prestaciones de Salud (RIPS) database, maintained by the Colombian Ministry of Health, contains information regarding hospitalizations, services, and supplies provided, as well as medicine and outpatient care. During the study period 2009-2017, the RIPS database included data on 13.5 million hospital admission records from 11,208 hospitals (approximately 70-75% of the services provided).
Dengue is a notifiable disease in the three countries. Primary and secondary diagnoses, based on the International Classification of Diseases, 10th Revision (ICD-10) codes, 20 were used across all three countries to identify dengue cases and comorbidities from the databases. Dengue cases were classified as either non-severe (ICD-10 code = A90; classical dengue) or severe dengue (ICD-10 code = A91; dengue hemorrhagic fever); the dengue diagnosis code position (primary or secondary) was not taken into consideration in this analysis. The number of available fields for reporting of secondary diagnosis codes varied by country and, in Brazil and Mexico, over the duration of the study: Brazil (2008-2014: 1 field; 2015: 9 fields); Mexico (2008-2009: 6 fields; 2010-2014: unlimited fields); Colombia (3 fields). For consistency within each country analysis, only the first secondary code was considered for all years in Brazil; up to six secondary codes were considered for Mexico, and up to three secondary codes for Colombia. Comorbidities were identified from a preliminary analysis of ICD-10 codes (the first three characters) associated with in-hospital mortality in patients with dengue. All identified ICD-10 codes for comorbidities were then grouped into larger categories: diabetes, HIV, heart failure, hypertension, ischemic heart disease, dyslipidemia, obesity, pulmonary disorders, renal disease or failure, stroke, urinary disorders, and infectious diseases (excluding dengue) (Supplemental Table S1). To avoid bias in the analysis, codes considered to represent symptoms or complications of dengue or severe dengue, such as fever, headache, and dehydration, were excluded.
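This classification step amounts to mapping each admission's three-character ICD-10 prefixes onto the comorbidity categories. A hedged sketch of the logic, with only a few example prefixes shown (the complete mapping is in Supplemental Table S1; the function and dictionary names here are hypothetical):

```python
# Illustrative subset of the ICD-10 prefix -> comorbidity category mapping
COMORBIDITY_GROUPS = {
    "diabetes": {"E10", "E11", "E12", "E13", "E14"},
    "hypertension": {"I10", "I11", "I12", "I13", "I15"},
    "renal": {"N17", "N18", "N19"},
}

def classify(icd10_codes):
    """Return the comorbidity groups matched by a case's ICD-10 codes."""
    prefixes = {code[:3].upper() for code in icd10_codes}
    return {g for g, codes in COMORBIDITY_GROUPS.items() if prefixes & codes}

# A90 = non-severe dengue; the secondary codes flag the comorbidities
print(classify(["A90", "E11.9", "N18.5"]))  # {'diabetes', 'renal'}
```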
Outcome measures. We recorded the number of cases of hospitalized dengue, non-severe and severe, and the proportion of cases with a specified comorbidity in each of the Mexican, Brazilian, and Colombian databases. CFRs were calculated as the proportion of recorded cases of dengue that were fatal during the study period (only in-hospital mortality during the same hospitalization was considered), and they were calculated separately for non-severe and severe dengue cases with and without comorbidities. Relative risk (RR) was calculated as the ratio of the CFR in hospitalized dengue cases with comorbidities to that in cases without comorbidities. Odds ratios (ORs) were calculated to determine the impact of comorbidities, dengue severity, age on admission, and year of admission on in-hospital mortality and intensive care unit (ICU) admission. ORs were derived from the multivariate regression analysis, whereas RRs were based on the univariate analysis; of note, ORs approximate RRs when the event is rare.
Statistical analyses. Patients were stratified according to age on hospital admission: 0-8 years, 9-45 years, 46-60 years, or ³ 61 years. The RR and associated 95% confidence intervals (CIs) were calculated according to standard formulae, 21 and P values were calculated using Fisher's exact tests. To ensure reliability/robustness of estimated CFRs and RRs, comorbidities by age group were only reported for age groups with at least five cases and one death, and a P value < 0.05 (Fisher's exact test).
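To make the univariate calculation concrete, the sketch below computes a CFR-based RR with the standard log-method 95% CI and a Fisher's exact p value from a 2×2 table. The counts are invented for illustration; they are not values from the study.

```python
import numpy as np
from scipy.stats import fisher_exact

# 2x2 table: rows = comorbidity yes/no, cols = died/survived (counts assumed)
a, b = 30, 970     # deaths / survivors among dengue cases WITH comorbidities
c, d = 100, 9900   # deaths / survivors among dengue cases WITHOUT comorbidities

cfr_with, cfr_without = a / (a + b), c / (c + d)
rr = cfr_with / cfr_without

# Log-method 95% CI for the relative risk
se_log_rr = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
_, p = fisher_exact([[a, b], [c, d]])

print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), Fisher exact p = {p:.2g}")
```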
The effects of risk factors on the binary outcome measures of in-hospital mortality and ICU admission were examined in random-effects multivariate logistic regression models, including a random intercept for hospitals. Risk factors included in the multivariate logistic regression models were comorbidities identified in the univariate analysis in at least two age groups in at least two databases. Infectious diseases were excluded to prevent potential confounding effects because of the presence of differential diagnoses for dengue/other infections in this category. Dengue severity, age, and year of admission were also included in the models. The age stratum of the patient on admission (the reference category was age 9-45 years, which corresponds to the indicated age for dengue vaccination in Mexico and Brazil [the vaccine is not yet registered in Colombia]) and the year of admission in the database (2008 in Mexico and Brazil, and 2009 in Colombia as the reference category) were also included in the models to adjust for the potential confounding effects of patient age and admission year. The coefficients derived from these logistic regressions were exponentiated to obtain adjusted ORs and associated 95% CIs and P values.
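The final step, turning fitted coefficients into adjusted ORs, is just exponentiation. The hedged sketch below uses a plain logistic model with hospital entered as a fixed effect as a simple stand-in for the random hospital intercept actually fitted; the file and variable names are hypothetical, and this is not the authors' estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

cases = pd.read_csv("admissions.csv")  # hypothetical per-admission table

# Hospital as a fixed effect approximates the random hospital intercept
fit = smf.logit(
    "died ~ severe + pulmonary + ischemic_heart + renal"
    " + C(age_grp) + C(year) + C(hospital)",
    data=cases,
).fit(disp=False)

print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% CIs on the OR scale
```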
Analyses were performed using the KNIME 22 analytic platform (Knime: 3.5.2 integrated with KEM ® , 23 Ariana Pharmaceuticals) data mining tools, MySQL database and R statistical software, the glm function of the stats base package of the R statistical software (R: 3.4.3 "Kite-Eating Tree"), and the melogit command for multilevel mixed-effects logistic regression in Stata 15.1 ® .
RESULTS
Cases of hospitalized dengue. Overall, 678,836 hospitalized dengue cases were identified in the three databases assessed across the three countries. There were 68,194 hospitalized dengue cases identified from the Mexican database during 2008-2014, of which 44,357 (65%) were reported as non-severe dengue and 23,837 (35%) as severe dengue; and there were 267 in-hospital deaths among these cases (Supplemental Table S2). In the Brazilian database, 532,821 hospitalized dengue cases were identified during 2008-2015, of which 505,697 (95%) were reported as non-severe dengue and 27,124 (5%) as severe, and there were 2698 in-hospital deaths among these cases (Supplemental Table S3). From the Colombian database, 77,821 hospitalized dengue cases were identified during 2009-2017, of which 63,579 (82%) were reported as non-severe dengue and 14,242 (18%) as severe, and there were 260 in-hospital deaths among these cases (Supplemental Table S4).
Prevalence of comorbidities. Of the hospitalized dengue cases in Mexico, there was an additional diagnosis of at least one of the specified comorbidities in 4,047 (9%) of the nonsevere dengue cases and 1,672 (7%) of the severe dengue cases (Supplemental Table S2). In Brazil, comorbidities were seen in 3,721 (0.7%) of the non-severe dengue cases and 283 (1%) of the severe cases (Supplemental Table S3); and in Colombia, comorbidities were seen in 2,505 (4%) of the nonsevere cases and 474 (3%) of the severe cases (Supplemental Table S4). In general, the prevalence of comorbidities was lowest in the 9-to 45-year age group and increased with age ( Figure 1). Comorbidities with the highest prevalence in all three countries were other infectious diseases (Supplemental Table S5 summarizes the top [accounting for 95% of codes] ICD-10 codes in the infectious disease A00-A99 comorbidity category reported in this study), diabetes, urinary disorders, pulmonary disorders, and hypertension (Supplemental Tables S2, S3, and S4). When the type of comorbidity was compared between hospitalized dengue cases and other hospitalized non-dengue cases, there was a much higher prevalence of other infectious diseases, pulmonary disorders, and urinary disorders among dengue cases (Supplemental Figure S1).
Case fatality rates. The CFRs for hospitalized dengue were higher in the presence of common comorbidities in Mexico, Brazil, and Colombia, regardless of dengue severity or age (Figure 2). However, the highest CFRs were seen in individuals with both severe dengue and comorbidities in the different age groups, reaching 5.9% for the 0- to 8-year age group in Mexico, 32.6% for the ≥ 61-year age group in Brazil, and 15.4% for the 46- to 60-year group in Colombia. In comparison, CFRs for severe dengue without comorbidities across the age groups were 0.4-0.6%, 2.4-10.3%, and 0.5-3.1% in Mexico, Brazil, and Colombia, respectively. Comorbidity with renal disease or failure, pulmonary disorders, and infectious diseases increased the hospitalized dengue CFR at any age in Mexico, as did renal disease or failure and infectious diseases in Brazil (Supplemental Table S6). The relative risk of death among hospitalized dengue patients with comorbidities compared with those without comorbidities was higher across all ages, with CFRs 7-17 times higher in Mexico, 5-12 times higher in Brazil, and 3-13 times higher in Colombia (Figure 3).
Impact of risk factors on outcomes. The risk of in-hospital mortality was significantly higher among hospitalized dengue patients with pulmonary disorders, ischemic heart disease, and renal disease/failure comorbidities versus those without these comorbidities, and the risk was consistent across the three countries (Figure 4). Age ≥ 46 years at admission versus 9-45 years was also associated with a higher risk of in-hospital mortality in all countries, with the greatest risk for the oldest group (≥ 61 years) (Figure 4). In Brazil, there was a higher risk of in-hospital mortality from 2011 to 2015 relative to 2008 (reference year).
Data on ICU admission were unavailable in Colombia. In general, ICU admission rates for dengue cases (all dengue cases, dengue only, or dengue with comorbidity) were 4.5-to 9.5-fold lower in Mexico than Brazil (Supplemental Table S2 and S3). Pulmonary disorders, ischemic heart disease, and renal disease/failure comorbidities were also significant risk factors for ICU admission, along with diabetes in Brazil ( Figure 4). In Mexico, renal disease/failure and older age were not significantly associated with an elevated risk of ICU admission, which may be because of the small number of admissions over this period: 122 of 68,194 hospitalized dengue cases (Supplemental Table S2).
DISCUSSION
Although the proportion of hospitalized dengue cases with associated comorbidities was relatively small in the three countries assessed, the impact on mortality was significant. The presence of comorbidities increased the CFRs of hospitalized dengue by 3-to 17-fold compared with cases with no comorbidities. Moreover, the CFRs for hospitalized dengue were higher in the presence of common comorbidities regardless of dengue severity or age. Crucially, our study showed that the risk of death in hospitalized dengue cases was consistently increased across the three countries in cases with comorbid pulmonary disorders, ischemic heart disease, and/or renal disease/failure. These comorbidities were also significant risk factors for ICU admission, along with diabetes in Brazil, but the data on ICU admissions were generally limited in Mexico or unavailable for Colombia. We confirm other studies demonstrating that renal failure or renal insufficiency 12,24 and ischemic heart disease 25 increase the risk of severe dengue and/or in-hospital mortality. In contrast, despite odds ratios above 1 in some cases, we were unable to confirm that diabetes, [11][12][13]25,26 hypertension, 12,14 secondary infectious diseases, 12 asthma, 26 and allergies 13,14 significantly increased hospitalized dengue CFR. Given the increasing incidence of many of these comorbidities globally, 27 and locally to Mexico, Brazil, and Colombia, 28 the number of dengue cases with these comorbidities will likely increase in future years, leading to greater hospital resource use and cost, in addition to increasing in-hospital death rates. The global average cost (direct and indirect) per hospitalized dengue case was estimated (in 2013 United States dollars) to be $70.10, with costs varying by region depending on income, from $56 in low income regions to $1,146 in high income regions. 29 The costs per fatal case are substantial, estimated at $84,730 for children and $75,820 for adults. Thus strategies that reduce hospitalized cases, and dengue-related mortality in particular, would likely lead to considerable cost savings.
We also showed that severe dengue was associated with an increased risk of in-hospital mortality and ICU admissions (where data were available) versus non-severe dengue in all three countries. Older age in individuals above 45 years was also a risk factor for in-hospital mortality, as well as for ICU admissions in Mexico and Brazil. Older age has been previously shown to lead to higher fatality rates in dengue cases 12,[30][31][32] and increased length of hospital stay. 33 In addition, the prevalence of comorbidities was lowest in the 9- to 45-year age group and increased with age, which predisposes the elderly to an increased risk of dengue mortality. Of note, in Brazil, there was a higher risk of dengue mortality observed from 2011 to 2015 relative to 2008 (reference year) in our study. It is possible that the increased recirculation of dengue serotype 1 in 2010, after many years of relatively low circulation rates, may have increased the number of serious manifestations of the disease in the subsequent years. 34 Differences in CFRs among the three countries may reflect differing practices in the management of dengue (or experience with the illness and resultant accuracy of ICD coding), as well as differing classification of severe dengue 35 and temporal circulation of the dengue serotypes.
Differences among countries concerning treatment guidelines, decision to hospitalize a patient, and the resulting clinical profiles of hospitalized dengue cases could all affect inhospital mortality. The results of our study give support to this possibility, because we observed substantial differences in age-specific CFRs across the three countries. However, the results of the multivariate analysis show that the direction and the strength of the association between mortality and comorbidities did not markedly vary among countries. This suggests that the potential variations in the clinical profile of hospitalized dengue cases did not substantially interfere in such associations, provided that the analysis was adjusted for age and dengue severity.
Several other studies have also implicated underlying comorbidities (in particular, pre-existing heart disease and diabetes) in severe outcomes of other arboviral diseases, including infection with West Nile, chikungunya, and tickborne encephalitis viruses. [36][37][38] In general, the etiological relationship between pre-existing comorbidities and disease severity remains to be fully elucidated. It is possible that the physiological responses to the viral infections may exacerbate some pre-existing comorbidities, but other comorbidities may contribute to the severity of the physiological responses to the infection.
There are several limitations to our study that need to be considered when making generalizations to other dengue endemic countries. The database used in Mexico captured mortality in public Ministry of Health hospitals only. In contrast, in the Brazilian database, hospitalizations were from public and private hospitals that provide services for the government, covering 70-80% of total hospital admissions in the country. However, the RIPS database included information from both private and public health provider institutions that are obliged to report the services and supplies provided to any patient (whether hospitalized or not) to the Colombian Ministry of Health. Thus, these findings may not be applicable across the broader Latin American population. In addition, the number of comorbidities reported for a hospital admission in Brazil was limited to one principal diagnosis and one secondary diagnosis for most of the study period assessed (see Methods), whereas multiple comorbidities were reported for Mexico and Colombia. The differences in the reporting of comorbidities (including reporting practices) may in part explain the 4-to 8-fold higher prevalence of comorbidities reported in the latter two countries compared with Brazil. Caution is encouraged in the interpretation of these data because of the relatively great uncertainty, as conveyed by the CIs, in the estimates provided.
The true in-hospital mortality because of dengue may also be underestimated because of variability in reporting requirements in the different countries and underdiagnosis owing to the nonspecific clinical presentation of the disease. In addition, the databases assessed were primarily for administrative/reimbursement purposes, and there was no independent validation or confirmation of the cases. For all countries, the analysis was based on a combination of clinical and/or laboratory (virologically confirmed) dengue diagnoses. Viral infectious diseases reported as a risk factor for in-hospital dengue mortality may be confounders in the analysis, and it is not clear if they represent co-infections (e.g., Zika, yellow fever, chikungunya) or differential diagnoses for dengue/other infections. Ideally, the analysis should have examined bacterial, viral, parasitic, and tuberculosis infections separately, but this information was not available. There is potential for reporting bias, as comorbidities are more likely to be documented, and more likely to be severe, in a hospital setting. The severity of comorbidities may bias the CFR but was not determined in this study. It is also possible that some of the reported comorbidities, such as pulmonary disorders or renal disease/failure, may have been a complication of dengue, but some of these complications may have been grouped as underlying chronic conditions. Nonetheless, our study highlights the importance of comorbidities in dengue deaths and the need for better protection measures against dengue infection for patients with comorbidities in the absence of specific antiviral treatments.
FIGURE 3. Relative risk of in-hospital mortality for all hospitalized dengue cases with comorbidities relative to cases with no comorbidities, stratified by age group. Shown are relative risks for all dengue cases, non-severe, and severe (proportion of reported cases of dengue that were fatal during the study period for cases with dengue with comorbidity vs. cases with dengue alone).
FIGURE 4. NC = noncalculable.
Effective allocation of resources to strategies such as vaccination, general protection measures against mosquito bites such as vector control (social/environmental) or personal protection (use of protective clothing, insect repellents, or nets), and surveillance for vector-borne infections will be important in preventing infections. 39 Dengvaxia is currently the only licensed dengue vaccine, but its use is restricted to those with evidence of prior dengue infection(s) (i.e., dengue seropositive) 7,8 so as to minimize the risk of severe dengue by avoiding vaccination of those without prior dengue infection (i.e., dengue seronegative). 40 Thus, determining the recipient's serostatus before administration of the vaccine remains a high priority. 41,42 In addition, the vaccine has variable efficacy against the four dengue serotypes (lower for serotypes 1 and 2 than for serotypes 3 and 4), [43][44][45] with an overall efficacy of 76% against symptomatic, virologically confirmed dengue up to 25 months after the first vaccination in those with evidence of prior dengue infection(s) aged ≥ 9 years. 40 Nonetheless, the overall number of infections would likely be unaffected because only seropositives would be targeted for vaccination. 41,42,46 A combination of sustained vector control and vaccination would be more effective in suppressing and maintaining the number of cases at very low levels than vaccination alone. 46

In conclusion, our retrospective study demonstrates that the risk of death because of dengue in adult populations in Mexico, Brazil, and Colombia increases with comorbidities independently of age and/or disease severity. Worldwide, there is an increasing elderly population and a high prevalence of comorbidities in dengue-endemic countries. These data support the need for prompt diagnosis and adequate care for the management of patients with comorbidities and dengue, as well as the use of preventive measures, such as dengue vaccination and vector control. 47 | 2021-05-12T06:16:52.018Z | 2021-05-10T00:00:00.000 | {
"year": 2021,
"sha1": "d7bdb89070e564307293ce3e7b1b402f34a60281",
"oa_license": "CCBY",
"oa_url": "https://www.ajtmh.org/downloadpdf/journals/tpmd/105/1/article-p102.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "935743d201e257b33606d3c0dc1248130950a19f",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266310001 | pes2o/s2orc | v3-fos-license | Fusarium Wilt Invasion Results in a Strong Impact on Strawberry Microbiomes
Plant-endophytic microbes affect plant growth, development, nutrition, and resistance to pathogens. However, how endophytic microbial communities change in different strawberry plant compartments after Fusarium pathogen infection has remained elusive. In this study, 16S and internal transcribed spacer rRNA amplicon sequencing were used to systematically investigate changes in the bacterial and fungal diversity and composition in the endophytic compartments (roots, stems, and leaves) of healthy strawberries and strawberries with Fusarium wilt, respectively. The analysis of the diversity, structure, and composition of the bacterial and fungal communities revealed a strong effect of pathogen invasion on the endophytic communities. The bacterial and fungal community diversity was lower in the Fusarium-infected endophytic compartments than in the healthy samples. The relative abundance of certain bacterial and fungal genera also changed after Fusarium wilt infection. The relative abundance of the beneficial bacterial genera Bacillus, Bradyrhizobium, Methylophilus, Sphingobium, Lactobacillus, and Streptomyces, as well as fungal genera Acremonium, Penicillium, Talaromyces, and Trichoderma, were higher in the healthy samples than in the Fusarium wilt samples. The relative abundance of Fusarium in the infected samples was significantly higher than that in the healthy samples, consistent with the field observations and culture isolation results for strawberry wilt. Our findings provide a theoretical basis for the isolation, identification, and control of strawberry wilt disease.
Introduction
Endophytic microbes colonize the plant endophytic compartment and considerably influence plants' growth, development, nutrition, and resistance to pathogens [1][2][3][4][5].In plants, microbial diversity varies across different niches, with the endophytic microbiome of plant roots being primarily absorbed from the soil and transported to the stems and leaves through extracellular vesicles in the xylem vessels [6,7].Unlike the process of the endophytic colonization of roots, stems, and leaves, that of the formation of rhizosphere microbial communities appears to be stable and controllable [8,9].The microbes that typically exist in root and stem tissues are either candidate symbionts or potential pathogens [10].Therefore, understanding the composition of endophytic microbial communities in plant roots, stems, and leaves is essential for the development and application of agricultural biological fertilizers and the regulation of plant diseases [9,11,12].Strawberries (Fragaria × ananassa) are a widely cultivated fruit with high nutritional and economic value [13,14].Because of the continuous expansion of strawberry plantation areas and the use of long-term continuous cropping, strawberries have become susceptible to several diseases, such as Fusarium wilt [15,16].Fusarium wilt is caused by a soil-borne pathogen called Fusarium oxysporum, which primarily infects plants such as strawberry, cucumber, tomato, banana, and celery; has a wide range of hosts; and results in major crop losses in fruits and vegetables worldwide [17][18][19][20][21]. Strawberry wilt primarily occurs during the seedling, flowering, and harvest stages of strawberries, with seedlings and soil carriers being the main reasons for recurrence [22,23].This disease typically infects the roots, stems, leaves, fruit stems, and petioles, and results in delayed plant development, wilt, crown discoloration, and ultimately, plant death [24][25][26][27].
High-throughput sequencing has been used to elucidate the structure of microbial communities and estimate the influence of pathogen infection on these communities. Studies have revealed significant differences in the bacterial and fungal communities in rhizosphere soil surrounding anthracnose-infected strawberries, powdery-mildew-infected strawberries, and healthy strawberry plants [28,29]. They have also indicated significant differences in the bacterial and fungal communities in the rhizosphere soil surrounding healthy bananas and banana plants with Fusarium wilt [30]. Another study has revealed significant differences in the bacterial community diversity and composition between healthy tobacco plants and tobacco with bacterial root and stem wilt [10]. However, no study has fully elucidated the changes that occur in the bacterial and fungal microbial communities in the endophytic compartments (roots, stems, and leaves) of strawberry plants during Fusarium wilt. Therefore, in the present study, we used 16S and internal transcribed spacer (ITS) rRNA amplicon sequencing to evaluate the changes that occur in the bacterial and fungal communities in the roots, stems, and leaves of healthy and Fusarium wilt-infected strawberry plants, respectively. We also evaluated the relative importance of these bacterial and fungal communities in the invasion process of Fusarium wilt.
Microbial Community Diversity and Structure of Healthy and Infected Samples
To compare the endophytic microbial communities in healthy and wilt-infected strawberry plants, we analyzed the bacterial and fungal communities present in the roots, stems, and leaves of such plants by sequencing the V5-V7 region of the 16S rRNA gene and the ITS1 region of the ITS, respectively. A total of 3,103,035 16S sequences and 3,804,986 ITS sequences obtained from 60 samples were analyzed. After chimeric and organellar sequences were excluded, the sequences were grouped into 1705 bacterial OTUs and 421 fungal OTUs, which were clustered at an identity threshold of 97%.
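As a rough illustration of clustering at a 97% identity threshold, the toy greedy centroid clusterer below groups reads by a simple position-wise match rate. It assumes equal-length, pre-aligned reads and is purely didactic; actual amplicon pipelines use dedicated clustering software rather than a script like this.

def identity(a, b):
    # Fraction of matching positions between two equal-length sequences.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_otus(reads, threshold=0.97):
    # Assign each read to the first centroid it matches at >= threshold;
    # otherwise open a new OTU with the read itself as the centroid.
    centroids, labels = [], []
    for read in reads:
        for i, c in enumerate(centroids):
            if identity(read, c) >= threshold:
                labels.append(i)
                break
        else:
            centroids.append(read)
            labels.append(len(centroids) - 1)
    return centroids, labels

reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGTAT", "TTTTACGTAC"]
centroids, labels = greedy_otus(reads)
print(len(centroids), labels)  # 3 [0, 0, 1, 2] for this toy input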
The alpha diversity indices (Shannon's and Simpson's indices) and community richness indices (Chao1, Ace, and Sobs indices) of the bacteria and fungi in the healthy and wilt-infected strawberry roots, stems, and leaves are summarized in Table S1. In the infected root, stem, and leaf samples, the Shannon index and community richness index of the bacteria and fungi exhibited a downward trend compared with those in the healthy root, stem, and leaf samples. The community richness index of the bacteria in the healthy root samples was significantly higher than that in the infected samples (Student's t test, p < 0.05), and the Shannon index and community richness index of the bacteria in the healthy stem samples were significantly higher than those in the infected samples (Shannon's index: p < 0.05, Student's t test; Chao1 index: p < 0.001, Student's t test; Figure 1A,B; Table S1). The Shannon index of the fungi in the healthy root, stem, and leaf samples was significantly higher than that in the infected samples (roots: p < 0.05, Student's t test; stems: p < 0.001, Student's t test; leaves: p < 0.001, Student's t test), and the community richness index in the healthy root and stem samples was significantly higher than that in the infected samples (roots: p < 0.001, Student's t test; stems: p < 0.01, Student's t test; Figure 1C,D; Table S1). These results indicate that the number of endophytic bacterial and fungal species in the healthy plants was higher than that in the infected plants. A principal coordinate analysis combined with an analysis of similarities revealed that compartment and Fusarium infection affected the composition of the bacterial and fungal communities (Figure 2A,B). The bacterial community structure in the leaves (R = 0.165, p < 0.05) of the healthy and infected strawberries underwent the most significant change, whereas the fungal community structure in the stems (R = 0.296, p < 0.05) of the healthy and infected strawberries underwent the most significant change (Figure S1A,B).
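For reference, the two indices emphasized here can be computed directly from a single column of an OTU count table; a minimal Python sketch follows (the counts are invented, and mothur's exact estimators may differ in detail, e.g., in the Chao1 bias correction used).

import math
from collections import Counter

def shannon(counts):
    # Shannon diversity H' = -sum(p_i * ln p_i) over nonzero OTU counts.
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def chao1(counts):
    # Bias-corrected Chao1 richness: S_obs + F1*(F1 - 1) / (2*(F2 + 1)),
    # where F1/F2 are the numbers of singleton/doubleton OTUs.
    s_obs = sum(1 for c in counts if c > 0)
    f = Counter(counts)
    return s_obs + f[1] * (f[1] - 1) / (2 * (f[2] + 1))

otu_counts = [120, 45, 45, 3, 1, 1, 2, 0]  # hypothetical OTU table column
print(round(shannon(otu_counts), 3), round(chao1(otu_counts), 2))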
Differences in Bacterial and Fungal Taxa in Healthy and Infected Samples
In the healthy and infected samples, the bacterial OTUs were classified into 36 phyla, 103 classes, 237 orders, 397 families, and 765 genera, and the fungal OTUs were classified into 4 phyla, 14 classes, 28 orders, 48 families, and 63 genera. Because 79.8% and 93.5% of the OTUs in the 16S and ITS data sets were identified as phylum Proteobacteria and phylum Ascomycota, respectively, these groups were further split into classes.
Six dominant bacterial phyla and Proteobacteria classes, with a relative abundance of ≥0.1%, were detected (Figure 3A). Among the Proteobacteria classes, Gammaproteobacteria and Alphaproteobacteria were the most dominant, with relative abundances of 63.7% and 16.0%, respectively. Alphaproteobacteria were more abundant in the infected roots than in the healthy roots, and Gammaproteobacteria were more abundant in the infected stems and leaves than in the healthy stems and leaves (p < 0.05, Student's t test; Figure 3A). A total of 11 dominant fungal phyla and Ascomycota classes, with a relative abundance of ≥0.1%, were detected (Figure 3B). Among the Ascomycota classes, Sordariomycetes and Dothideomycetes were the most dominant, with relative abundances of 70.0% and 9.8%, respectively. Sordariomycetes were more abundant in the infected roots, stems, and leaves than in the healthy roots, stems, and leaves (roots: p < 0.01, Student's t test; stems: p < 0.001, Student's t test; leaves: p < 0.05, Student's t test), and Basidiomycota were less abundant in the infected roots, stems, and leaves than in the healthy roots, stems, and leaves (roots: p < 0.01, Student's t test; stems and leaves: p < 0.05, Student's t test; Figure 3B).
At the genus level, 32 dominant bacterial groups, with a relative abundance of ≥0.3%, were detected. Allorhizobium, Neorhizobium, Pararhizobium, Rhizobium, and Delftia were more abundant in the infected roots than in the healthy roots, whereas Pelomonas, Methylophilus, and Bradyrhizobium were more abundant in the healthy roots than in the infected roots. Pantoea were more abundant in the infected stems than in the healthy stems, whereas Novosphingobium, Sphingobium, Methylophilus, unclassified Xanthomonadaceae, and Variovorax were more abundant in the healthy stems than in the infected stems. Pseudomonas, Pantoea, Klebsiella, and unclassified Enterobacterales were more abundant in the infected leaves than in the healthy leaves, whereas unclassified Xanthomonadaceae were more abundant in the healthy leaves than in the infected leaves (Figure 4A). A total of 16 dominant fungal genera, with a relative abundance of ≥0.3%, were identified. Fusarium were more abundant in the infected roots, stems, and leaves than in the healthy roots, stems, and leaves (stems: p < 0.001, Student's t test; leaves: p < 0.01, Student's t test), whereas Alternaria were more abundant in the healthy stems than in the infected stems (Figure 4B,C).
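The genus-level shifts above are summarized in Figure 4 as log2 fold changes; a minimal sketch of that computation follows, with invented abundance values and a small pseudocount to avoid taking the log of zero.

import numpy as np
import pandas as pd

# Hypothetical mean relative abundances (%) per genus, healthy vs infected roots.
abund = pd.DataFrame(
    {"healthy": [2.1, 0.9, 0.4], "infected": [0.5, 2.4, 0.4]},
    index=["Bradyrhizobium", "Rhizobium", "Variovorax"],
)

eps = 1e-6  # pseudocount
log2fc = np.log2((abund["infected"] + eps) / (abund["healthy"] + eps))
print(log2fc.round(2))  # negative = depleted after infection, positive = enriched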
Potential Bacterial Metabolic Function and Fungal Functional Guilds of Healthy and Infected Samples
A Kyoto Encyclopedia of Genes and Genomes pathway analysis was used to predict the potential functional profiles of the bacterial communities in healthy and infected strawberry samples in PICRUSt. The results indicate that most of the predicted protein sequences in the strawberry samples were clustered into metabolism (74.35%), environmental information processing (8.59%), cellular processes (5.85%), and genetic information processing (5.06%). A total of eleven, three, and two pathways were identified for metabolism, genetic information processing, and environmental information processing and cellular processes, respectively (Figure 5A). The relative abundance of the sequences related to amino acid metabolism and lipid metabolism was significantly lower in the infected stems (IS) and leaves (IL) than in the healthy stems (HS) and leaves (HL), indicating that wilt decreased the degradation of these complex compounds. By contrast, the relative abundance of sequences related to membrane transport was higher in the IS and IL than in the HS and HL. In addition, the relative abundance of the metabolism of other amino acids, cellular community prokaryotes, and cell motility sequences was higher in the IL than in the HL, indicating that wilt increased the degradation of these complex compounds (Figure 5A).
OTUs assigned to a guild with a confidence ranking of "highly probable" or "probable" were retained in the analysis, whereas those with a confidence ranking of "possible" were regarded as unclassified [9]. After annotating the relative abundance of 18 fungal functional guilds, we discovered that the relative abundance of several functional guilds differed between the healthy and wilt-infected samples (Figure 5B). The relative abundance of plant pathogens in the infected root, stem, and leaf samples was higher than that in the healthy root, stem, and leaf samples (roots and stems: p < 0.01, Student's t test). Because the guild assignment of Fusarium carried only a "possible" confidence ranking, Fusarium itself was not counted in the plant pathogen guild abundance. However, the relative abundance of undefined saprotrophs in the infected root, stem, and leaf samples was lower than that in the healthy root, stem, and leaf samples (roots and leaves: p < 0.05, Student's t test). In addition, the relative abundance of endophytes in the infected root and stem samples was lower than that in the healthy root and stem samples (Figure 5B).
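The confidence-based filtering described above can be expressed compactly; the following sketch uses a hypothetical FUNGuild-style output table (the OTU IDs, guilds, and rankings are invented for illustration).

import pandas as pd

funguild = pd.DataFrame({
    "otu": ["OTU1", "OTU2", "OTU3", "OTU4"],
    "guild": ["Plant Pathogen", "Undefined Saprotroph", "Endophyte", "Plant Pathogen"],
    "confidence": ["Highly Probable", "Probable", "Possible", "Probable"],
})

# Retain 'Highly Probable' and 'Probable' assignments; treat 'Possible' as unclassified.
keep = funguild["confidence"].isin(["Highly Probable", "Probable"])
funguild.loc[~keep, "guild"] = "unclassified"
print(funguild)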
Differences in Fungal Isolation Taxa between Healthy and Infected Samples
Differences were observed between the results of the culture-dependent and culture-independent analyses used for determining the fungal community compositions of the healthy and infected samples (Figures 4B and 6A-C). In the culture-dependent analysis, 62 fungal strains were isolated from the healthy strawberry samples and 40 fungal strains were isolated from the infected strawberry samples, with F. oxysporum accounting for 64.52% and 97.50% of these isolates, respectively. Only one species (F. oxysporum) was identified in the roots, stems, and leaves of both the healthy and infected strawberry samples, and six, four, and six unique fungal species were identified in the roots, stems, and leaves, respectively, of the healthy strawberry samples (Figure 6A,B).
According to our results, the proportion of beneficial fungal genera, such as Trichoderma spp., Penicillium spp., and Talaromyces spp., in the strawberries with Fusarium wilt was significantly lower, whereas the proportion of F. oxysporum was significantly higher, reaching 100% in the stems and leaves (Figure 6B,C). The proportion of F. oxysporum in the infected strawberry roots, stems, and leaves was significantly higher than that in the healthy strawberry roots, stems, and leaves, which is consistent with the trend of change in Fusarium identified through ITS high-throughput sequencing (Figures 4C and 6C).
Discussion
Understanding the taxa and distribution of microbial communities in plants is crucial to the prevention and control of plant diseases [10,30,31]. Several species of Fusarium, particularly F. oxysporum, are well-known plant pathogens that result in wilting and economic losses in various plants [32][33][34]. In this study, our results indicated that, according to the Shannon and Chao1 indices, the diversity of the bacterial and fungal communities in the infected roots, stems, and leaves of the strawberry plants was lower than that in the healthy samples (Figure 1A-D), presumably because the plants' vascular tissues were disrupted after infection with Fusarium wilt. This process led to the absolute dominance of F. oxysporum, with increased colonization, which altered the composition and distribution of the plants' microbiome. These results are consistent with those of previous studies, which indicated that the diversity of bacterial and fungal communities in the rhizosphere soil surrounding healthy strawberry plants was higher than that in powdery-mildew-infected samples [29]. Therefore, the high diversity of endophytic bacteria and fungi observed in healthy strawberry plants may play a key role in initiating host defenses against pathogen invasion, which may in turn increase the host's resistance to pathogen invasion [35,36].
Overall, our results reveal a strong variation in the microbial taxonomic composition of the roots, stems, and leaves of healthy strawberry plants and Fusarium wilt strawberry plants. Alphaproteobacteria were more abundant in infected roots than in healthy roots, whereas Gammaproteobacteria were more abundant in infected stems and leaves than in healthy stems and leaves. The relative abundance of Firmicutes, Actinobacteria, and Chloroflexi was lower in infected roots, stems, and leaves than in healthy roots, stems, and leaves (Figure 3A). Multiple studies have indicated that Chloroflexi, Firmicutes, Actinobacteria, and Proteobacteria are associated with disease suppression [37][38][39]. In the current study, Sordariomycetes were more abundant in infected roots, stems, and leaves than in healthy samples, whereas Basidiomycota were less abundant in infected roots, stems, and leaves than in healthy samples (Figure 3B). These results are consistent with previous findings indicating that the relative abundance of Basidiomycota in the rhizosphere soil of healthy strawberry plants was higher than that in powdery-mildew-infected samples [29].
After infection with Fusarium wilt, the relative abundance of certain bacterial and fungal genera notably changed. For example, the relative abundance of Pelomonas, Sphingobium, Ralstonia, Comamonas, Methylophilus, Lactobacillus, Streptomyces, Bacillus, Exiguobacterium, Aquabacterium, and Bradyrhizobium decreased in the infected roots, stems, and leaves, whereas the relative abundance of Pseudomonas and Microbacterium exhibited the opposite trend (Figure 4A). These changes in the composition of different taxonomic groups were presumably due to Fusarium invasion. Generally, Lactobacillus, Streptomyces, Bacillus, Pseudomonas, and Microbacterium have the potential to prevent and control Fusarium pathogens [40][41][42][43][44][45][46]. The changes observed in the microbial composition of strawberry roots, stems, and leaves presumably reflect the plants' resistance to F. oxysporum invasion and expansion. In terms of fungi, we detected Fusarium in all samples, with the proportions in healthy roots and infected roots, healthy stems and infected stems, and healthy leaves and infected leaves being 65.04% and 82.48%, 44.32% and 98.87%, and 8.94% and 53.94%, respectively (Figure 4B,C). These findings are consistent with the fact that Fusarium is the most commonly isolated genus in plant samples [47,48]. We also detected a higher abundance of Fusarium in the infected plant samples, which is consistent with this study's field observations and culture isolation results for strawberry wilt (Figures 4C and 6B,C). In the infected samples, the relative abundance of Rhodotorula, Acremonium, Apiotrichum, Alternaria, Debaryomyces, Cadophora, and Sarocladium exhibited a decreasing trend, presumably because Fusarium occupied a highly favorable ecological niche in the infected samples. The endophytic Fusarium present in the healthy samples may represent a latent pathogen that becomes infectious when conditions allow, or it may be non-pathogenic.
According to the PICRUSt results regarding bacterial metabolic function, most of the predicted protein sequences in the strawberry samples were clustered into metabolism (74.35%), environmental information processing (8.59%), cellular processes (5.85%), and genetic information processing (5.06%). In the metabolism cluster, global and overview maps, carbohydrate metabolism, and amino acid metabolism served as the primary pathways between healthy and infected strawberries. Compared with those observed in the roots, the bacterial function changes observed in the stems and leaves were more significant after infection with Fusarium wilt. The relative abundances of sequences related to amino acid metabolism and lipid metabolism were significantly lower in the infected stems and leaves than in the healthy stems and leaves (Figure 5A). A proteomic analysis revealed that the volatile organic compounds produced by the biocontrol strain Bacillus amyloliquefaciens SQR-9 reduced the carbohydrate and amino acid metabolism of the tomato wilt pathogen Ralstonia solanacearum [49]. Moreover, the exogenous addition of mannitol and trehalose increased the production of chlamydospores of the biocontrol fungal strain T. harzianum T4 and enhanced their stress resistance by regulating lipid metabolism, indicating that lipid metabolism is an essential component of chlamydospore production and affects the stress resistance of chlamydospores [50]. These changes observed in the amino acid and lipid metabolism pathways may be related to the resistance of bacterial communities in strawberry stems and leaves to Fusarium infection.
In terms of the FUNGuild function prediction, a confidence level of "probable" or "highly probable" was required to ensure prediction accuracy, whereas OTUs with a confidence level of "possible", such as Fusarium, were excluded. After these criteria were applied, the relative abundance of plant pathogens remained higher in the infected root, stem, and leaf samples than in the healthy root, stem, and leaf samples (Figure 5B). These results indicate that Fusarium infection may result in the accumulation of other plant pathogens in infected plants, presumably because of the destruction of the plant's immune system due to Fusarium infection, which enables the entry of other pathogens into the plant tissues. For example, the co-inoculation of Fusarium and Phytophthora sojae into soybeans may increase the rate of infection with P. sojae [51].
After tissue separation, we discovered that the proportion of F. oxysporum in the infected strawberry roots, stems, and leaves was significantly higher than that in the healthy samples. We also discovered that the proportion of beneficial fungal genera, such as Trichoderma spp., Penicillium spp., and Talaromyces spp., in the strawberries with Fusarium wilt significantly decreased (Figure 6B,C). Previous studies have shown that several isolates of these fungal genera have been investigated as potential antagonistic agents against Fusarium pathogens [43,46,52]. These findings are consistent with the reduced diversity in the infected samples compared with that in the healthy samples, as detected using ITS high-throughput sequencing (Figure 1B). They are also consistent with previous research indicating that infection with plant pathogens may lead to changes in the endophytic community, thus resulting in a decrease in microbial diversity [53,54]. These changes may be due to the dominance of plant pathogens in plant tissues, whose presence may prevent the recruitment of beneficial microbes [9].
It should be mentioned that F. oxysporum is a fungal pathogen that produces high levels of mycotoxins, mainly including fusaric acid (FA), moniliformin, and fusarins [55][56][57][58]. These mycotoxins are suspected to be potent pathogenicity factors in plant disease development, inhibiting plant growth and causing plant wilt [57,59]. The infected strawberry plants contained a large amount of F. oxysporum, which may produce high levels of mycotoxins that could have a strong impact on the fungal and bacterial communities of the strawberry plant.
Sample Collection
On 29 June 2021, strawberry samples were collected using random sampling during the seedling stage from a strawberry greenhouse (temperature: 26-28 °C, humidity: 60-80%, and plant spacing: 50 cm) located in Jiangsu Agricultural Expo Park, Jiangsu Province, China (32°02′ N, 119°26′ E). These strawberry samples belonged to the cultivar Beni Hoppe, which had been continuously planted for 2 years. A total of 60 samples were collected, including 10 root samples of healthy strawberries (HR), 10 root samples of wilt-infected strawberries (IR), 10 stem samples of healthy strawberries (HS), 10 stem samples of wilt-infected strawberries (IS), 10 leaf samples of healthy strawberries (HL), and 10 leaf samples of wilt-infected strawberries (IL). Each sample was separately collected, placed in a sterile plastic bag, and transported on ice to a laboratory.
Sample Preparation and DNA Extraction
Briefly, all strawberries were rinsed with running water to remove soil residue and dust. Subsequently, the roots, stems, and leaves of each strawberry plant were cut off and evenly mixed, and 2 g of each sample was randomly weighed for surface disinfection. To ensure the removal of all epiphytic microbes, 100 mL of sterile water and two drops of Tween 20 were added, and the mixture was shaken at 220 rpm at 25 °C for 20 min; treated with sterile water for 20 s, 70% (v/v) ethanol for 30 s, and 2.5% (v/v) sodium hypochlorite solution for 2 min; and finally rinsed with sterile water three or four times. Each part of the root, stem, and leaf was further cut into shorter segments (0.25 cm), and each sample was divided into two portions for culture-dependent and culture-independent analysis, as described in the following: In the culture-dependent analysis of fungi, five segments of each sample were randomly selected and placed on a plate containing potato dextrose agar (PDA) with three replicates and were cultured at 25 °C. Ampicillin (50 mg/L) and rifampicin (50 mg/L) were added to all media in advance to inhibit the growth of bacteria. After 3-5 days, when a mycelium emerged from the tissue block, a small piece of medium from the colony edge was carefully transferred together with the mycelium to a new plate containing PDA. On the basis of the type of mycelia, colony color, and growth rate, these pure fungal cultures were preliminarily divided into morphological taxa. After 7 days of growth, mycelial DNA was extracted using a DNAsecure Plant Kit (Tiangen, Beijing, China) as per the manufacturer's instructions. Subsequently, PCR amplification was performed using the primers ITS1 and ITS4 in accordance with previously outlined protocols [60] (Table S2). We also amplified the translation elongation factor 1-α (EF1-α/TEF1) and β-tubulin (BenA) genes to confirm the species identity of Fusarium [61], Trichoderma [62], and Talaromyces isolates [63], respectively (Table S2). The PCR reaction system and amplification program were carried out according to the established protocols [60]. After the PCR products were sequenced, their sequences were compared against the NCBI database (BLASTN, http://www.ncbi.nlm.nih.gov (accessed on 25 February 2022)) for species identification.
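The BLASTN comparison against the NCBI database can also be scripted; a minimal sketch using Biopython's web BLAST interface is given below. Note that this performs a live network query, and the sequence shown is a short placeholder, not one of the study's isolates.

from Bio.Blast import NCBIWWW, NCBIXML

its_sequence = "TTGGTCATTTAGAGGAAGTAAAAGTCGTAACAAGGTTTCC"  # placeholder ITS fragment

handle = NCBIWWW.qblast("blastn", "nt", its_sequence)  # remote BLASTN against nt
record = NCBIXML.read(handle)
best = record.alignments[0]
print(best.title)           # description of the top hit
print(best.hsps[0].expect)  # E-value of its best HSP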
In the culture-independent analysis method, each remaining sample was placed in a sterilization mortar, soaked with liquid nitrogen, and ground with a pestle. DNA was subsequently extracted using the aforementioned kit in accordance with the manufacturer's instructions. The DNA concentration and purity were quantified using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA).
We estimated the fungal and bacterial community diversity (Shannon's and Simpson's indices) and community richness (Chao1, Ace, and Sobs indices) in the healthy and wilt-infected strawberry roots, stems, and leaves using mothur v.1.30.2 (http://www.mothur.org/wiki/Calculators (accessed on 25 February 2022)) [74]. A principal coordinates analysis (PCoA) of the Bray-Curtis distances was performed with the vegan package in R v.3.3.1. ANOSIMs based on the Bray-Curtis distances were performed to evaluate the significant differences between healthy and wilt-infected strawberries and compartments via the vegan package of R v.3.3.1.
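The PCoA-plus-ANOSIM workflow can be sketched end-to-end in Python with scikit-bio (the study itself used mothur and R's vegan, as stated above). All counts, sample IDs, and group labels below are invented for illustration.

import numpy as np
from skbio import DistanceMatrix
from skbio.stats.ordination import pcoa
from skbio.stats.distance import anosim

# Toy OTU table: rows = samples (3 healthy, 3 infected), columns = OTUs.
counts = np.array([
    [30, 10, 5, 0], [28, 12, 4, 1], [31, 9, 6, 0],
    [5, 2, 1, 40], [6, 3, 0, 38], [4, 1, 2, 41],
], dtype=float)

def bray_curtis(u, v):
    # Bray-Curtis dissimilarity: sum|u_i - v_i| / sum(u_i + v_i).
    return np.abs(u - v).sum() / (u + v).sum()

n = len(counts)
d = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)] for i in range(n)])

dm = DistanceMatrix(d, ids=["H1", "H2", "H3", "I1", "I2", "I3"])
ordination = pcoa(dm)  # principal coordinates (ordination.samples holds the axes)
result = anosim(dm, grouping=["healthy"] * 3 + ["infected"] * 3, permutations=999)
print(result["test statistic"], result["p-value"])  # ANOSIM R and p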
Statistical Analysis
Student's t test was used to compare the alpha diversity indices, the relative abundance of bacterial and fungal taxa (phylum and genus), and the relative abundance of bacterial metabolic function and fungal functional guilds in the healthy and infected roots, stems, and leaves (p < 0.05). All statistical analyses were conducted using IBM SPSS Statistics v.20.0 (IBM, Armonk, NY, USA).
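The per-compartment comparisons reported throughout correspond to independent-samples t tests; an equivalent computation in Python would look as follows (the index values below are invented for illustration).

import numpy as np
from scipy import stats

# Hypothetical Shannon indices for healthy vs infected stems (n = 10 each).
healthy = np.array([3.1, 3.4, 3.2, 3.5, 3.3, 3.0, 3.6, 3.2, 3.4, 3.1])
infected = np.array([2.4, 2.6, 2.2, 2.8, 2.5, 2.3, 2.7, 2.4, 2.6, 2.5])

t, p = stats.ttest_ind(healthy, infected)
print(f"t = {t:.2f}, p = {p:.4f}")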
Conclusions
In this study, we investigated the changes that occur in the microbial communities of strawberry plants after infection with Fusarium. Our results reveal significant variations in the bacterial and fungal communities in the roots, stems, and leaves of healthy and infected samples. After infection with Fusarium, F. oxysporum rapidly occupied the ecological niche of the strawberry plants, resulting in significant changes in the composition of the microbial communities in the roots, stems, and leaves. The diversity of the bacterial and fungal communities decreased, and the number of beneficial microorganisms within the plants also decreased, thereby enabling other plant pathogens to enter the plants. Overall, our findings can serve as a theoretical basis and biocontrol resource for the prevention and control of Fusarium wilt in strawberry plants.
Supplementary Materials:
The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/plants12244153/s1: Figure S1: PCoA based on Bray-Curtis differential analysis indicating effect of Fusarium infection on composition of bacterial (A) and fungal (B) communities in strawberry roots, stems, and leaves. ANOSIM conducted to test for differences in community composition resulting from Fusarium infection. R values labeled with asterisks: * p < 0.05; Table S1: Alpha diversity of bacteria and fungi in healthy and infected strawberry roots, stems, and leaves. Values presented as means ± standard deviations for healthy and infected strawberry roots, stems, and leaves (n = 10). Uppercase letters indicate the p < 0.01 level and lowercase letters the p < 0.05 level; different letters represent significant differences according to Student's t test; Table S2: The primers used in this study.
Author Contributions: Data curation, methodology, formal analysis, software, writing-original draft, H.Y.; data curation, methodology, writing-review and editing, X.Z.; data curation, methodology, X.S.; writing-review and editing, X.Q. and J.C.; resources, funding acquisition, Y.W. and Z.Y.; writing-review and editing, W.Y., G.Z. and S.J. All authors have read and agreed to the published version of the manuscript.
Figure 1 .
Figure 1. Alpha diversity of bacterial (A,B) and fungal (C,D) communities in the roots, stems, and leaves of healthy strawberry plants and strawberries with Fusarium wilt. Diversity estimated using Shannon's index (A,C) and Chao1 index (B,D). Statistical analysis conducted using Student's t test (* p < 0.05, ** p < 0.01, and *** p < 0.001). HR: root samples of healthy strawberry plants; IR: root samples of infected strawberry plants; HS: stem samples of healthy strawberry plants; IS: stem samples of infected strawberry plants; HL: leaf samples of healthy strawberry plants; IL: leaf samples of infected strawberry plants.
Figure 2 .
Figure 2. Principal coordinate analysis (PCoA) of bacterial (A) and fungal (B) communities in healthy strawberry plants and strawberries with Fusarium wilt, with Bray-Curtis dissimilarities. Analysis of similarities (ANOSIM) conducted to test for differences in community composition resulting from compartment and health status. R values are presented and labeled with asterisks: ** p < 0.01 and *** p < 0.001.
Figure 3 .
Figure 3. Relative abundance of most abundant (>0.1%) bacterial phyla and Proteobacteria classes (−) (A) and fungal phyla and Ascomycetes classes (−) (B) in each compartment between healthy strawberries and strawberries with Fusarium wilt. Student's t test used to identify significant differences between healthy strawberries and strawberries with Fusarium wilt (* p < 0.05, ** p < 0.01, and *** p < 0.001). HR: root samples of healthy strawberry plants; IR: root samples of infected strawberry plants; HS: stem samples of healthy strawberry plants; IS: stem samples of infected strawberry plants; HL: leaf samples of healthy strawberry plants; IL: leaf samples of infected strawberry plants.
Figure 4 .
Figure 4. Relative abundance of dominant bacterial (A) and fungal (B) genera in roots, stems, and leaves of healthy strawberries and strawberries with Fusarium wilt (10 replicates). Figure depicts bacterial and fungal genera with relative abundance of >0.3%. Cell colors represent log2 fold change in relative abundance compared with control treatment, with brown indicating increasing trend and cyan indicating decreasing trend. (C) Relative abundance of the fungal genus Fusarium. Student's t test revealed significant differences in relative abundance of bacterial (A) and fungal (B) genera and Fusarium (C) (* p < 0.05, ** p < 0.01, and *** p < 0.001) between healthy strawberries and strawberries with Fusarium wilt (n = 10). HR: root samples of healthy strawberry plants; IR: root samples of infected strawberry plants; HS: stem samples of healthy strawberry plants; IS: stem samples of infected strawberry plants; HL: leaf samples of healthy strawberry plants; IL: leaf samples of infected strawberry plants.
Figure 5 .
Figure 5. Relative abundance of bacterial (A) and fungal (B) predicted functional groups (guilds) in healthy and infected roots, stems, and leaves inferred using PICRUSt2 and FUNGuild, respectively. Student's t test revealed significant differences in relative abundance of bacterial metabolic function and fungal functional guilds (* p < 0.05, ** p < 0.01, and *** p < 0.001) between healthy strawberries and strawberries with Fusarium wilt (n = 10). HR: root samples of healthy strawberry plants; IR: root samples of infected strawberry plants; HS: stem samples of healthy strawberry plants; IS: stem samples of infected strawberry plants; HL: leaf samples of healthy strawberry plants; IL: leaf samples of infected strawberry plants.
Figure 6 .
Figure 6. (A) Venn diagram of common and unique fungal species isolated using tissue separation from healthy and infected roots, stems, and leaves. (B) Relative abundance of the fungal species Fusarium oxysporum. (C) Relative abundance of fungal species isolated from healthy and infected roots, stems, and leaves. HR: root samples of healthy strawberry plants; IR: root samples of infected strawberry plants; HS: stem samples of healthy strawberry plants; IS: stem samples of infected strawberry plants; HL: leaf samples of healthy strawberry plants; IL: leaf samples of infected strawberry plants.
Funding:
This work was supported by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (23KJB210009; 21KJB210012), the Science Fund of the Jiangsu Vocational College of Agriculture and Forestry (2021kj22; 2021kj39), the Yafu Technology Innovation and Service Major Project (2023kj02), and the Unveiling and Leading Projects (2022kj05) of the Jiangsu Vocational College of Agriculture and Forestry. | 2023-12-16T16:38:35.847Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "5a3161a264895c156ea63736e1ab6ac782df8216",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/12/24/4153/pdf?version=1702473876",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2800602f6c43bee70c026c2400588a78205a8e1",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195887194 | pes2o/s2orc | v3-fos-license | Disentangling the effects of reward value and probability on anticipatory event-related potentials
Optimal decision-making requires humans to predict the value and probability of prospective (rewarding) outcomes. The aim of the present study was to evaluate and dissociate the cortical mechanisms activated by information on an upcoming potentially rewarded target stimulus with varying probabilities. Electro-cortical activity was recorded during a cued Go/NoGo experiment, during which cue letters signaled upcoming target letters to which participants had to respond. The probability of target letter appearance after the cue letter and the amount of money that could be won for correct and fast responses were orthogonally manipulated across four task blocks. As expected, reward availability affected a prefrontally distributed reward-related positivity, and a centrally distributed P300-like event-related potential (ERP). Moreover, a late prefrontally distributed ERP was affected by probability information. These results show that information on value and probability, respectively, activates separate mechanisms in the cortex. These results contribute to a further understanding of the neural underpinnings of normal and abnormal reward processing.
Introduction
Optimal decision-making requires humans to predict the value and probability of prospective (rewarding) outcomes (Glimcher and Rustichini, 2004). These predictions have direct implications for subsequent behavior, which is based upon cortical activity. In the present study we evaluate the cortical mechanisms activated in a context of anticipating reward with varying probabilities. We furthermore investigate the extent to which these cortical activations interact and to which they are independent.
Neuroimaging studies have investigated the representation of anticipated subjective value (or reward) in the human brain. Parts of the ventro-medial prefrontal cortex including the medial orbito-frontal cortex (mOFC) and the rostral anterior cingulate cortex (rACC) (Breiter et al., 2001; Howard et al., 2015; Kable and Glimcher, 2007; Padmala and Pessoa, 2011; Smith et al., 2009), as well as more posterior regions of the cingulate cortex (Kable and Glimcher, 2007; Kirsch et al., 2003; Padmala and Pessoa, 2011; Smith et al., 2009) show activity during the anticipation of reward. These cortical regions act in concert with subcortical structures, such as the ventral striatum and midbrain (e.g. Breiter et al., 2001; Knutson et al., 2005; Smith et al., 2009; Yacubian et al., 2006; for reviews see: Haber and Knutson, 2010; O'Doherty, 2004; Rushworth and Behrens, 2008). Some of these studies also investigated the effect of increasing the anticipated probability of a reward and reported affected regions in the posterior cingulate cortex (PCC) (Knutson et al., 2005) and medial prefrontal cortex (Knutson et al., 2005; Yacubian et al., 2006), as well as in the dorsal ACC (dACC) (Smith et al., 2009).
Results of the neuroimaging study by Knutson and colleagues (2005) indicate that the dACC may integrate reward and probability information. However, in that study as well in the other neuroimaging studies discussed above, main effects of reward value and probability manipulations abound as well. This prompts the crucial question of the temporal relations between activity related to either the value or probability manipulation on the one hand, and activity related to their interaction on the other. The event-related potential (ERP) technique has a much higher temporal resolution compared to neuroimaging. It is therefore much better suited in temporally separating sequential neural activities within the brief time windows that characterize real-life decision-making.
Recently, a number of studies have investigated the effects of reward value (but not so much probability) on ERPs during outcome anticipation. It was found that reward cues (highly) predictive of an upcoming monetary reward elicit a larger positivity (monetary incentive delay task: Donamayor et al., 2012; Flores et al., 2015; passive gambling task: Holroyd et al., 2011; Yu and Zhou, 2006) than cues (highly) predictive of no reward or a loss, with a latency between 200 and 300 ms after the reward-announcing cue and a fronto-central scalp distribution. A reward-sensitive ERP with a similar latency was observed during choice presentation (so before the feedback stage) in a gambling task after subjects learned which of the choice options yielded reward (Krigolson, Hassall and Handy, 2014). This reward-related positivity has been labelled "reward positivity" (Holroyd et al., 2011). It is observed not only in response to reward-predicting cues but also in response to reward delivery (Holroyd et al., 2011). Especially in the latter context it has also been described as a mirror inverse of the feedback-related negativity (FRN; Krigolson, 2018; Proudfit, 2015). Also in the context of negative feedback and errors, cues predicting errors or non-reward have been observed to elicit a larger FRN/error-related negativity (ERN) relative to cues predicting correct responses or reward (Baker and Holroyd, 2009; Krigolson and Holroyd, 2007).
With respect to probability, a prior study by our group (Bekker et al., 2004) showed that cues highly predictive of an upcoming target, and therefore containing highly relevant information, elicit a larger posterior P300 ERP than cues less predictive of an upcoming target. This finding is in line with a large body of research showing sensitivity of the P300 to internal updating of the subjective probability of relevant events or outcomes (Duncan-Johnson and Donchin, 1982). In another study, probability was manipulated within a reward-anticipation context (Yu et al., 2011). Here, cues signaling a 100% certain future reward elicited a smaller FRN/larger RRP, relative to cues signaling less than 100% certainty (ranging from 0 to 87.5%).
ERP studies investigating the effect of both reward value and probability manipulations during reward anticipation are scarce. Furthermore, it remains to be determined whether ERPs elicited by reward value and probability manipulations are dissociable. In the current study, reward value and probability were orthogonally manipulated across four task blocks during a cued Go/NoGo experiment (CGN task) (adapted from Bekker et al., 2004) in which cues signaled upcoming targets to which participants had to respond. Unlike paradigms in which performance was based on choices between different reward-probability conditions (e.g. Krigolson et al., 2014;Smith et al., 2009;Yacubian et al., 2006), in the present study cued reward value and probability information was decoupled from task requirements such as choosing a response. This enabled us to isolate specific activations related to reward value and probability from those of task requirements. For example, when a certain response choice is directed at an option of a certain reward being obtained with a certain probability, the probability level directly affects the expected reward value. In our design, probability only concerns whether the action will have to be performed at all. This contributes to strong orthogonality, while at the same time reward obtainment is still dependent on producing the adequate response. This latter feature is important at least with respect to the FRN/ERN (Yeung et al., 2005).
The main aims of the current study were: (1) to gain understanding of the temporal profile of cortical mechanisms activated in a reward anticipation context; (2) to investigate the extent to which activations related to reward value and probability manipulations interact when both are completely orthogonally manipulated. Our main focus was on anticipatory activity within a 180-500 ms post cue window (see Methods section). 1 Specifically, we tested the hypothesis that reward value affects frontal ERP activity early in time. This ERP activity could be associated with the processing of reward itself, or more with indirect effects of reward on attention networks (Corbetta and Shulman, 2002). Probability manipulations were expected to specifically affect parietal ERP activity later in time (P300) (Bekker et al., 2004; Holroyd et al., 2011; Donamayor et al., 2012; Flores et al., 2015). We also anticipated the possibility of an interaction between reward value and probability. Reward value could have an additive effect on the probability P300 ERP, given that numerous studies have shown that the P300 is also sensitive to reward outcome (e.g. Yeung and Sanfey, 2004) and reward anticipation (Broyd et al., 2012; Flores et al., 2015; Pfabigan et al., 2014). Alternatively, reward value and probability could interact like in the Knutson et al. (2005) study. In this scenario, reward announcing cues were expected to elicit more ERP activity than no reward cues, but only when they also predict high probability of an upcoming target.
It should be noted that while our design ensures orthogonal manipulation at block level of reward value and probability, this does not hold at the level of single trials, as cues indicating high-probability rewarded targets also cue low single-target rewards. An 'adaptive scaling' (see Walsh and Anderson, 2012) perspective predicts a response to low single-target reward cues (cueing 98% probability reward), when the low value is the only available option (in addition to no reward, throughout a block of trials), that is identical to the response to high single-target reward cues (cueing 50% probability reward) when the high value is the only available option. From this perspective, reward values during different probability conditions could be validly compared. To assess the extent to which this perspective is tenable, we performed additional analyses using specific contrasts to isolate 'pure' reward and probability effects (see Methods-statistical analysis).
Subjects
Forty-nine healthy subjects participated in the experiment. Participants were recruited via advertisement at the campus of Utrecht University. None of the subjects had a history of psychiatric or neurologic disorders and none of the subjects used psychoactive medication. Participants were requested to abstain from consuming caffeine and smoking for at least 12 h prior to participation and were requested to refrain from drugs for at least 2 weeks prior to participation. All participants declared to have normal or corrected-to-normal vision. The study was approved by the medical ethical committee of the University Medical Centre Utrecht and subjects gave written informed consent prior to participation. Participants received 6 Euros per hour or received study credits instead, and additionally received a monetary bonus with a maximum of 10 euros. The monetary bonus was dependent on task performance (see Cued Go/NoGo task). ERP data of 1 participant were not stored due to a technical issue. Furthermore, 3 participants were excluded during analysis, because too few segments were left in one or more conditions for multiple neighboring electrodes (see data processing). Therefore the final sample consisted of 45 participants (mean age (SD) = 23.9 (4.2) years, 34 females, 43 right-handed).
Procedure
This experiment was part of a larger study (3 sessions on separate days) on the effect of reward and target probability (anticipation) on various aspects of behavior, and of psycho-and neurophysiology. Participants were informed about the experimental procedure and signed the informed consent form during the first session. Half of the subjects completed the cued Go/NoGo (CGN) task during the second session and the other half during the third session.
The CGN task session started with placement of cap and electrodes. Participants were seated in a chair 1 m in front of a computer screen in a dimly lit room adjacent to the control room with the chin placed on a chin-rest. Participants fixated on the center of the screen and the chair was adjusted to a comfortable height accordingly. Task instructions were given and the CGN task started subsequently, which lasted about 1 h. EEG was recorded during the task. Five subjects completed a spatial cuing task before (3 subjects) or after (2 subjects) the CGN task. The other participants were not subjected to other tasks during the CGN task-session. At the end of the test session the cap and electrodes were removed and participants were paid and dismissed.
Cued Go/NoGo task
The CGN task was controlled by Presentation® software (version 16.0, www.neurobs.com). During the CGN task (adapted from Bekker et al., 2004), the letters A, C, D, E, F, G, H, J, L, X and Y were presented in the center of a 16 inch Dell CRT screen (resolution: 1280 × 1024) in black on a grey background between two vertical bars (height: 1.03°, width: 0.05°). Letters were presented in Arial font, size 79. The letter stimuli were presented for 150 ms and were interleaved by inter-stimulus intervals with a random duration between 1400 and 1600 ms.
The task is illustrated in Fig. 1. Participants were instructed to press the left button with the left index finger when letter X followed letter A and to press the right button with the right index finger when letter Y followed letter A, as fast and accurately as possible. This mapping was reversed for half of the participants. Responses were made on a qwerty keyboard 2 on which all keys were covered by a plastic sheet, except for the "z" key, "/" key and the spacebar. The "z" and "/" key were the left and right target button, respectively, and the symbols on these buttons were covered by a white sticker.
Target probability was either 50% (total of 40 targets, see below) or 98% (78 targets) and either no money or a maximum of 5 Euros in total could be won during each block. Participants were fully informed about the probability and the reward levels for a block before the start of that block. Note that although we manipulated target probability (i.e., probability of X or Y following letter A) and target reward value, we were specifically interested in the electrocortical aspects of reward and probability anticipation as they occur after the cue (i.e., letter A) but before the target (letter X or Y).
The amount of money won in a reward block was calculated by multiplying the percentage of correct and timely responses by 12.5 eurocents in the 50% target probability blocks (5 Euros divided by 40 targets) and by 6.4 eurocents in the 98% target probability blocks (5 Euros divided by 78 targets).
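The bonus rule is easy to state in code. The following is a minimal sketch for illustration only (not the original Presentation script; the function name is ours):

```python
def block_bonus(pct_correct_timely: float, n_targets: int,
                max_bonus_eur: float = 5.0) -> float:
    """Bonus = number of correct, timely responses x per-trial value."""
    per_trial = max_bonus_eur / n_targets  # 0.125 EUR (40 targets) or ~0.064 EUR (78)
    return pct_correct_timely * n_targets * per_trial

# A participant responding correctly and in time to 95% of targets earns
# the same bonus in both kinds of reward block:
print(block_bonus(0.95, 40))  # 4.75 EUR (50% target probability block)
print(block_bonus(0.95, 78))  # 4.75 EUR (98% target probability block)
```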
The task started with 100 practice trials (letters). The practice block always consisted of the reward-98% target probability condition. The main task consisted of four blocks of 400 trials (letters), comprising the four conditions. Participants were informed about the target probability and the total amount of money that could be won during the block at the beginning of each block. Order of the four blocks was counterbalanced across participants. One-minute rest breaks were provided halfway through each block and participants were reminded of the target probability and reward availability of the current block after the rest breaks. One-minute rest breaks were also provided between blocks.
The cue (A) appeared 80 times during each block. In the 50% target probability blocks, 20 cues were followed by target X and 20 cues were followed by target Y. Forty cues were not followed by a target, and 20 X's and 20 Y's were not preceded by a cue. In the 98% target probability blocks, 39 cues were followed by target X and 39 cues were followed by target Y. Two cues were not followed by a target, and 1 X and 1 Y were not preceded by a cue. Each of the other letters appeared 20 times in each block, except for letters H and C, which appeared more often (80 and 40 times, respectively), in order to control for frequency differences between the letter stimuli and cues/targets (Bekker et al., 2004). Letter stimuli were presented in a pseudo-random order within each block, with the following restrictions: (1) stimuli were never directly followed by identical stimuli; (2) cues followed by targets (A-X or A-Y) and cues not followed by targets ("NoGo": A not followed by X or Y) were always followed by at least one "nocue" (i.e., C, D, E, F, G, H, J or L not preceded by a cue). Only cue- and nocue-related ERP activity was analyzed. Note that nocues were not associated with reward or probability information. Nocue-related activity was used as a baseline for non-specific effects, as the context of reward/high probability within the current block may sensitize processing. Subtraction of nocue from cue-related activity therefore yields ERP activity specifically related to the temporally specific information on reward and probability (i.e., the cue A signaling that, with 50% or 98% probability, a reward adding up to a maximum of 5 Euros could be earned in the reward condition, or 0 Euros in the no-reward condition). A sketch of this trial-list construction is given below.
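The following sketch reconstructs the pseudo-randomization for one 50%-probability block. This is our own reconstruction of the assumed logic (the original trial lists were generated for Presentation), and restriction 2 is applied in a slightly stricter form here, requiring a nocue letter after every cue or target:

```python
import random

FILLERS = list("CDEFGHJL")  # "nocue" letters

def block_units():
    """Multiset of stimulus units for one 50%-probability block (400 letters)."""
    units = ([("A", "X")] * 20 + [("A", "Y")] * 20   # cues followed by targets
             + [("A",)] * 40                          # NoGo cues
             + [("X",)] * 20 + [("Y",)] * 20)         # targets without a cue
    units += [(c,) for c in FILLERS for _ in range(20)]
    units += [("H",)] * 60 + [("C",)] * 20            # H: 80 total, C: 40 total
    return units

def valid_next(seq, unit):
    last = seq[-1] if seq else None
    if unit[0] == last:                               # restriction 1: no repeats
        return False
    if last in ("A", "X", "Y") and unit[0] not in FILLERS:
        return False                                  # restriction 2 (strict form)
    return True

def make_block(seed=0, max_restarts=100):
    rng = random.Random(seed)
    for _ in range(max_restarts):
        pool, seq = block_units(), []
        while pool:
            options = [u for u in pool if valid_next(seq, u)]
            if not options:
                break                                 # dead end: restart
            unit = rng.choice(options)
            pool.remove(unit)
            seq.extend(unit)
        if not pool:
            return seq
    raise RuntimeError("no valid sequence found")

letters = make_block()
assert len(letters) == 400
```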
To briefly recapitulate our hypotheses: we expected cues to elicit a reward-related positivity (RRP) and P300, relative to nocues (these were either C, D, E, F, G, H, J or L). Furthermore, we expected reward (vs. no reward) to enhance the RRP and P300, and high (vs. low) probability to enhance the P300. Cues and nocues were presented pseudo-randomly within a block of trials, and the resulting average nocue ERP served as the baseline to be subtracted from the average cue ERP. This was done to control for non-specific effects of reward and probability that would affect the response to any stimulus in a given block, including irrelevant probes. In total four of these blocks were presented, corresponding to the four conditions that resulted from a 2 × 2 design based on the orthogonal manipulation of the probability of target appearance (letter X or Y) after the cue (letter A), and the amount of money that could be won for correct and fast responses.
EEG data acquisition
ERP signals were recorded with the Active-Two system (Biosemi, Amsterdam, The Netherlands) with 64 Ag-AgCl electrodes. Recording electrodes were placed according to the 10/10 system. EOG electrodes were placed above and below the left eye and at the outer canthi of both eyes. EEG signals were online referenced to the Common Mode Sense/Driven Right Leg electrode. EEG data were sampled at 2048 Hz and online low-pass filtered (bandwidth DC to 400 Hz).
Behavioral data
Mean reaction times (RTs) for valid responses to the target (i.e., single responses within the time window 150-1500 ms after target onset) were calculated for each condition and each subject. Furthermore, the percentage of correct responses and the percentage of omissions were calculated for each condition and each subject. The percentage of commission errors to the NoGo stimulus (i.e., a non-target preceded by a cue) was calculated for each subject only for the 50% target probability blocks, because there were too few NoGo trials during the 98% target probability blocks.

Fig. 1. Overview of the cued Go/NoGo task. Letters were presented on the screen and participants were instructed to press a pre-specified button when letter X (target) followed letter A (cue), and when letter Y (target) followed letter A (cue). Four blocks of letter trials were presented, which differed in the amount of money that could be won for correct and fast responses (either 0 or 5 Euros maximally in total) and in the probability of target appearance after the cue (either 50% or 98%).
ERP data
ERP data collected during the CGN task were analyzed using Brainvision Analyzer 2.0 (Brain Products GmbH). Data were re-referenced to the average reference, filtered with a 30 Hz low-pass filter (24 dB/oct) and an additional 50 Hz notch filter, and re-sampled to 256 Hz. Data were segmented into windows from 100 ms before (no)cue onset until 1000 ms after (no)cue onset. Cue-locked segments with premature (< 150 ms) or late responses to the target (> 1500 ms), or with omissions, choice errors, or commission errors, were removed from further analyses. Ocular artifacts were corrected using the Gratton & Coles method (Gratton et al., 1983) and a baseline correction was applied subsequently using the 100 ms time window before cue onset. Channels were individually inspected for segments with artifacts using an automatic artifact rejection procedure (maximal allowed absolute difference between two values: 100 μV; lowest allowed activity within a 100 ms interval: 0.5 μV). On average, 1.8 (± 2.1)% of the data segments were lost due to the artifact rejection procedure.
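For readers working in Python, the pipeline above can be approximated with MNE-Python along the following lines. This is a hedged sketch, not the original Analyzer 2.0 workflow: the file name and trigger codes are hypothetical, and the Gratton & Coles ocular correction step is omitted for brevity.

```python
import mne

raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)  # Biosemi Active-Two file
raw.set_eeg_reference("average")              # re-reference to average reference
raw.filter(l_freq=None, h_freq=30.0)          # 30 Hz low-pass
raw.notch_filter(50.0)                        # 50 Hz notch
raw.resample(256)                             # re-sample to 256 Hz

events = mne.find_events(raw)                 # trigger channel assumed present
epochs = mne.Epochs(
    raw, events, event_id={"cue": 1, "nocue": 2},  # hypothetical trigger codes
    tmin=-0.1, tmax=1.0, baseline=(-0.1, 0.0),     # 100 ms pre-stimulus baseline
    reject=dict(eeg=100e-6),                       # max peak-to-peak: 100 uV
    flat=dict(eeg=0.5e-6),                         # min allowed activity: 0.5 uV
    preload=True,
)
# cue-minus-nocue difference wave, following the subtraction logic above
diff = mne.combine_evoked(
    [epochs["cue"].average(), epochs["nocue"].average()], weights=[1, -1]
)
```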
Channels with less than half of the segments left within a particular subject/condition were interpolated with a spherical splines method (Brainvision Analyzer 2.0, Brain Products GmbH) using the neighboring electrodes. Data of three subjects were removed from further analyses, because less than half of the cue- (< 40) or nocue-locked (< 100) segments were left in one or more conditions for multiple neighboring electrodes. For 12 subjects, data of one or more electrodes within one or more conditions were interpolated (see the table in section 2 of the Supplementary materials). For each subject and condition the average cue-nocue waveforms were computed from −100 to 700 ms around the cue. The border of the segment was set to 700 ms in order to limit the number of factors obtained with the PCA.
A principal component analysis (PCA) was conducted, as this technique allows the separation of possibly overlapping ERP components sensitive to reward, probability, or both, in terms of spatial distribution and timing. A temporo-spatial PCA was conducted following the guidelines by Dien (2012), using the ERP PCA toolkit version 2.63 (Dien, 2010). Promax rotation with Kaiser loading weighting was used for the initial temporal PCA and nine factors were retained based on a Scree plot (Cattell, 1966). Subsequently, a separate spatial PCA with Infomax rotation was conducted for each of the temporal factors. Five spatial factors were retained for each temporal factor based on the Scree plot averaged over all temporal factors. Both PCA steps were based on the covariance matrix. These steps yielded 45 temporo-spatial factor combinations (TFSF). Based on prior studies (see Introduction) we expected the reward and probability effects to be strongest surrounding the midline of the scalp. The effects were expected to emerge between approximately 180 and 500 ms post-cue onset. Based on recommendations by Dien (2012), PCA factors were selected for statistical analysis in case: (1) they explained more than 0.5% of the total variance, (2) the temporal loading peaked between 180 and 500 ms, and (3) the positive voltage was maximal around the midline electrodes. Eight of the 45 TFSF combinations met these criteria. Exploratory analyses were conducted for factors with a temporal loading outside the 180-500 ms post-cue window (i.e., within 0-180 ms or 500-700 ms post-cue), and for factors with a more lateral spatial distribution. This pertained to an additional 10 TFSF combinations.
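A bare-bones version of this two-step decomposition can be sketched as follows. Note that this is a simplification for illustration only: the Promax and Infomax rotations and the Scree-based retention of the ERP PCA Toolkit are not reproduced here, and the data array is a random stand-in.

```python
import numpy as np
from sklearn.decomposition import PCA

n_sub, n_cond, n_el, n_t = 45, 4, 64, 205        # -100..700 ms at 256 Hz ~ 205 samples
erp = np.random.randn(n_sub, n_cond, n_el, n_t)  # stand-in for cue-nocue waveforms

# Step 1: temporal PCA -- variables are time points,
# observations are subject x condition x electrode combinations
temporal = PCA(n_components=9)                   # 9 factors retained via Scree plot
scores_t = temporal.fit_transform(erp.reshape(-1, n_t))
scores_t = scores_t.reshape(n_sub, n_cond, n_el, 9)

# Step 2: a separate spatial PCA per temporal factor -- variables are electrodes
tfsf_scores = []
for tf in range(9):
    spatial = PCA(n_components=5)                # 5 spatial factors per temporal factor
    s = spatial.fit_transform(scores_t[..., tf].reshape(-1, n_el))
    tfsf_scores.append(s.reshape(n_sub, n_cond, 5))
# 9 x 5 = 45 temporo-spatial factor (TFSF) scores per subject and condition
```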
Standard ERP analysis
For comparability with earlier and future studies we supplement the PCA results with results of a standard ERP analysis. Grand-average ERP waveforms for selected midline electrodes are depicted in the results section. The methods and statistical results of the standard analysis are provided in the supplementary materials section.
2.6. Statistical analyses

2.6.1. Behavioral data

Repeated-measures ANOVAs (GLM, SPSS version 22) were run for RT, the percentage of correct responses, the percentage of commission errors to the NoGo stimulus, and the percentage of omissions, with reward availability (no reward, reward) and target probability (50%, 98%) as within-subject variables. For each contrast (i.e., reward-no reward, high-low probability, and reward effect at high probability-reward effect at low probability) deviation from normality was tested using Shapiro-Wilk tests. Non-parametric Wilcoxon signed-rank tests were conducted for those contrasts that deviated significantly from normality.
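In Python terms, the per-contrast decision rule reads roughly as follows (a sketch; the original analyses were run in SPSS, and for a two-level within-subject factor the paired t-test is equivalent to the corresponding ANOVA main effect):

```python
import numpy as np
from scipy import stats

def test_contrast(a, b, alpha=0.05):
    """a, b: per-subject scores in the two conditions being contrasted."""
    a, b = np.asarray(a), np.asarray(b)
    if stats.shapiro(a - b).pvalue < alpha:  # difference deviates from normality
        return stats.wilcoxon(a, b)          # non-parametric fallback
    return stats.ttest_rel(a, b)             # parametric paired test

# e.g. test_contrast(rt_reward, rt_no_reward) for the reward contrast on RT
```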
ERP data -PCA
Reward × probability ANOVAs (GLM, SPSS version 22) were run on the TFSF combinations using the average of a 20-ms window around the peak of the factor tested. Alpha was set at 0.05. For each contrast (i.e., reward-no reward, high-low probability, and reward effect at high probability-reward effect at low probability) deviation from normality was tested using Shapiro-Wilk tests. Non-parametric Wilcoxon signed-rank tests were conducted for those contrasts that deviated significantly from normality. The ten exploratory analyses were corrected for multiple comparisons using a Bonferroni correction. In addition, we analyzed the specific contrasts mentioned in the introduction (discussion of 'adaptive scaling'). The 'pure reward effect' was estimated from the contrast between the 50%-target-probability reward versus 50% no-reward conditions. The 'pure probability effect' was estimated from the contrast between the no-reward 98% versus no-reward 50%-target-probability conditions. Furthermore, in order to identify cortical activations sensitive to gradual increases in single-target reward value (which would NOT be predicted from the adaptive-scaling perspective), rather than just block-level reward versus no reward, we constructed a 3-level factor consisting of 50%-probability reward (high reward per trial) versus 98%-probability reward (low reward per trial) versus the average of the two no-reward conditions. This 'gradual-reward effect' was tested with a MANOVA (GLM, SPSS version 22).
Behavioral results
None of the contrasts, except the probability contrast for RT, was normally distributed. Reaction times to the targets were significantly shorter during the reward blocks (median (mdn) RT = 470 ms) compared to the no-reward blocks (mdn = 501 ms), Z = −3.20, p = .001, rank-biserial r (rrb) = 0.55. Reaction times were also significantly shorter during the 98% target probability blocks (mdn = 469 ms) compared to the 50% target probability blocks (mdn = 498 ms), F(1,44) = 32.41, p < .001, ηp² = .42. Furthermore, participants were more accurate during the reward blocks (mdn = 99.4%) compared to the no-reward blocks (mdn = 98.8%), Z = −3.05, p = .002, rrb = 0.62. There was no significant main effect of target probability for the percentage of correct responses. The percentage of omissions was greater during the no-reward blocks (mdn = 1.25%) compared to the reward blocks (mdn = 0.64%), Z = −2.21, p = .027, rrb = 0.43. There was no such difference between the high and low probability blocks. Participants rarely made commission errors to the NoGo stimulus, and there was no significant difference between the reward and no-reward blocks in the percentage of commission errors (both mdn = 0%), Z = −1, p = .317, rrb = 0.5.

ERPs: PCA analysis of the effects of reward and target probability

Fig. 2 shows superimposed ERPs from the four reward-probability conditions from selected midline electrode sites. The PCA yielded 8 temporo-spatial factors that met the pre-specified latency and medial-distribution criteria (see Methods paragraph 2.5.2). Table 1 provides an overview of the temporal and spatial distributions of the factors that met the pre-specified criteria and one additional factor that survived the Bonferroni correction. Fig. 3 displays the temporal loadings for the components with a significant effect of reward or probability. Fig. 4 displays the spatial loadings of these components. Fig. 5 summarizes the effects of reward and probability on ERP activity. TF1SF1 peaked at 439 ms after the cue and its positive maximum was localized at Pz. This factor was significantly more positive for the low compared to the high probability condition. The factor peak was slightly larger at electrode Fpz, where it was less negative for the high vs. low probability condition, F(1,44) = 4.12, p = .048, ηp² = .09. For the pure probability contrast (no-reward 50 vs. 98%) this effect was replicated, Z = −2.07, p = .038, rrb = 0.35 (the distribution of the pure probability contrast deviated significantly from normality; the outcome of the Wilcoxon signed-rank test is therefore reported instead).
TF1SF2 had the same peak latency (439 ms), but its maximum positivity was localized at Cz. It was significantly more positive for the reward compared to the no-reward blocks, Z = −2.61, p = .009, rrb = 0.45. This effect was replicated for the pure reward contrast (50% reward vs. no reward), t(44) = −2.74, p = .009, d = 0.41. The gradual-reward effect (50%-probability reward (high reward per trial) versus 98%-probability reward (low reward per trial) versus the average of the two no-reward conditions) was also significant for this component, F(2,43) = 3.41, p = .042, ηp² = .14 (note, however, that the 50% reward versus averaged no-reward contrast was not normally distributed; the Wilcoxon signed-rank test was therefore used as a follow-up test for this contrast). The gradual-reward effect reflected significant differences between both low and high per-trial reward versus no reward (t(44) = 2.61, p = .012, d = 0.39 and Z = −2.29, p = .022, rrb = 0.39, respectively), in the absence of a low versus high difference (p = .357). This is consistent with the adaptive-scaling perspective.
Additional exploratory analyses were conducted for 10 temporo-spatial factors with lateral spatial distributions and/or temporal loadings outside the 180-500 ms interval. Only TF2SF1 survived the correction for multiple comparisons. This factor was significantly less negative for the reward compared to the no-reward condition at electrode Fp1, F(1,44) = 12.84, p = .001, ηp² = .23. The special-contrast analysis again revealed a significant pure reward effect, t(44) = −2.46.

Note (Table 1). a The TF1SF1 component was most positive at electrode Pz. The high > low probability effect, however, was observed at electrode Fpz. At Fpz the amplitude of TF1SF1 was less negative for the high compared to the low probability condition. b TF2SF1 did not meet the pre-specified latency criterion and was therefore tested exploratively. The other factors that did not meet the pre-specified criteria did not survive Bonferroni correction.
Discussion
The current study aimed to gain understanding in the temporal profile of the cortical processes activated during anticipation of reward with varying probabilities. Another main aim was to investigate whether ERPs elicited by reward and probability manipulations are dissociable. This study provides evidence for separate processing of reward value and probability in the cortex.
Consistent with prior studies (e.g., Bekker et al., 2004; Donamayor et al., 2012; Flores et al., 2015; Pfabigan et al., 2014), reaction times were significantly shorter during the reward blocks compared to the no-reward blocks, and during the 98% compared to the 50% probability blocks. Participants were also more accurate during the reward blocks. These performance results show that cue-elicited processing must have been differential depending on reward and probability level. Note that this was the case even though, in the current paradigm, behavioral choices did not at all concern reward or probability options.
To answer our research question, two main effects of reward were found, as well as one main effect of probability. One reward-related ERP emerged relatively early, and was strongest over the prefrontal electrode locations. A second reward-related ERP emerged later. This P300-like ERP peaked around 400 ms, and was prominent at the central electrode. The probability-related ERP had a similar latency, but the high-low probability ERP was largest over the medial prefrontal cortex. Exploratory analyses revealed an additional reward-related ERP late in the cue-target interval (around 680 ms post-cue). This reward-no reward ERP was largest over the left prefrontal cortex.

Fig. 3. Temporal loadings as obtained before spatial decomposition. The figure displays the temporal loadings associated with the factors with a significant effect of reward or probability. TF1SF1 (probability-related positivity) and TF1SF2 (reward P300) originated from spatial decomposition of temporal factor (TF) 1 (left panel). TF2SF1 (late reward ERP) originated from spatial decomposition of temporal factor TF 2 (middle panel). TF5SF1 (reward-related positivity) originated from spatial decomposition of temporal factor TF 5 (right panel). Temporal loadings are converted to microvolt scaling.
As noted, in the present design cues indicating high-probability rewarded targets also cue low single-target rewards. This implies a potential confound between reward and probability effects. Such a confound would not be expected from an 'adaptive scaling' perspective (Walsh and Anderson, 2012). This perspective predicts a response to a relatively low reward value (versus no reward), when the low value is the only available option (in addition to no reward, throughout a block of trials), that is identical to the response to a relatively high reward value (versus no reward) when the high value is the only available option. To evaluate the tenability of the adaptive-scaling perspective, special contrasts between conditions were constructed to assess differences between low and high per-trial reward effects (as in high-probability reward and low-probability reward blocked conditions, respectively). The analyses of the contrasts revealed that reward (versus no reward) effects did not differ at all as a function of per-trial reward magnitude, consistent with the adaptive-scaling perspective. In a similar vein, no indication was found that probability effects depended on the blocked-reward condition.
The PCA revealed an early frontal component that was significantly more positive when reward was at stake compared to when no reward was at stake. It had a latency of 244 ms, which is comparable to reported latencies of reward-related positivities (Donamayor et al., 2012; Flores et al., 2015; Holroyd et al., 2011; Krigolson et al., 2014). These studies found an increased positivity around 200-250 ms following cues that signal reward compared to cues signaling no reward. Similarly, Yu and Zhou (2006) observed less negative ERP activity for cues predicting that money could be won during an upcoming gamble trial compared to cues predicting that money could be lost, albeit somewhat later in time (around 270 ms post-cue).
The early reward-related positivity observed in the present study and in the studies mentioned above, may be an instance of "the reward positivity". This is an ERP mostly observed after positive feedback about performance or a rewarding outcome (Foti et al., 2011;Holroyd et al., 2008;Holroyd et al., 2011;Proudfit, 2015). It usually peaks at midfrontal electrode sites and is proposed to reflect phasic dopaminergic input from the ventral tegmental area into the dorsal ACC when outcomes are better than expected in order to guide reinforcement learning (Holroyd and Coles, 2002) or to reduce conflict (Holroyd et al., 2008). A similar reward-related positivity has also been observed after reward announcing cues after the reward-predicting value of the cue has been learned (Krigolson et al., 2014). As such, the reward-related positivity associated with cues may reflect an initial estimation of the likelihood of a prospective reward which is adjusted after feedback when necessary (Holroyd et al., 2011).
In the current study the early reward-related positivity (we refer here to any reward manipulation-specific positive deflection peaking before 300 ms) was most prominent at the medial prefrontal electrode site AFz, and resembled the topography of the reward-related positivity as observed by Flores et al. (2015). Nieuwenhuis et al. (2005b) observed multiple generators of this component, including areas within the rostral ACC. In the study by Donamayor and colleagues (2012), however, the reward-related positivity peaked somewhat more posteriorly (i.e., at the mid-frontal electrodes), and was source-localized to the dorsal posterior cingulate cortex (dPCC).
The early reward-related positivity in the current study may alternatively reflect enhanced attentional capture by the reward cues (Padmala and Pessoa, 2011), or may instead reflect the modulatory effect of reward on sensory processes (Pessoa and Engelmann, 2010). A related phenomenon has been described as the 'frontal selection positivity' (FSP; Kenemans et al., 2002; Bekker et al., 2004). This is a frontally distributed ERP deflection that is stronger for relevant (e.g., cues signaling potential reward) relative to less relevant stimuli (e.g., cues signaling no potential reward). Identifying the present reward-related positivity as an FSP would imply a pronounced contribution of posterior-cortex generators (Kenemans et al., 2002), consistent with an interpretation in terms of the modulatory effect of reward on sensory (i.e., visual) processing.
The PCA yielded another component that was significantly more positive for reward compared to no-reward cues. This component has a central distribution and a temporal loading of 439 ms. The latency and central distribution of this ERP may be consistent with an interpretation in terms of P300/P3b. This finding is in line with previous research showing sensitivity of the P300 to the magnitude of wins and losses (Yeung and Sanfey, 2004) as well as the anticipation of reward (Broyd et al., 2012;Flores et al., 2015;Pfabigan et al., 2014). Nieuwenhuis and colleagues (Nieuwenhuis, Aston-Jones & Cohen, 2005a) argued that the P300 reflects the modulatory influence of the locus coeruleus norepinephrine system on information processing in the case of motivationally relevant events. Pfabigan et al. (2014) additionally suggested that the P300 elicited during anticipation of reward may particularly be dependent on dopamine transmission.
The third main effect concerned an effect of target probability. The PCA component sensitive to the probability manipulation had a temporal loading of 439 ms. It consisted of a parietal positivity and a prefrontal negativity. The component was significantly more positive (less negative) for highly reliable cues (indicating 98% probability of an upcoming target) compared to unreliable cues (indicating 50% probability of an upcoming target) at the prefrontal electrode site. Based on a prior study of our lab (Bekker et al., 2004) and classical findings on the P300 (Duncan-Johnson and Donchin, 1982) we expected the parietal P300 to be sensitive to target probability. However, the high > low probability ERP effect in the current study was frontally distributed. This frontal distribution is consistent with an fMRI study indicating that probability is also represented in the medial prefrontal cortex (Knutson et al., 2005). It is probable that the precise nature of probability effects in the context of explicit reward is different from that in more implicit-reward contexts (e.g., when subjects just follow the instructions of the experimenter).
It could be argued that any effect of probability on cue-elicited activation reflects enhanced response preparation to highly probable subsequent targets, compared to low-probability targets. However, this view predicts probability effects on late slow cue-induced potentials such as the contingent negative variation and stimulus-preceding negativity, not on earlier activations such as the currently described probability ERP. In addition, Bekker et al. (2004) reported such early probability effects, in the absence of probability effects on late slow potentials. Furthermore, in the present study the extent of specific-response preparation was probably very limited, as target stimuli embodied a two-choice reaction-time task in which both response alternatives had equal probability.
In conclusion, the current study aimed to dissociate the effect of reward value and target probability manipulations on anticipatory ERPs and provides evidence for separate processing of reward value and probability cues in the cortex. An early reward-related positivity and a late (P300-like) ERP component were specifically affected by reward availability, whereas target probability affected a late frontally distributed ERP. Both reward effects obey the principle of adaptive scaling. The early-reward-related positivity may reflect reward-modulated sensory processing of the reward cues. The probability effect is qualitatively different from analogous effects reported before, perhaps due to the explicit-reward context as maintained in the present paradigm.
Funding
This work was supported by the Netherlands Organisation for Scientific Research [grant number 404-10-318]. The funder had no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to publish the report.
Declarations of interest
None.
CRediT authorship contribution statement
"year": 2019,
"sha1": "e43a03bc277c289b1a3525be3dffba6c70a196ec",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.neuropsychologia.2019.107138",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "12a5fb1d0182751c473ad6ff70d8f4dcd56699ff",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Compression ensembles quantify aesthetic complexity and the evolution of visual art
The quantification of visual aesthetics and complexity has a long history, the latter previously operationalized via the application of compression algorithms. Here we generalize and extend the compression approach beyond simple complexity measures to quantify algorithmic distance in historical and contemporary visual media. The proposed "ensemble" approach works by compressing a large number of transformed versions of a given input image, resulting in a vector of associated compression ratios. This approach is more efficient than other compression-based algorithmic distances, and is particularly suited for the quantitative analysis of visual artifacts, because human creative processes can be understood as algorithms in the broadest sense. Unlike comparable image embedding methods using machine learning, our approach is fully explainable through the transformations. We demonstrate that the method is cognitively plausible and fit for purpose by evaluating it against human complexity judgments, and on automated detection tasks of authorship and style. We show how the approach can be used to reveal and quantify trends in art historical data, both on the scale of centuries and in rapidly evolving contemporary NFT art markets. We further quantify temporal resemblance to disambiguate artists outside the documented mainstream from those who are deeply embedded in the Zeitgeist. Finally, we note that compression ensembles constitute a quantitative representation of the concept of visual family resemblance, as distinct sets of dimensions correspond to shared visual characteristics otherwise hard to pin down. Our approach provides a new perspective for the study of visual art, algorithmic image analysis, and quantitative aesthetics more generally.
Introduction
The quantification of visual aesthetics, including artistic expression, goes back to Birkhoff (1933) and Bense (1969), inspiring several computational approaches in the recent past (cf. Galanter 2003; Rigau et al. 2007; Kim et al. 2014; Elgammal and Saleh 2015; Sigaki et al. 2018; Elgammal et al. 2018; Zanette 2018; Müller and Winters 2018; Lee et al. 2020). Previous research drawing on information theory has shown repeatedly, often in parallel, that subjective visual complexity can be estimated with some accuracy using standard compression algorithms (Fairbairn 2006; Rigau et al. 2007; Campana and Keogh 2010; Forsythe et al. 2011; Palumbo et al. 2014; Guha and Ward 2014; Chamorro-Posada 2016; Machado et al. 2015; Müller and Winters 2018; Fernandez-Lozano et al. 2019; Ovalle-Fresa et al. 2020; Bagrov et al. 2020; McCormack and Gambardella 2022; Murphy and Bassett 2022). While some of the aforementioned proposals also included testing against perceptual human judgments, results diverge as to which single compression algorithm or approach would be optimal. Elsewhere in the humanities and cultural sciences, measures of compression length, taken at face value, have been used to compare the complexity of various visual inputs (e.g. Tamariz and Kirby 2015; Miton and Morin 2021; Han et al. 2021).

Fig. 1. (A) is the original image, with the compression ratio indicating the compressed-to-bitmap ratio, i.e. the baseline compression size. The remaining values indicate the compression ratio against this non-transformed size. Turning this colorful painting into gray scale, for example, reduces its complexity and thus increases compressibility - but would not affect an already gray scale pencil drawing. The ensemble also includes fractal dimension and estimates of colorfulness. The brightness of the lightest and darkest example images is slightly adjusted here to make them perceptible. The complete ensemble further includes a subset of these transforms using a reduced-size base image (see Materials and Methods). Each row in (B) represents a transformation, arranged by similarity; each column is an artwork. The Windmill example of (A) is highlighted with vertical lines in (B). The matrix values are z-scores, calculated using the mean and standard deviation of the artist's era (including all their own and contemporary works in our dataset). Darker blues indicate lower, and reds higher values compared to the respective average. Mondrian starts out fairly traditional, 1895 left to 1944 right, but eventually develops his iconic style, departing from the mainstream (see Example 1 in Figure 2B).
Such approaches have also been applied to quantify artistic styles and conceptual groupings (Sigaki et al. 2018; Lee et al. 2020; Tran et al. 2021).
When considering the quantitative analysis of visual art, it makes sense to adopt an algorithmic approach, since the process of creating an artwork also follows a set of procedures - or algorithms, in the broadest sense - which can be assumed to be particular to a given artist and career period (cf. Bense 1969).
Algorithmic complexity is best understood through the lens of algorithmic information theory (Kolmogorov 1968; Chaitin 1977), which defines the complexity of a dataset in terms of the shortest algorithm that reproduces the data. While Kolmogorov complexity itself is uncomputable, the size of a compression of a given dataset can serve as its upper bound, and be extended to measures of algorithmic distance. An established example of such a measure is normalized compression distance (NCD; Li et al. 2004), which however requires a separate compression for each comparison event (and has rarely been applied to visual materials; but see Cilibrasi and Vitányi 2005). Pairwise image comparison frameworks have also been proposed by Guha and Ward (2014) and Müller and Winters (2018).
Here we introduce a simple and fast algorithmic comparison framework for images, and apply it to the exploration of two-dimensional art such as paintings and drawings. In this "ensemble" approach, an array of image processing filters is applied to each input, including various low and high pass filters, distortions and color manipulations (see Figure 1.A). The altered images are all compressed, yielding vectors of compression lengths (expressed as ratios, each divided by the compressed size of the original bitmap image). This approach is augmented by another array of statistical transformations such as colorfulness metrics and fractal dimension. The latter could also be viewed as compressions in a broader sense; if the purpose of a given ensemble is to be strictly an estimate of Kolmogorov complexity via compression only, then these can of course be omitted. The resulting vectors can be rapidly compared, clustered, and used in downstream tasks such as identification of authorship or style, as demonstrated below. Fitting a new image into an already generated model does not require any retraining or realignment; only the set of applied transformations needs to match.
We use lossless PNG to compress the transformed images, and additionally GIF and lossy JPEG on a smaller subset. In total, the current model consists of 112 transformations. The exact number is unimportant and a question of optimization, as more (non-collinear) features provide more information but increase computation time, while different types of transforms are informative for different tasks. Indeed, as demonstrated in the art classification experiments below, a handful of well-chosen features can in some cases yield accuracy close to using the full ensemble. For a more detailed description of the workflow pipeline see the Methods and Materials section, and the Supplementary Information for the full list of transformations.
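To illustrate the pipeline, the following minimal sketch computes a small ensemble using Python's PIL library. The five transforms shown are illustrative stand-ins for the 112 used here; each output is the PNG-compressed size of a transformed image divided by the PNG-compressed size of the untouched image.

```python
import io
from PIL import Image, ImageFilter, ImageOps

def png_bytes(img):
    """Size of the losslessly (PNG) compressed image, in bytes."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.tell()

TRANSFORMS = {  # illustrative subset; the full model uses 112 transformations
    "grayscale": lambda im: im.convert("L"),
    "blur10":    lambda im: im.filter(ImageFilter.GaussianBlur(10)),
    "edges":     lambda im: im.convert("L").filter(ImageFilter.FIND_EDGES),
    "posterize": lambda im: ImageOps.posterize(im, 2),
    "mirror":    lambda im: ImageOps.mirror(im),
}

def compression_ensemble(path):
    im = Image.open(path).convert("RGB")
    base = png_bytes(im)  # compressed size of the non-transformed image
    return {name: png_bytes(t(im)) / base for name, t in TRANSFORMS.items()}
```

Note that placing a new image into an existing model only requires running it through the same transform list, which is what makes the approach fast compared to pairwise measures such as NCD.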
Some of the outputs of the transformations, e.g. blurs of various magnitudes, may well correlate. In applications where multicollinearity would be an issue, or where interpretation in a lower number of dimensions is desired, methods such as Principal Component Analysis (PCA) or UMAP can easily be applied. Here we use both: the PCA is directly interpretable due to its linear relationship with the original variables, while UMAP arguably provides better low-dimensional representations (cf. McInnes et al. 2018). The dimensions of the vector space, preceding PCA or UMAP, remain readily interpretable, as each represents a distinct transformation (see Figures 1.B and S1).
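In practice, this reduction step amounts to a couple of lines (a sketch; umap-learn is a third-party package, and the input file name is hypothetical):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import umap  # umap-learn package

X = np.load("ensembles.npy")  # hypothetical: n_images x n_transforms ratio matrix
Xz = StandardScaler().fit_transform(X)
pcs = PCA(n_components=10).fit_transform(Xz)      # linear, directly interpretable
xy = umap.UMAP(n_components=2).fit_transform(Xz)  # 2-D "field of similarity"
```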
UMAP constitutes a complementary "field of similarity" (cf. Riedl 2019) where similar images cluster together intuitively, like in a spring-embedded network diagram, while remaining subject to the so-called "curse of dimensionality". Yet, as shown in Figure 1.B (and S1 in the Supplementary Material), we can break the curse by mapping the contribution of individual transformation or PCA dimensions onto the general UMAP, which effectively assumes the function of a reference topography. The resulting "small multiple" visualization provides an intuition as to why an ensemble of multiple different transformations and compressions is necessary for a fuller understanding of visual aesthetic complexity.
The nature of a transformation - edge detection, color quantization, blurring - emphasizes or attenuates particular characteristics of the input images. For example, applying a black-and-white transformation to a colorful image increases its compressibility relative to the original (yielding a compression ratio < 1), but has no effect on an already black and white image (a compression ratio ≈ 1). Applying coarse pixelation to Piet Mondrian's geometric abstract paintings (Example 1 in Figure 2) barely changes their compressibility, whereas the same pixelation greatly increases the compressibility of highly detailed works, such as those of Hieronymus Bosch (Example 6 in Figure 2). A difference between two images in terms of their compression ratios for a given transformation reveals that they differ in this aspect. Therefore, images similar in multiple aspects of aesthetic complexity end up close together in the multidimensional ensemble space, while dissimilar ones are placed far apart.

Fig. 2. Proximity in this space indicates multidimensional similarity in aesthetic complexity, i.e. often, by proxy, style or more general family resemblance. Images with few colors and simple structure are close together, and distant from those with complex patterns and palettes (Examples 1 versus 5). Images that are close by often also contain similar subjects and color palettes, due to conventional commonalities in the aesthetics of depicting certain scenes and objects (cf. Examples 2 and 8). The small bottom panels depict the same UMAP, yet as heatmaps colored according to the mean values of individual transformations in a given UMAP region, blue to red, low to high (panel labels include transformations such as compress_gif, blur30, colors_round, colors_saturate, lines_division_gray, flood_centre, fx_scramble, and fft1, as well as year, 1400-2018; cf. Figure 1, and S1 in the Supplementary Materials for a full set of these maps). While the nearest-neighbor sets pictured above intuitively make sense, the additional heatmaps strikingly clarify the underlying polymorphic complexity, promising a rewarding territory for future research.
Similarity in the sense of depicted objects or scenes is not directly encoded, as the compression ensemble does not include any visual similarity features in the machine learning sense. However, certain themes may be more popular than others within a given region of the space, and depicting certain things (such as people on a dark background) may yield a similar complexity profile, which is why nearby artworks often also contain similar subjects and color palettes (e.g. Example 2).
The multidimensional ensemble space of compression ratios, consisting of continuous values, also allows for interesting mathematical vector operations (not unlike in word embeddings, cf. Vylomova et al. 2016) and explainable latent space exploration, for example by adding the vectors of Examples 4 and 6 of Figure 2 and inspecting the artworks closest to their sum. Importantly, and in contrast to previous research, our goal is not to construct a model to learn and predict what humans may perceive as visually complex or intuitively "aesthetic" as such (cf. Forsythe et al. 2011; Cela-Conde et al. 2009; Fernandez-Lozano et al. 2019). Nor is it our goal to compare artworks based on the similarity of their depicted subjects, recognizable features, or iconographic attributes (cf. Tan et al. 2016; Mao et al. 2017; Elgammal et al. 2018). Rather, our model is meant to capture the residual signal of the generating process, a kind of "algorithmic fingerprint" of an artwork, to eventually quantify and explore artistic dynamics and evolution in the space of intrinsic aesthetic complexity. However, to do so with confidence, we first verify our model in a number of experiments, including ones that use human judgment scores.
We then go on to quantify global trends in historical art over the past six centuries in a benchmark dataset, and over the course of the first 175 days of the non-fungible token (NFT) art marketplace Hic et Nunc. Finally, again on historical timescales, we introduce a temporal resemblance model to quantify artistic career trajectories, grouping them into qualitatively distinct types. We reveal artists that were well embedded in the historical tradition of their time, those who simultaneously experimented with different styles, artists with transitory success, and those who were later seen as ahead of their time.
Results
We make use of two large art corpora to demonstrate the application of the compression ensemble approach to visual data, while exemplifying the exploration of historical and contemporary dynamics of visual art. The first dataset, which we denote as "Historical" (henceforth capitalized when being referred to), is illustrated in Figure 2. It is sourced from the art500k project (Mao et al. 2017), filtered to only include two-dimensional art with intact metadata (in particular, a retrievable year of creation). Our subset contains 74028 (primarily Western) artworks representing 6555 artists. We note that after our filtering, the remaining dataset consists mostly of items art500k had in turn sourced from Wikiart.org. The latter is an online, user-editable, encyclopedic collection of mostly Western art images, which is also frequently used in computer vision research. From an art historical standpoint the dataset provides a reasonable and sufficient proxy benchmark to show the feasibility of our approach. Known biases of the Historical dataset include reliance on partially dated literature, with a corresponding gap of 18th century art, and very likely some variation in reproduction quality due to the broad variety of the crowdsourced images, either found in the public domain or taken from a great variety of literature and online sources on the basis of fair use. Digitizing larger amounts of visual cultural heritage in high resolution, consistent quality, and with minimal bias is a generational challenge. While the Historical dataset is sufficient for our proof of concept, as more data in better quality becomes available, descriptions based on our method are expected to also become more precise and representative.
The second dataset, which we denote as "Contemporary", is mined from Hic et Nunc, a Tezos blockchain-based NFT art marketplace, and represents the first 175 days of its existence (March to August 2021; 51640 artworks, 7284 artists). It contains 31% of all the objects added to the marketplace during our observation period of 175 days: we only include static images (PNG, JPEG), as we do not yet have a pipeline to compress multi-frame objects such as animated GIFs and videos; we also exclude very small resolution images (such as icons), and a subset for which the data collection process failed to retrieve the image. For an overview of the NFT-driven "crypto art" market, see Nadini et al. (2021) and Vasan et al. (2022).
Before applying the compression ensemble approach to capture systematic patterns of art history, we evaluate it extensively using three datasets and two methodologies: (1) examining correlations of our model predictions with human judgments of visual complexity, and (2) using the model to perform authorship and style attribution. We show that our model performs very well on the first task and with fair accuracy on the second task (despite not being trained for that specific purpose). The second experiment also demonstrates explicit connections between specific dimensions in the vector space of compression ratios and particular aspects of the corresponding artworks. For example, the compression ratios of edge-filter transformations are informative regarding the genre of the work (portraiture versus landscape), while color-affecting transforms can help predict the medium (drawing vs. oil painting).
Human complexity norms
We assess the cognitive plausibility of the compression ensemble approach by comparing its predictions of visual complexity with human judgment norms from two datasets. The first dataset, MultiPic (Duñabeitia et al. 2018), consists of 750 colored pictures of concrete concepts, and human judgments on various aspects of visual perception, including complexity, based on experiments with a total of 620 participants from six language communities (British English, Spanish, French, Dutch, Italian, German; see Figure 3.A). The dataset does not include individual ratings, only means for each image for a given language sample. The second dataset, Fractals (Ovalle-Fresa et al. 2021), consists of 400 abstract fractals and related norms, again including (means of) judgments of visual complexity, here by 512 German-speaking participants (Figure 3.B). Previous research has also engaged in analogous evaluations against human complexity judgments (Machado et al. 2015; McCormack and Gambardella 2022). We use the datasets described here as they are both publicly available while representing fairly large pools of participants.
We generate the compression ensemble vectors separately for each of the two datasets, then carry out repeated out-of-sample evaluation, where we train a linear regression model on a set of vectors to predict human scores, then test its accuracy on a separate test set. The results are very good, with median absolute error ranging from 0.19 (MultiPic English) to 0.23 (MultiPic Flemish) on a scale of 0 to 5. To put this in perspective, this is smaller than the differences between languages in this dataset (the median standard deviation of complexity scores per image across languages is 0.24). In Fractals, median absolute error is 0.46 on the same scale of 0 to 5. The linear regression model with compression ratios as predictors describes most of the variance (measured as adjusted R²) in human visual complexity ratings: 73% (MultiPic Italian) to 83% (MultiPic Flemish), and 32% in Fractals. By comparison, using the plain compression ratio alone describes just 37-44% (MultiPic) and 10% (Fractals). These results provide us with confidence that the approach is cognitively valid, correlating with what the human eye would consider visually complex.

Fig. 3. (C) represents Baroque, Realism, Impressionism, Expressionism and Surrealism via central images in the ensemble for each style. Panel (D) illustrates the difficulty of the artist detection task: while some artists are more unique and hence recognizable (O'Keeffe), others produce very similar works, while also changing over their careers (Lawrence, Romney). Panels (E-I) illustrate mean testing accuracy given a variable number of training items (light to dark blue) and number of transformations used (horizontal axis; the total number of features varies between tasks, as zero-variance and collinear ones are excluded). The dashed horizontal line indicates baseline chance accuracy for each task. Each dot stands for one added transformation feature, always starting with compression without transformation. The next 5 are given on each panel. Different transformations, ordered by variable importance, are informative in different tasks, e.g. color-related transformations in distinguishing paintings from drawings. Just compressing the image without transforming already provides an above-chance result in all cases, even if using just a handful of training examples. Adding more transformations generally improves performance (when there are enough training examples to avoid overfitting; dark blue dots). That being said, around 15-20 well-chosen features are usually already enough to get close to maximal performance.
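For concreteness, the out-of-sample evaluation described above amounts to the following sketch (scikit-learn; variable names are ours):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def out_of_sample_mae(X, y, seed):
    """X: ensemble vectors; y: mean human complexity ratings (0-5 scale)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
    pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)
    return np.median(np.abs(pred - y_te))  # median absolute error

# repeated over random splits, e.g.:
# maes = [out_of_sample_mae(X, y, s) for s in range(100)]
```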
Artist, date, style, genre, and medium classification
The second evaluation involves the Historical dataset, in the form of a number of retrieval or classification experiments. We generate the compression ensemble vectors for the entire dataset, and extract the following subsets, where each included class has at least 1100 unique examples: 13 style periods as per the metadata (5 of which are exemplified in Figure 3.C), 7 centuries, drawings vs. oil paintings, landscape paintings vs. human portraits, and 91 artists with at least 110 artworks each.
We perform out-of-sample evaluation where we repeatedly train a classifier for each subset, on a randomly sampled set of vectors from each class in the subset, to predict the relevant class labels such as style period (n=1000 per class, except 100 for authors due to limited data), then test its accuracy on a separate test set (n=100 per class, except n=10 per author). We use Linear Discriminant Analysis - a simple, computationally lightweight supervised machine learning model that straightforwardly generalizes to multi-class classification. To probe how well the ensembles work on this task given different amounts of data and numbers of transforms, we carry this out in a step-wise manner, as depicted in Figure 3.E-I. Each classifier is trained on 10, 100 and 1000 examples of each class, employing an increasing number of transforms, starting from the baseline of plain compression (ratio of compressed to raw bitmap file size). The rest of the features are ordered by a rough estimate of variable importance (derived from repeatedly training binomial logistic regression classifiers on all possible pairs of classes and averaging the t-statistics of the variables).
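A single cell of Figure 3.E-I corresponds to something like the following sketch (scikit-learn's LDA; the feature ranking and the train/test arrays are assumed to be precomputed):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def step_accuracy(X_tr, y_tr, X_te, y_te, ranked_features, k):
    """Train LDA on the k most informative transforms, return test accuracy."""
    cols = ranked_features[:k]
    clf = LinearDiscriminantAnalysis().fit(X_tr[:, cols], y_tr)
    return clf.score(X_te[:, cols], y_te)

# e.g. accuracies = [step_accuracy(X_tr, y_tr, X_te, y_te, ranking, k)
#                    for k in range(1, len(ranking) + 1)]
```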
Even with a handful of examples and a couple of the most informative transforms, the simple classifier is able to detect above chance the creator, the date, style, genre, and medium of a given artwork. With a 100 examples and the full ensemble of transforms, author (n=91) detection accuracy is 38%, which is much higher than the accuracy of 1.1% that random attribution would achieve by chance. Provided 1000 examples per class, oil paintings are distinguished from drawings about 86% of the time, same for landscapes vs human portraits (both have 50% random chance baseline), style period 34% (baseline ∼ 8%) and century 44% (baseline ∼ 14%).
The ranking of the transforms beyond the compression baseline (as depicted in Figure 3.E-I) is also informative. The aspects represented by the transformations vary in usefulness in the prediction task; for example, gray-scaling distinguishes pencil drawings from colorful oil paintings, because this is one of the primary aspects they differ in. Turning this around, the explainable features of the compression ensemble can be used to describe how any two images (or sets of images) differ, by looking into which transformation dimensions describe the most variance.
Inspecting the relevant confusion matrices reveals that the errors are fairly systematic and intuitive, as classification errors are more likely between adjacent style periods and artists (see Figure S2 in the Supplementary Materials). In the set of 91 artists, Thomas Lawrence and George Romney are most often confused with each other by the model - and indeed, both are portrait artists from roughly the same period (see Figure 3.D). Conversely, artists with a distinguishable style or genre are easy to identify; for example, the 19th century engraver Charles Turner is detected at 97%. Rococo, also known as Late Baroque, is correctly labeled in 47% of the tests, while 16% of it is misclassified as Baroque. Impressionism is easiest to identify (53% correct) - but is confused with Post-Impressionism (14%). Expressionism is by far the hardest to put a finger on (12%).
Good results in authorship and style attribution have also been achieved using purpose-trained classifiers built upon large pre-trained deep learning models (cf. Mao et al. 2017;Strezoski and Worring 2017;Elgammal et al. 2018). For reference, although not directly comparable due to training and test set differences, Mao et al. (2017) report a 39% accuracy for style period and 30% for author retrieval; Tan et al. (2016) report 55% for style and 76% for artist (but that is between just 23 artists with the most training data). If for example authorship attribution was the goal, we envision that the accuracy of such models could likely be improved further by combining them with compression ensembles. The purpose of this exercise here however is not to compete with these approaches, but to show that a compression ensemble -despite consisting of no features other than file size ratios and statistical transformations, and containing no pre-trained baseline -still captures and disambiguates enough family resemblance to place stylistically similar artworks close together and dissimilar ones apart, with a non-random error structure.
Tracking historical and contemporary art dynamics
Given the explainable nature of the compression ensemble vectors, and their cognitive and technical plausibility as demonstrated above, we can now use this method to investigate and interpret aesthetic trends over time. We do this for both the Historical and the Contemporary NFT dataset.
To simplify this task, we apply PCA and focus on the first two, most informative, principal components. We obtain compression vectors for both the Historical and the Contemporary datasets, and fit them both in the same PCA space for comparability (Figure 4).

Fig. 4. Each dot is an artwork, reduced to a single pixel. The principal components (vertical axes) are based on a concatenation of both vector sets, making the graphs comparable. (A) and (C) show the joint first component PC1, (B) and (D) the second, PC2, also allowing for a reading across datasets, left to right. Note however the different ranges on the vertical axes: the Historical dataset is constrained to a much smaller area in the aesthetic complexity space (marked by black brackets on the sides). The trend lines correspond to the median (black) and quartiles (dark gray) of a given principal component; 95% of the data lies between the outer light gray lines. The more frequent style period labels are given in (A), arranged by the median year of the respective artworks. The insets (E) and (F) indicate areas of the complexity space conducive to NFT sales (dark red means all items in a given area were sold; dark blue that none was sold). The bottom panel (G) shows typical NFTs sold on the Hic et Nunc marketplace, as images closest to the median (across all PCs) for each day. Various avatar or portrait series (similar to CryptoPunks or Bored Ape Yacht Club) eventually rise to be among the most commonly minted objects - visible as tight colorful groupings at low complexity in PC1 - but not all such series are successful, as indicated by the prevalent blue areas in the corresponding inset panels. This example demonstrates how the same method can be used to make sense of both very long and very short timescales, in art history and contemporary art.
Changes in the trends in the half-millennium dataset correspond broadly to art historical style classifications. PC1 in this model corresponds to texture and detail complexity (loading onto blur, despeckle filters, and the Canny edge transform). There is a marked decrease (visible in the right half of Figure 4.A) going from the period of more detailed paintings of Realism and Impressionism to the second half of the 20th century where (in this dimension on average less complex) styles such as Abstract Expressionism and Pop Art become more prevalent.
PC2 corresponds to overall compressibility (loading onto compression of the original unfiltered image with an array of algorithms). The median in the Historical dataset is lower where the dataset contains many Rococo style portraits (in the middle of Figure 4.B), which typically contain plain and therefore easily compressible areas, not unlike the pixel-art portraits of Hic et Nunc (cf. days 100-150 in Figure 4.C-D). The PC2 values in the Historical data (Figure 4.B) go up around the onset of Impressionism, and the bounds are pushed once more with Cubism, Expressionism, Surrealism, and the general diversification of classic modern "-isms". As demonstrated in the Evaluation section above, given a sufficient number of transformations, such differences are consistent and diverse enough to predict style periods with reasonable accuracy.
While the Historical and Contemporary data are combined in the same space, the vertical axes representing the principal components in Figures 4.A-B versus C-D are intentionally different, as the two datasets occupy markedly different ranges in the complexity space, with much higher variance in the Contemporary Hic et Nunc dataset compared to the more conventional Historical dataset. This does not necessarily mean that art in the last 500 years has been less creative or explorative. The relative boundedness is instead more plausibly rooted in a combination of material affordances and the limits of curation and scholarship. The latter is a function of cultural selection, as collectors, audiences, and art historians put a bound on what has been and is considered worth adding to collections, from the time of creation to current retrospectives.
In contrast, everybody who is able to pay the fairly low minting fee can upload an artwork to blockchain art marketplaces such as Hic et Nunc, making their creations public in an attempt to get attention and sell. Material affordances can further explain changes within the Historical dataset, and salient differences in relation to Contemporary NFT art. The Historical broadening of the parameter space goes in lockstep with the fraction of noted creatives growing faster than the world population over the last five centuries (Schich et al. 2014). It is broadly established knowledge in art history that new technologies and concepts, from pigments to theories of perception (cf. Gombrich 1960), were harnessed by said creatives, arguably at an equal pace. Examples include the emergence of more affordable blue pigment alternatives to the rare and expensive azurite and lapis lazuli, or (color) photography, which called traditional pictorial conventions of depiction into question. Another striking difference between the Historical and the Contemporary NFT dataset becomes visible in Figures 4.A-B versus C-D when we focus on the range of colors in the single-pixel reductions of the artworks. The digital NFT images appear darker and more saturated, as they use the full RGB color space, while the dominant color of Historical artworks tends to remain in the range of "natural" pigments, which one could buy in a physical art supply store.
Since we have information on transactions in the Hic et Nunc dataset (as of the data collection time, 22 August 2021), successful sales are shown as inset heatmaps E and F in Figure 4.D. The heatmaps show the fraction of sales across the first and second principal components, respectively. In total, about half the objects in the Hic et Nunc sample were sold by their authors during our observation period, with some areas - dark red in the insets - being clearly more conducive to sales, while others do not sell at all. Even qualitatively, one can see revealing patterns, such as the mass-minting of initially non-selling NFT images starting around day 110 in mid-2021 (including CryptoPunks-like avatar series, such as "AI Pokemon", "Dino Dudes", and "NFT-People"). An emergent quality of these mass-produced images is that their texture and detail complexity (PC1, Fig. 4C/E, days 100-150) is substantially lower than that of all preceding art, putting them more in the realm of icons or brand logos. At the same time, their overall compressibility (PC2, Fig. 4C/F) is not only systematically lower, but also subject to much less variance, indicating that most of them are indeed low-effort attempts to make money quickly. The narrowness of the mass-produced NFT series also expresses itself in their skew towards highly saturated primary dominant colors. In at least one case, this indeed seemed to work, with sales following in the wake of a strong minting burst, mostly consisting of the "NFT-People" and "NFT Kids" series (cf. the rightmost vertical blue line in Fig. 4.E, followed by a light red wake).
Taking a quantitative perspective, we further trained another Linear Discriminant Analysis model on the sales data, predicting whether an NFT art piece was sold or not from the values in the compression vectors. Using training sets of size 20k per class and separate test sets of 5k (and replicating the model 500 times), the model predicts sales at an average accuracy of 58% (or a 17% kappa, given the 50-50 baseline). This is despite containing no information on the prestige or reach of the artists, past sales, the depicted content, or market trends of the respective time. A linear regression model fitted to 23370 sold items, predicting log price (excluding zero-price giveaways) by all the compression variables, describes about 6% of variance (adjusted R²); allowing for interaction with the time variable improves this to 8%. While these are all fairly low scores in absolute terms, we consider this a promising result for future research, as it could likely be improved by combining our model with the aforementioned variables of author properties, sales history and past trends (see also Lakhal et al. 2020; Vasan et al. 2022), to predict future trends in evolving art markets.
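A minimal sketch of this replication setup, assuming `X` is the matrix of ensemble vectors and `sold` and `price` are parallel vectors (all names are placeholders), with MASS providing the LDA:

```r
library(MASS)

one_run <- function() {
  pos <- sample(which(sold == 1)); neg <- sample(which(sold == 0))
  tr  <- c(pos[1:20000], neg[1:20000])         # 20k training items per class
  te  <- c(pos[20001:22500], neg[20001:22500]) # balanced 5k test set
  fit <- lda(X[tr, ], grouping = sold[tr])
  mean(predict(fit, X[te, ])$class == sold[te])
}
mean(replicate(500, one_run()))                # mean accuracy over replicates

# Log-price regression on sold, non-free items (main effects only).
keep <- sold == 1 & price > 0
d <- data.frame(logp = log(price[keep]), X[keep, ])
summary(lm(logp ~ ., data = d))$adj.r.squared
```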
Quantifying temporal resemblance in art
We can also use compression ensembles to investigate how the oeuvres of individual artists are situated in their eras. Tracing "the lives of the artists" has been a central direction in the historiography of art since the 1550 book by Giorgio Vasari which initiated the genre (Vasari et al. 1998). Since then, a great number of artist monographs and critical catalogs have filled several large art libraries around the world. More recently, multidisciplinary science has tackled the issue using methods of network science and quantitative measures of success, with some limiting their focus to birth-to-death migration (Schich et al. 2014), or to contextual and socio-institutional aspects, such as exhibition records and art market price information (cf. Fraiberger et al. 2018), while yet others have taken into account visual aesthetic aspects using information theory or deep machine learning (Lee et al. 2020; Liu et al. 2021). Other related work has looked at the innovativeness of individual artworks using deep learning (Elgammal and Saleh 2015).
Here, we introduce a simple metric, which we call temporal resemblance, which goes beyond these approaches. Given that the vectors of all artworks reside in the same space (as set up above), we can calculate the nearest neighbors of each work. We use cosine similarity and the 100 closest neighbors, while works by the same artist are excluded. The median of the temporal distances of these neighbors from the target work indicates whether it resembles the past or anticipates some yet unseen future. This allows us to group artists into those who are traditionalist or historicist, those who stay current, and those ahead of their time. We also adjust the median time distances to account for the boundedness and density bias of the dataset: the metric reported here is derived from the residuals of a generalized additive regression model (GAM), still on the same yearly timescale (see Materials and Methods). Figure 5 depicts the careers of 20 artists, grouped by career trend similarity.
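A minimal sketch of the metric, assuming `vecs` is the matrix of z-scored ensemble vectors with parallel `year` and `artist` vectors (names are placeholders), and the mgcv package for the GAM adjustment:

```r
library(mgcv)

Xn   <- vecs / sqrt(rowSums(vecs^2))  # unit-normalize rows
csim <- tcrossprod(Xn)                # pairwise cosine similarity

raw_resemblance <- sapply(seq_len(nrow(Xn)), function(i) {
  si <- csim[i, ]
  si[artist == artist[i]] <- -Inf     # exclude self and same-artist works
  nn <- order(si, decreasing = TRUE)[1:100]
  median(year[nn] - year[i])          # negative = resembles the past
})

# Adjust for dataset boundedness and density: residuals of a GAM of the
# raw metric on year, still interpretable on a scale of years.
adjusted <- residuals(gam(raw_resemblance ~ s(year)))
```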
Our metric is relative to the point in time of each work, and all measures are relative to all other works. Therefore, curves that stay close to the zero line in Figure 5 should be interpreted as artists who produce works that are similar to other artworks made in the same years, in terms of aesthetic complexity (and thus aspects of their style). That does not preclude changes in their style, if the changes in the artist and their era correlate. Staying around zero may also indicate that a given artist is surrounded by a handful of prolific contemporaries with very similar output, who as a group may not be representative of the mainstream. Descending curves can indicate an artist who becomes more traditional, the world catching up to an artist's style, or the world adopting other new styles. Note again, however, that while our results are intuitively correct to a trained art historian, the career comparisons discussed here only refer to artworks that are present and dated in our dataset.

Figure 5 reveals different modes of artistic existence, similar to yet not identical to the narrative types of Vonnegut (Reagan et al. 2016). Some, such as Piet Mondrian or Mark Rothko, "rise above the flock", starting out in the mainstream, but growing into their own distinct style, with works that could be considered ahead of their time (recall also Figure 1.B). Paul Cezanne and Mary Cassatt instead become "constant innovators", starting out by producing more conventional, retrospective works, but growing and remaining innovative throughout the rest of their careers. Albert Bierstadt and Camille Corot represent "mainstream artists", appearing more narrow in their practice, and remaining consistent with the current of their peers. Finally, we find artists who "rise and fall", growing to their moment in history, then becoming more conventional again over the course of their careers. Examples would be James McNeill Whistler or William Merritt Chase. An extreme case would be Eastman Johnson, who was predominantly drawing inspiration from the past, even at the height of his career. That said, even highly innovative careers may include such revivals on occasion. The oeuvre of Paul Cezanne, for example, contains some works resembling artworks preceding his own by 100 to 200 years.
As a static graph, Figure 5 is of course not comprehensive, but merely exemplifies how quantitative compression ensembles can be used to filter and cluster artistic trajectories from the multidimensional space of aesthetic complexity. We note that interactive versions of such plots could function as a research instrument for qualitative experts, to investigate the quantitative model and dataset biases, and to compare artists between different datasets. The supporting material contains an alternative version of Figure 5 with larger overlapping thumbnails, allowing for a micro-macro reading of qualitative content versus aesthetic quantification.
Discussion
Products of human culture, such as art, language and music, are all subject to ongoing change, complex dynamics, and cumulative evolution (Boyd, Richerson, et al. 1996; Tomasello 2009; Beckner et al. 2009; Mesoudi and Thornton 2018). And even though complexity could in principle emerge from a simple generating mechanism, a single measure would likely prove insufficient to capture the polymorphic complexity of human cultural interaction and cultural products (cf. also Ebeling et al. 1998). Here, we have demonstrated the utility of compression ensembles to quantify polymorphic visual aesthetic complexity, using fully explainable aspects in the process. We evaluated the cognitive plausibility of our approach, tested its viability at author, date, style, genre, and medium detection tasks, and showed how the approach can recover and reveal meaningful patterns in datasets of historical and contemporary art. Given the increasing availability of cultural datasets in machine-readable form, this operationalization opens up new avenues to study the dynamics of visual art at scale, over long time spans and almost in real time. As such, the approach may help to transcend the still considerable specialization and bifurcation of qualitative art historical scholarship (by artist, period, region, style, genre, etc.). Indeed, by revealing emergent patterns while allowing for the comparative study of systematic bias, our approach may fill a similar niche in art history as computational corpus linguistics does in relation to the qualitative study of literature.
As each transformation in our compression ensemble represents a tangible visual aspect such as abundance of detail or colorfulness, the ensemble as a whole constitutes a functional estimate of the philosophical and cognitive concept of polymorphic visual family resemblance, as originally used to characterize the similarity of games such as chess and soccer, and later extended to polymorphic visual perception (Wittgenstein 1953; Weitz 1956; Rosch and Mervis 1975). As shown in our evaluation experiments, our model captures enough polymorphic family resemblance to cluster similar styles and works by the same artist together. The explorations of the Historical art dataset yield results that meaningfully reflect the art historical scholarship which underlies the chosen dataset (Figures 4, 5), here presented in easy-to-digest plots and visualizations. Figures 5 and S3, for example, summarize the careers of several artists on a single page, in each case reflecting the intuition of an individual connoisseur who has been trained on the given corpus. We consider this a crucial contribution of our approach, as the recognition of visual family resemblance has hitherto remained particularly hard to explain. At the same time, the recognition of visual family resemblance is arguably foundational and intrinsically mastered by trained human art connoisseurs, by other human visual experts such as radiologists, and more recently by trained convolutional neural networks in deep machine learning (LeCun et al. 2015).
While the latter have solved the recognition of polymorphic family resemblance, including object detection, which had long remained computationally intractable, the magic of connoisseurship has so far remained hidden in latent variables. It is in this sense a striking result that our evaluation shows the distinguishing explanatory power of taking into account multiple different explainable transformations for compression, effectively addressing what Friedländer in his foundational book on art connoisseurship called "the visible in its manifoldness and unity, bristling against conceptual segmentation, so that the boundaries between the species of images get into flow" (Friedlander 1946, p. 60, our translation).
While our application of the methodology here has been aimed at visual aesthetic complexity, the same basic approach could be used to make sense of other, related phenomena. For example, Sinclair et al. (2022) raise the concept of "aesthetic value", the "attractiveness" of a given product of culture, to discuss whether the arts could be considered a product of cumulative cultural evolution (cf. Mesoudi and Thornton 2018). They cast doubt on the possibility of art or music objectively improving over time, which "cumulative" would allude to (cf. also art historian Gombrich 1971). However, we do not see an issue here, nor with their point that attractiveness is subjective to individual preference. A style that builds on or grows out of another style is not necessarily objectively better, but may better meet the preferences of its consumers in a given time, place, or ecological niche. This is not unlike the concept of communicative need in the context of language: a structure or lexical configuration may not be better in some absolute terms, but may be more optimal or efficient given the usage tendencies or needs of the language community (cf. Kemp et al. 2018; Karjus et al. 2021). The extent to which this can be studied depends on the data available. For example, the Historical dataset used in our work represents only a rough estimate of a (primarily Western, and somewhat dated) preference consensus, and even that with notable sampling caveats (as discussed above). The Hic et Nunc dataset is already more specific and also includes artist and collector profiles with trade and price information, which could be (carefully) interpreted as preference, and easily linked with the social media activity of the sellers and buyers for further study.
In this paper, we focused on static, two-dimensional art: Historical paintings, drawings and prints, and Contemporary digital art in the case of the Hic et Nunc dataset. However, there is no intrinsic reason why the same methodology could not be applied to quantify other static media such as photographs, maps, websites or natural patterns (cf. Zanette 2018; Dou et al. 2019; Fairbairn 2006; Bagrov et al. 2020) to assess their aesthetic complexity (and by proxy, style) in a transparent, explainable framework. Multi-frame visual media such as films and animations could be split up by frame or shot, and represented as sets of vectors in a compression ensemble. Three-dimensional objects such as sculptures, architecture, or clothing items in fashion can similarly be operationalized by systematically scanning them from multiple angles, or by using three-dimensional versions of the transformations and compressions, e.g. using voxels instead of pixels. It may also be possible to employ this approach to quantify the aesthetic complexity of sound and music (cf. Beauvois 2007; Clemente et al. 2022) by generating the spectrogram of a given sound and then applying the visual transformations to that. Alternatively, the same compression ensemble principle could be applied directly to audio data, using audio filters instead of image filters and compression of audio files in place of image compression (and analogously, visual filters directly on video files, or general filters on general signals). These avenues remain a prospect for future research for now.
There is also no reason why multiple ensembles or embeddings could not be concatenated in the case of multimodal media, provided there is a principled way to weigh or normalize their contribution (the simplest way to do so would probably be PCA). As stated in the Introduction, our focus here was on aesthetic complexity and not on the visual similarity of recognizable subject features (such as faces, bodies, or objects), but the latter could easily be incorporated by horizontally aligning and concatenating our compression ensemble with a deep-learning-induced image embedding (e.g. Mao et al. 2017), or more explicitly by daisy-chaining feature recognition using deep learning and image segmentation, followed by compression ensembles of comparable sub-images of recognized objects (such as an ensemble space of human pose to further operationalize Aby Warburg's Mnemosyne, cf. Warburg 2008; Impett and Süsstrunk 2016). A scene in a film or a recorded theater play could be represented by the concatenation of a visual compression ensemble, a visual embedding, an audio compression ensemble, and a language model embedding of the spoken dialogue (e.g. Devlin et al. 2019). The full apparatus of art history could further be combined with the presented approach, integrating our systematic study of visual aspects with socio-cultural contexts, as covered in literature and recorded in structured databases or knowledge graphs (cf. Schich 2010).
Seen from yet another angle, we construct three kinds of vector spaces: the multidimensional ensemble space of compression ratios, the decorrelated multidimensional space of associated PCA components, and a reduced two-dimensional UMAP space providing a proxy topography. Together these spaces may bring to mind the latent embedding spaces of deep machine learning, where the explanation of implicit dimensions remains a challenge. Our three spaces can also be understood as subspaces of more general cultural meaning spaces. In the sense of Cassirer's "most general reference framework" they can be seen as "spaces of geometric intuition" (Cassirer 2010; Schich 2019), belonging to the realm of what art historians later called "iconologic" aspects of visual art, complementing associated contextual information, including "iconographic", i.e. written aspects (cf. Panofsky 1939). The three ensemble spaces are further in line with the cognitive theory of Gärdenfors (cf. Gärdenfors 2000; Gärdenfors 2014), where conceptual spaces are based on a set of quality dimensions, and representations are rooted in topological and geometrical notions. Finally, our approach resonates with the notion of information space (cf. Eigen 2013), where movement in space corresponds to a change in meaning. Given large sets of cultural products, we assume that a negotiation and integration of these various concepts of space may lead to further advances in the study of quantitative aesthetics.
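The three spaces can be derived from one another in a few lines; a sketch assuming `V` is the matrix of ensemble vectors (one image per row; a placeholder name) and the uwot package for UMAP:

```r
library(uwot)

ens  <- scale(V)                      # 1) z-scored ensemble space
pcs  <- prcomp(ens)$x                 # 2) decorrelated PCA space
topo <- umap(ens, n_components = 2)   # 3) 2-D UMAP proxy topography
```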
Very recently, and submitted concurrently, Murphy and Bassett (2022) have proposed a similar approach to reveal the explanatory structure of complex systems, centered around the concept of a "distributed information bottleneck", including an analysis of the Mona Lisa as one of their examples. While their model may seem more general at first glance, applied to several types of complex systems, our approach has a wider scope in another sense. Our compression ensemble approach aims to understand visual artifacts implying an algorithmic process in the broadest sense. This algorithmic process may include individual, collective and external cognition (for example, using pencil and paper to think), where the bandwidths of the "distributed information bottleneck" are not necessarily a limiting factor. Therefore, we argue that it is realistic to consider transformations which not only decrease but also add information prior to compression, as we do in our proposed framework. An intuitive example of such a broadening of bandwidth using external cognition would be the transformation of large amounts of urban features into a city map, subsequently compressed via abstraction to postcard size.
Conclusion
In summary, throughout this paper we have shown the utility of the ensemble approach, which provides a vector-based (and therefore fast) algorithmic distance metric. While previous research in computational aesthetics and psychology has searched for a single metric of visual complexity, we argue that it may be more useful to instead use an array of estimates that captures complementary aspects of complexity. While our proposal is particularly suited for the analysis of visual media, this approach holds broader promise as a new framework for the quantification of aesthetic, linguistic and cultural complexity.
Constructing a vector space of algorithmic distance
As discussed in the Introduction, compression as such has been used to estimate visual and aesthetic complexity before. In some applications, it has also been combined with a limited set of visual transformations (Bagrov et al. 2020; McCormack and Gambardella 2022; Lakhal et al. 2020; Machado et al. 2015; Fernandez-Lozano et al. 2019). However, the fairly large number of transformations is key to our approach, with the following rationale. Consider two algorithmically similar uncompressed images A and B, for example two versions of the same famous view of Rouen cathedral by Claude Monet (of which the artist painted more than 30 in 1892-1893). These two images will yield similar compressed sizes for the same compression algorithm, because the "algorithm" that generated them (being a function of Monet's perspective, style, and execution) is similar. Another artwork C, e.g. a late, abstract work by Piet Mondrian, will, due to its lack of detail, likely have a much smaller compression size. However, it is entirely conceivable that a work D that is stylistically very different to Monet's Rouen cathedral, e.g. a surrealist painting by Salvador Dali, might by chance have a very similar compression size. The "algorithms" used by Monet and Dali differ greatly, and an equal compression size does not imply that they are of equal algorithmic complexity either, as the efficiency of the compression algorithm itself will differ depending on the detailed characteristics of the images. However, now consider an image transformation t (e.g. Gaussian blur), which we apply to the uncompressed versions of our four images A, B, C, and D before compressing them. The compressed sizes of t(A) and t(B) are still likely to be very similar, as the algorithms that generated the original images are very similar, and the transformation and compression algorithms are identical. t(C) is very likely to still be very different to t(A) and t(B). While the compressed size of D was similar to A and B by chance, it is much more unlikely that t(D) is also similar to t(A) and t(B), as the interaction between the transformation t and the generative algorithm of D would have to change the compressibility in the same way as the interaction of t and A/B. Put more intuitively, a Gaussian blur is very likely to affect the compressibility of a Monet very differently from the compressibility of a Dali. Thus, more generally speaking, two images with similar compressed sizes are much less likely to still yield similar compressed sizes by chance after a transformation, unless they are algorithmically similar to start with, in which case the combined algorithms of generation and transformation (and their interaction with the compression algorithm) remain similar. If we now consider the application of n different transformations t1, t2, ... tn to an uncompressed image x, each applied before a subsequent compression c, the compressed sizes (including that of the untransformed image) c(x), c(t1(x)), c(t2(x)), ... c(tn(x)) form a vector v(x) of length n + 1. It follows from the above argument about coincidental proximity that it becomes increasingly unlikely for two algorithmically dissimilar images to remain close together as n increases. Thus the resulting vector space of compressed sizes provides an indication of algorithmic distance between images.
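This rationale can be made concrete in a few lines; a toy sketch assuming the magick R package, with placeholder file names and three illustrative transformations:

```r
library(magick)

# Compression-ensemble vector for one image: baseline PNG size plus the
# PNG sizes of a few transformed versions, each relative to the baseline.
ens_vec <- function(path) {
  img <- image_scale(image_read(path), "400x400")
  sz  <- function(im) length(image_write(im, format = "png"))
  base <- sz(img)
  c(base,
    sz(image_blur(img, radius = 0, sigma = 4)) / base,  # t1: Gaussian blur
    sz(image_canny(img)) / base,                        # t2: edge transform
    sz(image_quantize(img, max = 8)) / base)            # t3: color reduction
}

v_monet1 <- ens_vec("rouen_1892.jpg")  # similar "generating algorithms":
v_monet2 <- ens_vec("rouen_1893.jpg")  # these should stay close on all entries
v_dali   <- ens_vec("dali.jpg")        # may match on one entry only by chance
```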
Data processing and limitations
In practice, we use normalized compression lengths. The compression size of the original image without transformations is divided by the size of the original bitmap image. Compressions of transformations are divided by the size of the original compression. In most applications discussed in this paper, it also makes sense to rescale the vector space components (we use z-scoring), to put the compression ratios and the additional statistics on a comparable scale.
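In code, with placeholder names for the recorded sizes (one entry per image):

```r
# Normalize: baseline compression relative to the raw bitmap size, and each
# transformation's compression relative to the baseline compression.
r0 <- comp_orig / bitmap_size
rt <- comp_trans / comp_orig          # one column per transformation
V  <- scale(cbind(r0, rt, stats))     # z-score every column across images
```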
The statistical transformations include the following. We use both the LAB ("M2") and RGB space ("M3") based measures of colorfulness from Hasler and Suesstrunk (2003). We standardize images by quantizing down to 200 colors and record statistics of contrast (range and standard deviation of the lightness channel values in LAB space), as well as the mean, median, maximum, standard deviation and entropy of the color distribution (which all provide insight into color complexity). We also attempt to estimate composition regularity, first as the entropy of the angles of composition lines (based on the Hough transform applied to Canny-filtered, i.e. edge-detected, images). We also estimate the fractal or Hausdorff dimension on bilevel-quantized versions of the images, using both a small and a large window size, and on a Canny-filtered image (cf. Gneiting et al. 2012).
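As an example of such a statistical transformation, the RGB-based ("M3") colorfulness of Hasler and Suesstrunk (2003) can be computed directly from the pixel array; a sketch using magick for pixel extraction:

```r
library(magick)

colorfulness_m3 <- function(img) {
  px  <- image_data(img, channels = "rgb")  # 3 x width x height raw array
  arr <- array(as.integer(px), dim = dim(px))
  rg  <- arr[1, , ] - arr[2, , ]                        # opponent axis R - G
  yb  <- 0.5 * (arr[1, , ] + arr[2, , ]) - arr[3, , ]   # (R + G)/2 - B
  sqrt(sd(rg)^2 + sd(yb)^2) + 0.3 * sqrt(mean(rg)^2 + mean(yb)^2)
}
```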
Both the Historical and the Contemporary Hic et Nunc datasets are preprocessed the same way, downscaling images to 160000-pixel bitmaps (400x400 in the case of a perfect square) while retaining aspect ratio. Smaller images down to 50% of that size are allowed (but not upscaled); anything smaller is discarded. Another option would be to resize all images to identical squares, but that would distort the composition of wide or tall artworks. The aspect differences, the size differences resulting from integer division of the 160000, and the inclusion of smaller images are all controlled for in the next step. The assigned file size of a compressed image (or its transformation) is actually the mean of two compressions, of the original and of its 90 degree rotation. The compression ratios are calculated in terms of the respective downscaled bitmaps. Furthermore, one of our visual transformations is the Fast Fourier Transform; given its square-shaped output components, the transform is applied twice, on the original and its rotation, and the resulting components are also additionally rotated for compression.
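The rescaling arithmetic is straightforward; ImageMagick's area geometry suffix ("@") appears to do this in one step via magick, though the explicit version is shown for clarity:

```r
# Dimensions that preserve aspect ratio at a target pixel count.
fit_dims <- function(w, h, target = 160000) {
  s <- sqrt(target / (w * h))
  round(c(w, h) * s)          # e.g. c(400, 400) for a square input
}

library(magick)
img <- image_scale(image_read("example.jpg"), "160000@")  # area geometry
```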
This approach to homogenizing the images is far from perfect, as the size of the originals that these photographs and scans represent may well range from the size of a postcard to that of an altar piece. Not only that, but the latter may well be represented by a lower-resolution image than the former, with better or worse color grading, etc. The dataset contains only sparse metadata on original size, and we have no way to systematically quantify this issue at scale; this remains a limitation of the current study. However, in a sense, our approach is not very different from making art historical inferences by going through and looking at large visual resource collections, much like students of art history examining art historical survey literature, or an art connoisseur training their eye on a large comprehensive 35mm slide collection of a photo library, which historically served exactly this purpose.
Since we are interested in making comparisons over time, the Historical dataset was also filtered for items with an identifiable creation date. We carried out some preprocessing of the date metadata, retrieving four-digit years from descriptions that included them. However, much of earlier art is tagged with heterogeneous and approximate descriptions such as "early XVI century". Discarding these made the earlier end of this dataset even smaller, which is why we limit some analyses to the 19th-20th century. The Historical dataset is also likely biased in a number of ways. It features primarily Western art, most of the data is concentrated in the 20th century, the metadata quality varies and is of unidentifiable origin, and the sampling mechanisms are unknown but likely biased by the archival and selection practices of the various museums and collections these reproductions originate from, and of the websites that house them.
In general, in the case of art collections or databases like Wikiart and art500k, it is important to be clear that these consist of small, curated, often biased samples of the art of some place and period - and as such, they represent the historiography of art first and the actual history of art second (see also Lee et al. 2020). This is not unlike the case of linguistic corpora, which also consist of small curated samples of utterances (in diachronic corpora, often in the form of newspaper articles or books) from a much larger population of all utterances produced by all speakers of a given language over some period of time. In short, it is important to acknowledge that when we make claims here about the history or dynamics of visual art, we are only referring to information derived from the sample - but we make the assumption that the sample is reasonably representative and as such informative of the population of Western art in the time periods we cover. This means that the figures depicting historical changes may look different if more data were available. However, this is not a weakness of our approach, but an opportunity to use it for the study of dataset bias. Indeed, we are confident that future research will confirm our results in principle, while making headway by enhancing the approach with larger, more complete datasets as they become available.
Adjustment of temporal resemblance
As an important technical detail, we need to adjust the distances in the temporal resemblance model reported in the Results section, due to two biasing factors in the Historical dataset: the boundedness of the dataset (works from its final years have a higher likelihood of having neighbors in the past, and vice versa) and its imbalance (much more data in some decades than in others). The adjustment works as follows. We calculate temporal distances for all works between 1800-1990. We limit ourselves to this period, where the amount of works per unit of time is more consistent, compared to other parts of the corpus where there is less data but also more variation between years in terms of data points. When finding nearest neighbors, the entire dataset is still taken into account, e.g. a work from 1801 can theoretically have one of its nearest neighbors in 1400 or 2018. We then fit a generalized additive regression model, predicting distance by year. The residuals from that model, still on the scale of years, approximate temporal resemblance given the shape and bounds of the data.

Table S1 lays out all the transformations used in the version of the compression ensemble used in this paper. The flood fill color is determined by finding a primary color that is least frequent in the distribution of pixel colors of the input image.
Compression ensemble pipeline technical details
This section gives a step-by-step overview of the pipeline used in this work implementing the compression ensemble approach. The pipeline was implemented in R (version 4.1.2). The visual transformations and the export into various compression formats are handled by the magick R package (2.7.3), which is a wrapper for the Imagemagick cross-platform software suite (6.9.12.3).
The compressed file sizes are recorded as the mean of two sizes: that of the compression of the image in the original orientation, and that after 90 degree rotation (as these values usually differ slightly due to the way image compression works). For the Fast Fourier Transform, an additional rotation step is applied before the FFT (as the FFT in Imagemagick uses the width of the original image for the dimensionality of the square-shaped outputs).
We recommend carrying out the image compression steps in memory, as saving all the files to disk would be computationally inefficient. Parallelization of the pipeline is recommended if the dataset is large. Note that due to the architecture of Imagemagick, the file sizes of image outputs in different formats may vary slightly between operating systems. As long as all images in a given dataset go through the pipeline on the same system, this should not be a problem though. We carried out all processing on an Nvidia DGX Station A100 running Red Hat Enterprise Linux (AS release 4) hosting a Docker running Ubuntu 20.04.3.
The vector of compression ratios and statistical transformations is calculated as follows.
• Import the image file, convert it to RGB colorspace, and downsize it so the total number of pixels is (as close as possible to) 160000. Smaller images are not upsized; if the original image is smaller than 50% of that, stop processing.
• Record the RGB bitmap file size of this normalized image as the reference size.
• Compress the image using the baseline compression algorithm and record the ratio of the size of this file to the reference size. This compressed size is the baseline value that all transformations compressed with the same algorithm will be compared to.
• Also compress the image using the other compression algorithms (running the lossy algorithm with both quality parameter 0 and 100), and record these file sizes divided by the reference size as ratios.
• Downscale the image to 40, 20 and 10% of its normalized size, and record the compression ratios with the different algorithms.
• Transform the image using all the visual transformations (see list above), compress with the baseline algorithm, and record the ratio by dividing the compressed file size by the baseline compressed size.
• Also transform the image using a subset of the visual transformations, compress with a second algorithm, and record the ratio by dividing the compressed file size by the corresponding untransformed compressed size.
• Carry out the statistical transformations (see list above).
• Concatenate the compression ratios and statistical transformation values into a vector (a condensed sketch of this loop is given below).
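A condensed sketch of the loop in R with magick, computing sizes in memory as recommended above; the transformation list here is a small illustrative subset, not the full ensemble:

```r
library(magick)

# Mean of two compressed sizes: original orientation and 90-degree rotation.
compress_size <- function(img, fmt = "png") {
  mean(c(length(image_write(img, format = fmt)),
         length(image_write(image_rotate(img, 90), format = fmt))))
}

ensemble <- function(img, transforms) {
  bmp  <- length(image_data(img, channels = "rgb"))  # raw RGB bitmap bytes
  base <- compress_size(img)                         # baseline compression
  ratios <- vapply(transforms, function(tf) compress_size(tf(img)) / base,
                   numeric(1))
  c(baseline = base / bmp, ratios)
}

transforms <- list(blur  = function(i) image_blur(i, 0, 4),
                   canny = function(i) image_canny(i),
                   flip  = function(i) image_flip(i))

img <- image_scale(image_read("example.jpg"), "160000@")
v   <- ensemble(img, transforms)
```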
Note that the compression ensemble approach is general and the specific number of transformations is unimportant. As demonstrated in the Evaluation section in the main text, a handful of transformations may be enough for some tasks. Here, we opted to implement a fairly large array of transformations and additional compressions with different sizes and algorithms, in order to explore the transformation space in a relatively comprehensive manner. If, unlike here, computation time is important, a smaller set of transformations is likely a better choice.

Figure S1 illustrates individual transformations and how they constitute the UMAP shown in Figure 2.

Figure S1: The transformations illustrated in Figure 1 mapped onto the UMAP projection of Figure 2, here as a heatmap (for better visibility), where each cell represents the mean value of the points in a given area of the UMAP. Blue stands for low, gray for mean and red for high values in a given compression ratio or variable. Date of creation is added as an additional map to the bottom right.

Figure S2 supplements the discussion on the art classifier in the Evaluation section in the main text (see also Figure 3). Figure S3 supplements Figure 5, displaying all the artworks as larger thumbnails.

Figure S3: All artworks on the main panels of Figure 5, centered on their coordinates on the vertical and horizontal axes.

Figure S4: This figure exemplifies the results of vector operations discussed in the main text as a form of latent space navigation, and supplements Figure 2. Each cell is the result of the following operation: take the compression ensemble vectors of the first image in a given row and column, sum them element-wise, and find the cosine-nearest neighbor in the ensemble for that new vector. This figure shows how vector addition can be used to navigate the space of aesthetic complexity. | 2022-05-23T01:15:51.814Z | 2022-05-20T00:00:00.000 | {
"year": 2022,
"sha1": "36b582a824ad89eb1bbaac5fc1f0f71916eec84e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "36b582a824ad89eb1bbaac5fc1f0f71916eec84e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
265932317 | pes2o/s2orc | v3-fos-license | Prevalence and Antibiotic Resistance Pattern of Diarrheagenic Escherichia coli Pathotypes Among Children under 5 Years in Khuzestan, Iran
Background: Diarrhea is a life-threatening cause of high mortality, especially among children living in areas with poor sanitation. Enterobacteriaceae is one of the serious causes of bacterial diarrhea in children and adults. In this family, infection with diarrheagenic Escherichia coli (DEC) pathotypes in children is associated with extensive health risks and is of particular importance. In this study, we compared the distribution of pathotypes, epidemiological patterns, and antibiotic resistance of DEC in diarrheal and non-diarrheal groups among children less than 5 years. Methods: In this study, 303 stool samples were collected from patients admitted to Golestan hospital in Ahvaz and Dr. Ganjavian hospital in Dezful, Khuzestan. To this end, 200 samples from children with diarrhea (case group) and 103 samples from healthy children (control group) were examined. DEC was characterized by polymerase chain reaction (PCR) for each stool sample, and DEC isolates were tested for resistance against different antibiotic agents to identify the prevalence of multidrug-resistant (MDR) strains in both groups. Results: DEC was found in 24% (48 out of 200) of the children with diarrhea and 3.8% (4 out of 103) of the healthy children. Enteroaggregative E. coli (EAEC) was the DEC most frequently associated with diarrhea (32 out of 48, 66.6%), followed by enteropathogenic E. coli (EPEC) (11 out of 48, 22.9%) and enterotoxigenic E. coli (ETEC) (5 out of 48, 10.4%) in children with diarrhea. Four DEC isolates were identified in healthy children: EAEC (2 out of 4, 50%) and EPEC (2 out of 4, 50%); no enteroinvasive E. coli (EIEC) or enterohemorrhagic E. coli (EHEC) strains were found in either group in this study. In general, DEC isolates exhibited high resistance to ceftriaxone and cefotaxime, and 33 (63.4%) DEC isolates were MDR. Conclusion: A high prevalence of DEC strains was observed in both the group of children with diarrhea and the healthy children. Accordingly, further attention should be paid to continuous monitoring of the prevalence and antibiotic resistance patterns of diarrheal bacterial isolates among children and the whole community.
Introduction
Diarrhea causes the death of 533 768 children up to 5 years old globally (1), accounts for approximately 9% of all deaths among children under 5 worldwide, and kills more than 1300 young children every day (2). Owing to unsanitary living conditions, including contaminated water sources, an unsanitary environment, and insufficient education in economically poor countries in Africa, Asia, and South America, these regions are more susceptible to diarrheal diseases (3). Infectious diarrhea is caused by different groups of pathogens. The Enterobacteriaceae family causes numerous mild to fatal contagious diseases, especially a variety of gastrointestinal disorders, worldwide. Due to the increase in antibiotic resistance and the easy transfer of antibiotic-resistance genes among members of the Enterobacteriaceae family, it is difficult to treat patients with these infectious diseases with conventional antibiotics (4,5).
Diarrheagenic Escherichia coli (DEC) is one of the most prevalent causes of bacterial diarrhea in children living in less developed countries (6). Although various strains of E. coli exist in a harmless form as normal flora in the digestive system of many animals as well as humans, some strains of this bacterium cause serious gastrointestinal and extraintestinal disorders (7).
DEC strains are defined in 5 main subgroups according to their pathogenic characteristics: (a) Enterotoxigenic E. coli (ETEC) is the main cause of traveler's diarrhea and also causes diarrhea in malnourished infants. ETEC strains increase fluid and electrolyte excretion by producing heat-labile and heat-stable toxins (8,9). (b) Enteroinvasive E. coli (EIEC) are closely related to Shigella species in terms of biochemical, genetic, and pathogenic characteristics. Similar to Shigella, EIEC also causes dysentery in humans (10). (c) Enteroaggregative E. coli (EAEC) form an aggregative adherence (AA) pattern when grown on the HEp-2 cell line. EAEC is one of the main causes of acute and chronic diarrhea in children and adults, especially in developing countries (11,12). (d) Enteropathogenic E. coli (EPEC) strains attach to the enterocyte membrane through a process called localized adherence (LA). EPEC strains are divided into two groups based on the presence of bundle-forming pili (BFP): the first group, typical EPEC, has BFP, while the second group, atypical EPEC, does not (13,14). (e) Shiga toxin-producing E. coli (STEC) strains, the main reservoir of which is raw meat, adhere to the colon mucosa and cause diarrhea by producing verotoxin. Serotype O157:H7 is one of the prominent STEC strains, causing outbreaks and sporadic cases of acute diarrhea worldwide (15)(16)(17). Statistics on DEC strains causing diarrhea in children are scattered, because accurate identification of DEC strains is not routinely performed in most countries (18).
To better deal with outbreaks of DEC in communities, it is necessary to conduct tests for the molecular identification of pathogenic strains, and further tests to investigate their antibiotic resistance patterns and transmission. The indiscriminate and increasing use of antibiotics and the horizontal transfer of resistance genes by mobile genetic elements lead to the emergence of bacteria that are resistant to almost all antibiotic families (19,20). The scattered studies conducted in Iran have indicated that, due to the resistance of the strains to first-line antibiotic agents, the treatment of DEC infections is increasingly problematic (20,21).
In this study, we describe the distribution of DEC strains and their epidemiological characteristics in children with diarrhea compared to children without diarrhea who were referred to medical centers in Khuzestan, southwestern Iran. Furthermore, by conducting antibiotic susceptibility tests, this study warns about the dire situation of multidrug resistance (MDR) among DEC strains.
Sampling
From September 2015 to October 2016, hospitalized children up to 60 months of age who visited Golestan hospital in Ahvaz and Dr. Ganjavian hospital in Dezful, Khuzestan province, were selected for sample collection. Stool samples containing E. coli strains were analyzed in two groups: diarrheal (case group) and non-diarrheal (control group). Hence, 200 samples from children with diarrhea and 103 samples from children without diarrhea, who were referred to the hospitals for reasons other than digestive disorders, were examined.
Only one stool sample was collected from each child and analyzed. Samples from children treated with antibiotics in the last 28 days, or infected with Salmonella, Shigella, or parasites, were excluded from the study.
Specimen Collection, Isolation, Culture, and Identification of Escherichia coli
This study included 303 stool samples containing E. coli from children up to 60 months of age. Stool samples were transported in clean disposable boxes for the relevant tests. Fecal samples were cultured directly on MacConkey agar plates (Merck; Frankfurt, Germany). The culture media were then incubated overnight at 37 °C. After that, lactose fermenters were subcultured on eosin methylene blue medium (Merck; Frankfurt, Germany). The presence of E. coli strains in the collected stool samples was confirmed by performing standard biochemical tests, including oxidase (negative), methyl red (positive), indole (positive), Voges-Proskauer (negative), citrate (negative), catalase (positive), the pattern of carbohydrate utilization in triple sugar iron agar, and urease (negative) (22).
Multiplex Polymerase Chain Reaction
The boiling method was used to extract E. coli DNA for template preparation. Marker genes were used to identify the DEC pathotypes: escV for EPEC detection, stx1 and stx2 for STEC detection, elt, estIa, and estIb for ETEC detection, invE for EIEC detection, and aggR and astA for EAEC detection. The PCR conditions for identification of the DEC strains, including primer sequences, product sizes, and annealing temperatures, followed the method implemented by Müller et al, using a thermocycler (Bio-Rad) (23). After PCR, the size of each amplicon was determined by electrophoresis on 1.5% agarose gel with a molecular marker (100 bp Ladder RTU, Sinaclon, Iran). The amplified products were then visualized under ultraviolet light in a gel documentation system (Uvitec, Cambridge, UK).
Prevalence of Diarrheagenic Escherichia coli Strains Among Diarrheal and Non-diarrheal Samples
Out of 303 stool samples containing E. coli strains, 200 isolates (66%) were obtained from children with diarrhea (cases) and 103 isolates (34%) from children without diarrhea (controls). In total, 52 DEC isolates were detected across both groups: 48 (92%) from children with symptomatic diarrhea and 4 (8%) from asymptomatic children. However, neither EIEC nor STEC strains were detected in this study.
The detection rate of DEC strains in male samples (55.7%) was higher than in female samples (44.2%). Furthermore, children under two years had the highest prevalence of diarrheal diseases (50%) and of DEC strains (52.1%). The frequency by age and gender of children in both the diarrheal and non-diarrheal groups, and for each of the DEC strains, is presented in Table 1.
EAEC was the most common pathogenic strain found in both groups (34/52, 65.4%). In the case group, the highest prevalence of EAEC was among children aged 12-23 months (12/48, 25%), whereas the prevalence of ETEC was highest in children aged 0-11 months (2/48, 4.2%). Furthermore, the lowest prevalence of all pathotypes was among children aged 48-60 months. Figure 1 shows the frequency of each DEC strain in both groups.
Antibiotic Resistance of Diarrheagenic Escherichia coli Isolates
All isolates were sensitive to imipenem (n = 52, 100%). The highest rates of resistance were observed against cefotaxime (68.7%), ceftazidime (64.6%), and ceftriaxone (66.7%) in children with diarrhea. In non-diarrheal children, none of the isolates were sensitive to all antibiotic discs tested in this study, and all 4 DEC isolates were resistant to ceftriaxone. Of the 52 DEC isolates found, 33 (63.4%) were MDR (resistant to more than three antimicrobial drug families).
Discussion
Pathogenic strains of E. coli are transmitted to the host in different ways, cause serious digestive diseases in humans, especially children, easily penetrate the human food chain, exist in contaminated water, and can be transmitted through the fecal-oral route (9,11,15). DEC is one of the major enteric pathogens causing diarrhea in children in developing countries. In this study, out of a total of 303 E. coli samples, 17.2% (n = 52) were positive for DEC infection. In a study conducted by Samal et al in Orissa, India, the prevalence of enteric bacterial pathogens in hospitalized patients with diarrhea was reported as 75.5% for E. coli strains and 13.3% for DEC strains (25). In the present study, the frequency of DEC pathotypes was higher in the diarrheal samples (48 out of 200, 24%) than in the non-diarrheal group (4 out of 103, 3.9%). In a study in Brazil, DEC pathotypes were identified in 18.0% of children with diarrhea and 19.0% of control subjects (26).
The highest prevalence of diarrheal diseases and of DEC strains was observed among male children, with 57% and 55.7%, respectively. Moreover, the highest prevalence of diarrheal diseases and DEC strains was observed among children under 2 years old, with 53% and 50%, respectively. In the group of children with diarrhea, the frequency of DEC strains in children aged 48-60 months was lower than at younger ages (4 out of 48, 8.3%), but in the control group at this age, the frequency of DEC isolates was the highest (2 out of 4, 50%). E. coli strains can colonize and form biofilms on the mucosal surfaces of hosts such as animals and humans. EPEC strains cause diarrheal diseases by forming attaching and effacing lesions and firmly adhering to the surface of intestinal cells (13). EAEC strains are defined by manifesting the aggregative adherence pattern on epithelial cells in cell culture. EAEC strains cause diarrhea via the aggR gene, which regulates biofilm formation and aggregative adherence factors that mediate direct attachment to intestinal cells (11,27), while the pathogenicity of ETEC strains is due to the secretion of heat-labile and heat-stable enterotoxins (8). In the present study, the PCR method was used to identify DEC strains (Figure 2). EAEC was the most common DEC pathotype diagnosed in both the case and control groups, with frequencies of 32 (16%) and 2 (1.9%), respectively. The present study showed the presence of EAEC and EPEC virulence genes even in children without symptoms of diarrhea.
The prevalence and distribution of infections caused by DEC strains are diverse worldwide (28). In the study by Khairy et al, EAEC (47%) was the predominant pathotype of DEC isolated from children with diarrhea in Egypt (29). The highest frequency of DEC observed both in the diarrhea group and in all samples was for EAEC, followed by EPEC and ETEC, with the numbers presented in Table 1. The results of the present study are broadly similar to a study from India in which EAEC was the predominant DEC strain found in the diarrhea group (69%), followed by ETEC and EPEC strains (30).
In this study, no EIEC or STEC strains were found among the samples. Moharana et al in India also reported no EIEC or STEC strains among diarrheal samples, with ETEC being the most common DEC in their study (40 out of 77) (31). However, studies by Lima et al in Brazil and Eltai et al in Qatar on the epidemiology of DEC among children showed that EPEC is the most common DEC in children (32,33).
In the present study, the antibiotic resistance of DEC isolates from children who were admitted to the hospital due to diarrhea was tested and compared to that of DEC isolates from healthy children, as listed in Table 2. In our study, the highest rates of antibiotic resistance were observed against cefotaxime, ceftriaxone, and ceftazidime. High resistance to third-generation cephalosporins in DEC strains has also been reported in other studies. Different prevalence rates of MDR in DEC strains have been reported worldwide. In the present study, MDR strains were identified in 63.4% of DEC isolates. In the diarrheal group, the prevalence of MDR strains (31 out of 48, 64.6%) was higher than in the control group (2 out of 4, 50%). Moreover, EPEC (8 out of 11, 72.7%) had the highest prevalence of MDR among DEC pathotypes, compared to EAEC (20 out of 31, 64.5%) and ETEC (3 out of 5, 60%) strains in the case group. Furthermore, the prevalence of MDR in DEC isolates has been reported as 40%, 50%, and 66.7% in studies from Qatar, Iran, and China, respectively (33)(34)(35).
Conclusion
In the present study, a high prevalence of EAEC strains was observed in children under 5 years in Khuzestan province.
Since a high prevalence of antibiotic resistance among DEC strains was detected in both the diarrheal and non-diarrheal groups in this research, preventive measures and further studies are suggested to reduce the prevalence of DEC strains. Furthermore, the information obtained from this study can be used to identify emerging antimicrobial resistance and to develop appropriate treatment guidelines and interventions.
Table 1. Age and Gender Distribution of Children with DEC in Case and Control Groups. Note. EAEC: Enteroaggregative Escherichia coli; EPEC: Enteropathogenic Escherichia coli; ETEC: Enterotoxigenic Escherichia coli; DEC: Diarrheagenic Escherichia coli. | 2023-10-26T15:19:04.725Z | 2023-06-29T00:00:00.000 | {
"year": 2023,
"sha1": "fcddb662282fae3bf0f39b37e67c246535c266ca",
"oa_license": "CCBY",
"oa_url": "https://ajcmi.umsha.ac.ir/PDF/ajcmi-3447.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "03d3d540b14f2262f69e2fd78ab7ffbe66bb3e17",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
89662882 | pes2o/s2orc | v3-fos-license | Physiological changes in sugarcane in function of air and ground application of fungicide for orange rust control
The application of fungicides under different operating conditions is a usual practice for maintaining the yield potential of sugarcane varieties considered susceptible to orange rust; however, the physiological effects provided by the different application methods are unknown. The objective of this study was to evaluate the photosynthetic responses (gas exchange and chlorophyll content) of the SP81-3250 sugarcane variety under different operational conditions of aerial and ground application of fungicide for orange rust control. Two applications of fungicides of the chemical groups strobilurins and triazoles were carried out in the experimental units in different treatments. The aerial applications used two application rates (30 and 40 L ha-1) and three nozzle orientations (0°, 90° and 135°), while the ground application used 200 L ha-1 with flat fan spray nozzles with air induction (AI11004-VS). Gas exchange evaluations were performed with an IRGA, and chlorophyll a and b contents with a chlorophyll meter. Data were analyzed using Student's t-test for independent samples, at 0.05 significance. The aerial application provided better photosynthetic responses and chlorophyll a and b contents in the leaf blade compared to the ground application. Significant differences were detected in gas exchange and chlorophyll content between application rates and between angulations of the spray nozzles on the boom. Fungicide applications provided increments of more than 19 t ha-1 compared to the control, depending on the spraying technique employed. Aerial application with 30 L ha-1 and 0° of deflection is a viable option to provide safer applications as a function of the larger droplet size.
INTRODUCTION
Sugarcane (Saccharum spp.) stands out as one of the main Brazilian agricultural crops, mainly as a source of energy biomass. On the other hand, the recent finding of sugarcane orange rust (Puccinia kuehnii (W. Krüger) EJ Butler) in the region of Araraquara-SP/Brazil has been worrying producers and technicians because of the damage it can cause to the crop (BARBASSO et al., 2010). According to Araújo et al. (2013), in several countries, including Brazil, susceptible varieties (e.g., RB72454, SP89-1115, SP84-2025, SP81-3250, SP77-5181, CTC 9 and 15) had more than 40% yield reduction. Zhao et al. (2011) stated that this disease reduces the development and productivity of the crop through lower leaf chlorophyll content, carbon fixation efficiency, stomatal conductance, net photosynthetic rate and leaf transpiration. It is widespread throughout the state of São Paulo and present in the main producing regions of Brazil (CHAPOLA, 2013).
The orange rust etiological agent, P. kuehnii, is a biotrophic fungus with a narrow host range and is one of the main threats to Brazilian sugarcane fields, attacking mainly plants of the genus Saccharum. The orange rust lesions progress rapidly and rupture the leaf epidermis, forming pale orange pustules observed mainly on the abaxial face of the leaves, which facilitates their identification in the field (GLYNN et al., 2010). The preventive application of fungicides in susceptible varieties, during the periods favorable to the development of the disease, has been shown to be effective in maintaining yield potential (MARGAREY, 2008).
Strobilurin compounds act by inhibiting the mitochondrial respiration of fungal cells, blocking electron transfer between cytochromes b and c1 at the Qo site and thereby affecting the production of ATP (OLIVEIRA, 2016). These compounds act as preventive inhibitors of spore germination and also present curative and eradicant action, preventing the development of fungi in the initial and post-germination stages (RODRIGUES, 2006).
Research has shown that some fungicides, especially those of the strobilurin group, can also promote physiological changes in the plant, such as increases in chlorophyll content, nitrogen assimilation and photosynthetic rate. In addition, they can contribute directly to greater tolerance of biotic and abiotic stresses, owing to their action on the metabolism of abscisic acid and antioxidant enzymes, which would consequently increase productivity (RODRIGUES, 2009; JULIATTI et al., 2012; CARRIJO, 2014).
Studies with various crops, such as soybean, wheat and common bean, report the effect of strobilurins on plant physiology (RODRIGUES et al., 2009; LENZ et al., 2011; DEMANT; MARINGONI, 2012). However, few studies have examined the effect of strobilurins on sugarcane with regard to increases in photosynthetic activity, gas exchange, chlorophyll content and yield. In addition, there are several reports in the literature of inefficient crop spraying of phytosanitary products, through either excess or lack of active ingredient on biological targets. Application techniques must provide the correct deposit of the droplets generated during spraying on biological targets, and a better understanding of spraying equipment and plant architecture is necessary in order to obtain maximum efficiency while avoiding the contamination of adjacent areas (VAN ZYL et al., 2013; CUNHA, 2014).
Thus, this study aimed to evaluate the photosynthetic responses, chlorophyll a and b contents, biometric variables and yield of the SP81-3250 variety, under different operational conditions of aerial and ground application of fungicides in the management of sugarcane orange rust.
MATERIAL AND METHODS
The field study was carried out in commercial areas cultivated with the SP81-3250 sugarcane variety, belonging to the Company of Sugar and Alcohol of Minas Gerais (CMAA), located in Uberaba, MG, Brazil. The climate of the region is classified as Aw according to Köppen (1948), that is, tropical with a dry season during the winter. The farm is located at the geographic coordinates 19º24'45" S and 48º9'46" W, at 803 m above mean sea level. The crop was planted on July 30th, 2011, spaced 1.5 m between rows and adapted to mechanical harvesting. During the applications, the crop was in its fourth-year ratoon.
Fungicide application details
The timing of the fungicide applications was defined through inspections in the field, especially when the weather conditions were favorable for disease development. Two applications were carried out, with the sugarcane plants at the phenological stages of tillering (first application) and crop establishment (second application), according to Gascho and Shih (1983).
The first and second applications of fungicides were performed on January 29th and March 23rd, 2015, respectively, due to high natural infection of sugarcane orange rust. The second application followed the same methodology as the first application performed in January. A third fungicide application was not necessary before the sugarcane was harvested on October 12th, 2015, as the crop had completed its cycle.
All fungicide treatments are detailed in Table 1. For aerial fungicide applications, two application rates (30 and 40 L ha-1) and three deflection angles of the nozzles on the spray boom were used. The angles were set in relation to the flight line: 0º (parallel, pointing straight back), 90º (perpendicular, pointing down) and 135º (forward into the wind), producing droplets initially classified as coarse, medium and fine, respectively. Applications performed at the 90º deflection angle were considered the standard by the applicators and were evaluated only in the second application.
For ground application, 200 L ha-1 was sprayed through flat fan spray nozzles with air induction, producing extremely coarse droplets. This treatment was considered the one most used by the company and was evaluated only in the first application, because the sugarcane was 1.5 m in height by the time of the second application, which did not allow the use of ground sprayers.
Treatment 5, regarded as the sugar mill standard, first received the ground application and then an aerial application in a new experimental area. The other treatments were similar in both applications. Additionally, there was one treatment that did not receive application of fungicide (control). In ground applications, a Falcon hydraulic sprayer (Jacto S/A, Pompéia, SP, Brazil), coupled to the hydraulic system of a tractor, with a 14 m wide boom, 800 L tank capacity and an electronic spray controller, was used. The nozzles used were AI 11004-VS (Spraying Systems Co., Wheaton, IL, USA), spaced 0.5 m from each other and positioned 0.4 m above the canopy. The application was performed at 7 km h-1 and 207 kPa of pressure.
In aerial applications, an EMBRAER EMB 202A agricultural aircraft (Embraer, Botucatu, SP, Brazil) had its spray boom equipped with 43 hollow cone nozzles, disc #8 and core #45 (Spraying Systems Co., Wheaton, IL, USA). The flight speed and flight height were 168 km h-1 (105 mph) and 3 m above the canopy, respectively. The pressure during the applications was kept at 207 kPa for 30 L ha-1 and 276 kPa for 40 L ha-1.
Experimental plots were 100 m long x 48 m wide for aerial applications and 100 m long x 7 m wide for ground applications, whose widths corresponded to three crosswind passes of the aircraft and a half-boom swath of the ground sprayer, respectively. The plots that did not receive application of fungicides were 100 m long x 9 m wide. Samples were collected in the central area of each plot, measuring 90 x 16 m, 90 x 5.0 m and 90 x 7.0 m for aerial, ground and no application, respectively. The differences among plot dimensions were due to the application methods and area format.
The environmental conditions of temperature (°C), relative air humidity (%) and wind speed (km h-1) were recorded during the applications using a portable weather station (Kestrel® 4000). The conditions varied between 28 and 30 ºC, 50 and 57%, and 4 and 6 km h-1, respectively.
Evaluation of gas exchange and chlorophyll a and b content
Evaluations of gas exchange and chlorophyll a and b contents were carried out, respectively, with an infrared gas analyzer (IRGA; LCpro-SD model, ADC BioScientific Ltd.) and a chlorophyll meter (ClorofiLOG CFL-1030, Falker Agricultural Automation). These evaluations occurred after the first application of the fungicide, on February 3rd, 10th and 23rd, 2015, and after the second application, on March 31 and April 7 and 18, 2015.
For the gas exchange evaluations, an artificial light source was used on the chamber so that, during the measurements, all the leaves received a photon flux density of 1200 µmol m-2 s-1. For each leaf analyzed, the equipment was allowed to stabilize and, after about one minute, three consecutive readings were collected in seven plants per treatment, totaling 21 samplings. The same evaluator took all the readings to reduce sampling error. The evaluations were conducted under cloudless sky, between 8 am and 10 am, so that there were no temperature extremes. In each plant, the pointer leaf was sampled in the upper canopy and, in the middle third, the first fully expanded leaf with apparent dewlap (leaf '+1') (SALES et al., 2012; ARANTES et al., 2013).
The ratios of instantaneous water use efficiency (W/E), intrinsic water use efficiency (W/gs) and carboxylation efficiency (A/Ci) were calculated from the IRGA data. These ratios reflect the photosynthetic performance of a plant, allowing a better evaluation of its gas exchange and physiology.
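To make the derivation of these ratios concrete, the minimal sketch below (not from the original study; variable names and values are illustrative assumptions) computes W/E, W/gs and A/Ci from a single set of IRGA readings:

```python
# Illustrative derivation of the three efficiency ratios from raw IRGA
# readings. All values are assumed example numbers, not study data.

A = 28.4    # net carbon assimilation rate (umol CO2 m-2 s-1)
E = 4.9     # transpiration rate (mmol H2O m-2 s-1)
gs = 0.21   # stomatal conductance (mol H2O m-2 s-1)
Ci = 110.0  # intercellular CO2 concentration (umol mol-1)

W_E = A / E     # instantaneous water use efficiency (W/E)
W_gs = A / gs   # intrinsic water use efficiency (W/gs)
A_Ci = A / Ci   # instantaneous carboxylation efficiency (A/Ci)

print(f"W/E = {W_E:.2f}, W/gs = {W_gs:.1f}, A/Ci = {A_Ci:.3f}")
```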
The chlorophyll a and b contents, represented by the ICF (Falker chlorophyll index), were evaluated with the chlorophyll meter. The readings were made randomly, in seven plants in each treatment.
For chlorophyll assessment, the ClorofiLOG uses two emitters at wavelengths close to the absorption peaks of each type of chlorophyll (λ = 635 and 660 nm) and one emitter at a near-infrared wavelength (λ = 880 nm).
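The text does not give the proprietary formula behind the ICF, so the following sketch is only a hypothetical illustration of how a dual-wavelength transmittance meter of this kind can derive a relative chlorophyll index; the SPAD-style log-ratio and the constant k are assumptions, not the ClorofiLOG's actual algorithm:

```python
import math

# Hypothetical stand-in for a dual-wavelength chlorophyll index: red
# light (e.g. 635 or 660 nm) is strongly absorbed by chlorophyll, while
# the near-infrared reference (e.g. 880 nm) is barely absorbed, so the
# log-ratio of the two transmittances tracks chlorophyll content.
def chlorophyll_index(t_red, t_nir, k=100.0):
    return k * math.log10(t_nir / t_red)

print(round(chlorophyll_index(t_red=0.18, t_nir=0.62), 1))  # ~53.7
```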
Sugarcane yield
The biometric assessments of the sugarcane were carried out on October 12, 2015, three days before the beginning of the harvest in the areas (sugarcane at 12 months, 4th-year ratoon), according to the method proposed by Martins and Landell (1995). The number of stems per linear meter was estimated by counting at 30 points in the useful area in order to determine the stand, counting only the stems suitable for industrialization. To determine stem length, the heights of 30 industrializable stems in the useful area were measured between the cut-off point and the breaking point of the stem, using a measuring scale. The stem diameters were measured in the useful area with a caliper, considering the lower third of 30 industrializable stems. For stem mass, 30 industrializable stems were sampled in the useful area and weighed with the aid of a portable digital scale.
From these data, and considering the stem density equal to 1, it was possible to estimate the productivity, expressed in tons of cane per hectare (TCH), using the following mathematical expression: TCH = D² x S x L x (0.007854/Fs), where D = stem diameter (cm); S = number of stems per linear meter; L = stem length (cm); and Fs = furrow spacing (m).
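As a quick check of the expression, the short Python function below implements it; plugging in the treatment means reported later (Table 7, 30 L ha-1 at 135° of deflection: 2.53 cm diameter, 11.10 stems per meter, 209.37 cm length, 1.5 m furrow spacing) reproduces the reported yield of roughly 77.9 t ha-1:

```python
def tch(d_cm, stems_per_m, length_cm, furrow_spacing_m):
    """Estimated yield in tons of cane per hectare (TCH), assuming a
    stem density of 1, as in the expression given above."""
    return d_cm ** 2 * stems_per_m * length_cm * (0.007854 / furrow_spacing_m)

# Treatment means from Table 7 (30 L ha-1, 135 degrees of deflection):
print(round(tch(2.53, 11.10, 209.37, 1.5), 2))  # 77.89, i.e. ~77.9 t ha-1
```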
Statistical analysis
The results of the two application dates were considered independently and evaluated separately. The data were first submitted to presupposition analysis, using the Kolmogorov-Smirnov (KS) and Levene tests to assess the normality of the residuals and the homogeneity of the variances, respectively, at α = 0.01. The data were then analyzed using Student's t-test for independent samples at α = 0.05, using the SPSS Statistical Software, Version 17.0 (SPSS Inc., Chicago, IL, USA).
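The authors used SPSS; purely as an illustration of the same pipeline, the sketch below reproduces the presupposition tests and the t-test in Python with SciPy, on hypothetical arrays standing in for two treatments (the array names and simulated values are assumptions, not study data):

```python
import numpy as np
from scipy import stats

# Simulated stand-ins for 21 samplings per treatment (e.g. carbon
# assimilation readings for an aerial and a ground treatment).
rng = np.random.default_rng(1)
aerial = rng.normal(28.0, 2.5, 21)
ground = rng.normal(25.5, 2.5, 21)

# Presupposition analysis at alpha = 0.01: KS test on the pooled
# residuals for normality, Levene test for homogeneity of variances.
residuals = np.concatenate([aerial - aerial.mean(), ground - ground.mean()])
ks = stats.kstest(residuals, "norm", args=(0, residuals.std(ddof=1)))
lev = stats.levene(aerial, ground)

if ks.pvalue > 0.01 and lev.pvalue > 0.01:  # presuppositions hold
    t = stats.ttest_ind(aerial, ground)      # t-test at alpha = 0.05
    print(f"t = {t.statistic:.2f}, p = {t.pvalue:.4f}")
```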
RESULTS AND DISCUSSION
Among the physiological parameters, the transpiration rate (E, mmol H2O m-2 s-1) and the carbon assimilation rate (A, µmol CO2 m-2 s-1) of the leaves of SP81-3250 were the most expressive in terms of momentary photosynthetic efficiency evaluation.
The application of fungicides improved the momentary efficiency of gas exchange compared to the untreated areas. There was no difference in this efficiency or in carbon assimilation rates among the spraying techniques used in the last evaluation of the first application (Table 2). In the second application of fungicide, the rates of transpiration and carbon assimilation did not differ among the application technologies, mainly in the first evaluations.
Transpiration rate (E) and carbon assimilation (A) were generally higher when aerial application was used, in its different forms of spraying, especially in the first application. The use of fine droplets should be primarily considered in aerial applications to provide satisfactory coverage and uniform spray distribution. However, small droplets exposed to unfavorable climatic conditions, such as low relative humidity, high temperatures and high wind speeds, are more likely to evaporate and be lost by drift (VILLALBA; HETZ, 2010). Czaczyk et al. (2012) reported that coarser droplets can bounce, break up and slide off the leaves and reach other targets. Even so, areas treated with an application rate of 30 L ha-1 and the spray nozzles oriented at 135° of deflection generally produced better results of momentary gas exchange efficiency.
Changes in plant gas exchange parameters after the application of phytosanitary products are effects already reported in the literature (TORRES et al., 2012; CARRIJO, 2014; ZANDONADI et al., 2017). Besides the thermal effect that fungicide solutions can induce on treated leaves, Biggs (1990) had already pointed out that the application of triazole fungicides could induce changes in leaf transpiration and that these effects would persist for several days after application. The changes observed in the foliar transpiration rate in fungicide treatments with triazoles have been attributed to changes in potassium (K+) concentration in the stomatal guard cells, which make them turgid, allowing the stomata to open (TAIZ; ZEIGER, 2013).
Similar to the present results, the carbon assimilation rate (A), or photosynthetic rate, was also higher for treatments containing a triazole (tebuconazole) and a strobilurin (pyraclostrobin) in soybean treated with fungicides at the reproductive stage, with results persisting until the 17th day after the application of the fungicides (FAGAN et al., 2010). According to Martins (2011), mixtures of strobilurin + triazole (pyraclostrobin + epoxiconazole) increase the soybean transpiration rate more than a pure triazole fungicide. In sugarcane, the higher carbon assimilation rate persisted until the second evaluation, 15 days after application, which is understandable because of the fine droplets, mainly in the aerial applications of the first application, which provide satisfactory coverage and uniform distribution. The responses of carbon assimilation were equivalent to those of the transpiration rate; that is, where greater transpiration occurred, there was also greater carbon assimilation, since both variables depend directly on stomatal opening. In other words, if transpiration is high, it is because the stomata are open, and naturally larger amounts of CO2 can enter the leaf and be converted into assimilated carbon, which will ultimately increase final biomass production.
In relation to the instantaneous water use efficiency (W/E), when aerial application was used, at the different application rates and nozzle angle orientations, the averages were higher than those of the ground application (Table 3), differing from both the ground application and the control in the first application. In the second application, the averages were higher when the application rate of 30 L ha-1 was used in comparison with the 40 L ha-1 rate. This behavior may also reflect differences in the deposition of the fungicide spray mixture applied to the crop.
The instantaneous water use efficiency is characterized as the amount of water transpired by a crop to produce a certain amount of dry matter (SILVA; SILVA, 2007). Thus, when crops are more efficient in the use of water, they can produce a greater amount of dry matter per gram of transpired water. Efficient water use is directly related to the duration of stomatal opening because, while the plant absorbs CO2 for photosynthesis, water is lost by transpiration with variable intensity, depending on the potential gradient between the leaf surface and the atmosphere, following a gradient of water potentials (CONCENÇO et al., 2007). The instantaneous water use efficiency (W/E) is therefore the ratio between the net photosynthetic rate and the leaf transpiration rate at the time of evaluation, and any management or stress that reduces carbon assimilation (photosynthetic rate) and/or increases the transpiration rate will negatively affect it. Another photosynthetic parameter is the W/gs ratio, or intrinsic water use efficiency, which expresses the amount of carbon assimilated via photosynthesis per unit of water evaporated via the stomata (BELLOTI, 2012). For this measure, when aerial application was used, in its different spraying techniques, the averages were higher than those of the ground application (Table 4), especially in the first evaluation of the first application. However, the differences were not significant between the application rates and the nozzle angle orientations. In the third evaluation of the first application, this differentiation was no longer visible. In the second application, it was observed that, in general, the application rate of 30 L ha-1 presented higher averages compared to the 40 L ha-1 rate, with emphasis on the nozzle angle orientations of 135º and 0º, which presented the highest means in the three evaluations. The physiological effects of strobilurins in plants are related to their fungitoxic action, which in some way partially interferes with the respiration of the plant cell, consequently affecting the net photosynthesis of the plant, including potentiating carbon and nitrogen assimilation, as well as considerably increasing nitrate reductase activity (KÖHLE et al., 2002; RODRIGUES, 2009).
As plant gas exchange with the atmosphere is regulated by the stomata, the absorption of CO2 also promotes the loss of H2O, and decreasing this loss consequently restricts the entry of CO2 (SHIMAZAKI et al., 2007). Therefore, in order for plants to achieve higher water use efficiency, it is necessary to absorb the maximum of CO2 with the least possible loss of H2O (TAIZ; ZEIGER, 2013). However, as presented by Blum (2005), genotypic differences in the intrinsic water use efficiency (W/gs) are expressed when variations occur in plant water use, or in stomatal conductance (gs). As carbon assimilation (A) has a direct correlation with stomatal conductance, it is common for an increase in A/gs to result in lower biomass production and reduced yields. Regarding the instantaneous carboxylation efficiency (A/Ci), the averages were higher using aerial application at the different application rates and nozzle angle orientations, with no difference between the first and second evaluations of the first application (Table 5). Finally, for the same variable, in the third evaluation after the first application, there was no significant difference between the results of the spraying forms and the control.
In both evaluations of the first application, the result of the ground treatment did not differ from the control; however, in the first two evaluations the aerial sprays were superior to the ground application. In the second application, it was again shown that the application rate of 30 L ha-1 provided higher averages in comparison to the 40 L ha-1 rate; however, in the second evaluation it was not possible to verify this difference between the different forms of spraying. This behavior may be a reflection of the satisfactory coverage and uniform distribution of spray in the crop canopy. The lack of response in some treatments, in which no difference was observed from the control, is probably a consequence of poor regulation of stomatal opening or of the activity and/or affinity of the Rubisco enzyme for CO2, as discussed by Nason et al. (2007). IRGA results reflect a momentary condition of the sugarcane physiology in the two localities, and although two applications of fungicide are routine in susceptible varieties, their effects do not always last until the end of the cycle or translate into higher yields.
In relation to the chlorophyll a and b contents, the best values were obtained when using aerial application in comparison to ground application (Table 6). However, there was no difference for chlorophyll b in the second evaluation, nor for both chlorophylls in the third evaluation, of the first application. In the second application, there was no difference in the first two evaluations for the chlorophyll a and b contents among the different spraying techniques. The application rate of 30 L ha-1 at 0° of deflection stood out, providing the best means in the three evaluations.
According to Rambo et al. (2004), the chlorophyll content is a very important parameter for evaluating plant development, being used to differentiate plants with N deficiency from those with adequate levels of this element. The use of the chlorophyll meter for this evaluation is appropriate because it is a low-cost method that provides results faster than laboratory tests and does not require destruction of the leaves.
The application of phytosanitary products, in particular fungicides, provided heavier sugarcane stems compared to untreated stems, although the weights were similar across the spraying techniques used (Table 7). However, areas treated with 30 L ha-1 at 0° and 135° of deflection produced higher numbers of stems per linear meter (11.07 and 11.10), stem diameters (2.27 and 2.53 cm), stem lengths (200.30 and 209.37 cm) and, consequently, higher yields (70.83 and 77.88 t ha-1), respectively. Crop protection with fungicide was decisive in increasing yield by at least 19 t ha-1 compared to the sugarcane that did not receive any fungicide application.
With the appearance of orange rust in the SP81-3250 variety, doubts arose about the viability of its cultivation. Because of its high productivity and high sucrose content, some growers prefer to keep this variety on their properties and, when rust occurs, to use specific fungicides. Meanwhile, research institutions are seeking alternatives to replace SP81-3250 with other, more disease-resistant materials. The use of fungicides based on mixtures of the chemical groups strobilurins and triazoles has been shown to be quite effective in the management of sugarcane orange rust, making it possible to maintain the genetic potential of the crop (MARGAREY, 2008; FERNÁNDES et al., 2013). Similar results were reported by Rodrigues (2012) when fungicides of the group azoxystrobin + cyproconazole were used. This higher productivity is due to the foliar health and lower leaf senescence provided by the fungicides used, contributing to higher photosynthetic rates and higher yields.
Systemic fungicides are generally effective under lower coverage conditions when compared to contact fungicides. However, adequate coverage provided by the application technology is necessary even for systemic fungicides, especially when they have translaminar movement (BOLLER et al., 2008). According to Staier et al. (2004), pathogen control depends on the application technology, the climatic conditions and the effectiveness of the fungicide. Sugarcane orange rust can be controlled if the correct fungicide is selected, the application occurs at the beginning of the infection cycle, and a satisfactory coverage of the affected leaves is achieved (OLIVEIRA et al., 2011). As the aerial application at the 30 L ha-1 rate proved to be more efficient than at 40 L ha-1, it is possible to use the lower application volume without compromising disease management or crop productivity. Lower application volumes improve the autonomy and operational capability of aircraft, reducing costs and allowing larger areas to be covered (ROMÁN et al., 2009).
CONCLUSIONS
The aerial application provided better photosynthetic rates than ground application, with better photosynthetic performance in the SP81-3250 sugarcane variety and higher contents of chlorophyll a and b in the leaf blade.
The application rate and the angulation of the nozzles on the aircraft spray boom influenced gas exchange as well as chlorophyll content.
Fungicide applications in the sugarcane crop provided yield increases of more than 19 t ha-1, depending on the spraying technique employed.
Aerial application at the 30 L ha-1 application rate, with the spray nozzles oriented at 0° of deflection, is a viable option to provide safer applications due to the larger droplet size.
"year": 2018,
"sha1": "8f02cdb10c48ac64a30750f1f0d8260d4ea8e510",
"oa_license": "CCBY",
"oa_url": "http://www.seer.ufu.br/index.php/biosciencejournal/article/download/38450/22208/",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b32ecf9355cc32e7f2735a21c1cc0ab1b73775d4",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
The construction of 'tough' masculinity: Negotiation, alignment and rejection
Drawing on narrative data collected during a three-year ethnography of a Scottish high school, this article examines the construction of working-class adolescent masculinities. More specifically, the analysis focuses on how adolescent male speakers negotiate, reject and align themselves with the hegemonically dominant ideology of 'tough' masculinity, the role socially low-risk discourses of 'tough' masculinity play in interaction, and how speakers integrate a range of discursive strategies which help maintain homosociality when 'tough' masculinity is at stake. I argue that discourses which appear to be about 'being tough' do a great deal more social work than might be expected.

Keywords: Glasgow; Glaswegian; masculinity; critical discursive psychology; urban ethnography
Introduction
The city of Glasgow, Scotland has long been associated with criminality, violence and anti-social behaviour, with many representations of the city exploiting the image of Glasgow as Scotland's most violent city (Davies 2007). Such behaviour is strongly linked with working-class males and a specific form of 'tough' masculinity which is considered normative within Glasgow and post-industrial urban contexts more generally (Skelton 1997). 'Tough' masculinity has a long-standing social value in Glasgow, and in a city with a celebrated industrial history, 'being a man' has typically been identified with strength, toughness and physical skill. James Reid, a Scottish trade unionist involved in the River Clyde shipbuilding industry, even went so far as to say that 'we don't only build ships on the Clyde, we build men' (Johnston and McIvor 2007: 35).
The construction of 'tough' masculinity is perhaps best realised in the figure of the 'hard man', a working-class male who embodies toughness, a willingness to fight, a propensity towards physical violence, and a disregard for his own personal safety (Whyte 1998). Representations of the 'hard man' in Glasgow are wide ranging, from the razor gangs in McArthur and Kingsley Long's 1935 novel No Mean City, to the 'Big Man' from the Scottish television show Chewin' the Fat, who solves all his problems with violence. But as ubiquitous as the idea of the 'hard man' is in contemporary Glaswegian society, questions remain over how productive it is in discussing the lived reality of working-class men (both adult and adolescent) in Glasgow.
Indeed, despite the fact that the construction of identity among adolescents has been a central concern in recent sociolinguistic scholarship (e.g. Eckert 2000; Moore 2003; Bamberg 2004), how men in Glasgow discursively negotiate the 'hard man' ideology (and laterally the idea of 'tough' masculinity) has been almost entirely ignored. Moreover, while many studies of identity and masculinity have focused almost exclusively on middle-class speakers (Cameron 1997; Edley and Wetherell 1997; Bucholtz 1999; Kiesling 2004, although see Labov 1972, Cheshire 1982 and Milroy 1987 for some important exceptions), very little contemporary sociolinguistic research has focused on identity construction among working-class males, Scottish or otherwise. Lastly, there is an assumption within the sociolinguistic literature that doing 'tough' masculinity is a relatively straightforward endeavour which primarily involves explicit acts of violence (cf. Kiesling 1998; Coates 2003). One consequence of this is that 'tough' masculinity is viewed as a homogeneous construct expressed mainly through physical action. This offers a particularly limited picture of 'toughness' as it relates to adolescent masculinities, and with moral panics about adolescent criminality and violence a pertinent issue in recent times, particularly following the riots in Birmingham, London and Manchester in August 2011, it appears to be a fruitful time to critically discuss the relationship between urban masculinities and 'toughness' and how language is implicated in this relationship.
Adopting a social constructionist approach to identity, where identity is viewed as something which dynamically emerges in interaction (Bucholtz and Hall 2005), this article draws on narrative data and ethnographic observations collected during three years of ethnographic fieldwork in a high school in Glasgow to address three main aims: first, to discuss the construction of masculinities among a group of young working-class male speakers, and in particular, how they negotiate, reject and align themselves with the hegemonically dominant ideology of 'tough' masculinity; second, to argue that alongside discourses of 'tough' masculinity, young men use low-risk conversational strategies which help them preserve the principles of homosociality; and third, to show how discourses of 'tough' masculinity can stand in for the deployment of inter-personal violence to establish oneself as 'hard'. As such, this article is a potentially valuable contribution to our understanding of the discursive construction of masculinity among young men.
In the next section of the article, I discuss the concept of 'hegemonic masculinity', focusing on the concepts of the 'hard man' and 'tough' masculinity. In section three, I outline the methodology and fieldwork site before analysing three conversational narratives to examine how the speakers in these narratives construct and negotiate 'tough' masculinity. I conclude with some comments on the implications these findings have for language and masculinity research.
Hegemonic masculinity in an urban context
Working-class Glaswegian males have traditionally been accorded a reputation for violence, aggression and criminality (Patrick 1973; Davies 2007; Kintrea et al. 2011), with one of the most persistent themes in the social history of Glasgow being the 'hard man' (Johnston and McIvor 2007; Young 2007). Ubiquitous in post-industrial cities (Skelton 1997: 352-353), the 'hard man' is an important touchstone and an embedded cultural theme for men in Glasgow. Scholarly treatment of the 'hard man' is, however, almost non-existent, despite its ideological centrality within Glaswegian society.
The status of being a 'hard man' relies a great deal on the intersection of several different practices, including physical strength, fearlessness, a willingness to engage in acts of violence (premeditated and reactive), aggression, toughness, social competitiveness, and (usually) violent reactions against perceived insults. Drawing these elements together, we can offer a definition of the 'hard man' as someone whose configuration of social practices demonstrates engagement with a culture of excessive aggression and violence. Indeed, for many people in Glasgow, the use of violence is a key component of being a 'hard man', and it is the case that violence is often considered to be a hallmark of masculinity (Kimmel 2001: 278) and a necessary part of being respected as a 'real man' (Quinn 2004: 111). As such, the 'hard man' is one substantiation of hegemonic masculinity within Glasgow since it '[embodies] the currently most honored way of being a man [and requires] all other men to position themselves in relation to it' (Connell and Messerschmidt 2005: 832). But like most forms of hegemonic masculinity, very few men are able to enact the practices required of being a hard man, due to the threat of personal attack, the potential legal ramifications of violence, and individual physical and psychological limitations. Additionally, while being a 'hard man' can facilitate social hierarchies, structures of domination and peer-group status (Phoenix et al. 2003: 180; Kenway and Fitzclarence 2005: 43-45), it can also result in a breakdown of social relations, peer marginalisation, peer rejection and personal injury (Anderson 1997: 18-23; Hawley 2007: 4). Thus, there is a tension between being a 'hard man' and developing and sustaining robust friendship networks.
While the 'hard man' is an acute embodiment of 'tough' masculinity, the role of 'tough' masculinity more generally has been a recent focus in contemporary language and masculinity research. For example, in their analysis of data collected from a group of men undertaking foundational degrees at the Open University, Wetherell and Edley (1999: 342) discuss how men take on three types of imaginary positionings: 'heroic' (the most closely aligned with hegemonic masculinity), 'ordinary' (where speakers emphasise themselves as normal, moderate or average) and 'rebellious' (where men describe themselves in terms of non-normative discourses of masculinity). One of their important findings was that many of the men adopted the imaginary positions of 'ordinary' and 'rebellious' masculinity, rather than 'macho' or 'heroic', as a way of reinforcing other hegemonic ideals such as individual autonomy and personal choice. This is alluded to by Bucholtz (1999: 444), who argues that 'physically based' masculinities are becoming subordinated in favour of more 'technically based' masculinities. Such distancing from the hegemonic ideals of 'macho' masculinity is surprising, especially given how far 'toughness' is assumed to be a key orientating point for men. Indeed, in their study on adolescent masculinities, Phoenix and Frosh (2001) outline how 'hardness' is an important predictor not only of a boy's popularity, but also of his sense of self-worth as normatively 'masculine'. As I show in the analysis below, however, while 'toughness' might be an important component of adolescent male life, it is certainly not the only, or even the most predominant, component.
Methodology
The ethnographic fieldwork on which this article is based began in 2005 after ethical approval from the high school, the University of Glasgow and Glasgow City Council had been obtained. In this section of the article, I outline the Communities of Practice encountered in the high school (CofP hereafter), the data collection process, and the approaches used in the analysis of the narrative data. I also briefly consider the notion of 'identity' as emergent in discourse.
Communities of Practice
As previous research has demonstrated (e.g. Eckert 2000; Mendoza-Denton 2008), language is only one of a range of social practices through which individuals signal their membership of a particular group and construct their social identities. Consequently, in order to investigate the range of practices which contribute towards the construction of identity, including language, the Community of Practice framework was used, rather than the speech community or social network approach. Eckert (2000: 35) defines a CofP as:

an aggregate of people who come together around some enterprise. United by this common enterprise, people come to develop and share ways of doing things, ways of talking, beliefs, values - in short, practices - as a function of their joint engagement in activity. Particular kinds of knowledge, expertise and forms of participation become part of individuals' identities and places in the community.

Importantly, the use of the CofP framework allows us to go beyond 'top-down' identity categories such as 'working-class adolescent male' towards identities which emerge as socially relevant for the speakers (I discuss this issue in more detail below). Membership of a particular CofP was decided by a process of 'triangulation' (Mendoza-Denton 2008: 240), informed by speakers' self-identification, other-identification, and ethnographic observations of shared social practices and mutual endeavours. Four CofPs emerged during the fieldwork, which I named the Alternative, Sports, 'Ned', and Schoolie CofPs (although in the analysis section, I only discuss data collected from the Sports, 'Ned' and Schoolie CofPs). These four CofPs represented the broad social spectrum of the high school, with each group occupying a distinct position by virtue of their differentiated social practices (see Lawson 2011 for more detail on these practices).
While the members of each of these CofPs knew of, and sometimes informally socialised with, one another, the ethnographic fieldwork uncovered significant polarisation between the groups, a finding consistent with previous 'school ethnographies' conducted in the UK (e.g. Willis 1977; Skelton 1997). The primary distinction was between the Schoolie and the 'Ned' CofPs, who represented the extreme pro-school and anti-school positions respectively. For example, the 'Ned' CofP were involved in the local subculture, including skipping school, participating in a range of age-restricted activities and low-level crime such as petty theft and minor vandalism. They also appeared to be well versed in the gang culture of Glasgow and either knew of or informally socialised with individuals who were involved in gang-related violence (Lawson 2009: 365-367). The Schoolie CofP tended to reject such social practices and instead positioned themselves as pro-school by orientating positively towards the values promoted by the education system. By recognising (and accepting) the authority of the teachers, the members of this group were more fully aligned with the 'establishment'. The Alternative and Sports CofPs formed the 'grey area' between the Schoolie and 'Ned' CofPs, and although not as anti-school as the 'Ned' CofP, they were not as pro-school as the Schoolies. In terms of distinct social practices, the Alternative CofP listened to rock music and participated in non-traditional sports such as wrestling and BMX riding, while the Sports CofP participated in more mainstream activities such as football and rugby.
Over the course of the fieldwork, it became apparent that masculinity was constructed differently across the CofPs encountered. More specifically, members of the 'Ned' and Sports CofPs appeared to construct more 'tough' identities while the Schoolie CofP explicitly distanced themselves from such identities. Focusing on narrative data collected from members of these three CofPs, the analysis below suggests that, contrary to the positions outlined above, 'toughness' is not only (or always) about 'being tough', and that conceptualising masculinity as static psychological categories of 'ordinary', 'heroic' or 'rebellious' removes much of the complexity of the moment-by-moment unfolding of identity construction.
Data collection
Like many ethnographic studies in sociolinguistics, the main method of data collection was interviews. Participants were recorded (in conversational dyads or triads with myself present) once they had returned a permission form signed by a parent or guardian, and to ensure confidentiality and anonymity, all participants were given pseudonyms. Although there are a range of issues associated with the use of interviews in qualitative research (Potter and Hepburn 2005: 285), the difficulties of access and ethics associated with collecting 'naturalistic' data meant that interviews were the only possible method of data collection. Several steps were, however, taken in order to address some of the perceived weaknesses of interview approaches. First, the recordings were conducted after approximately six months of fieldwork to allow the informants to become comfortable with speaking about their lives with someone with no predefined role in the school. This 'lag' also meant that I had background information about the participants' social lives and was better able to draw on this knowledge during the recordings. Second, the recording context was relatively informal, to encourage informants to be less self-conscious of their talk. This meant that the first few recording sessions were facilitated with drinks, sweets, playing cards and so on, to reduce participants' degree of 'active monitoring' of their speech (the Observer's Paradox, Labov 1972). Participants were also informed that the research focused on 'how people spoke in different groups' (cf. Potter and Hepburn 2005: 290), although the participants were generally uninterested in the aims of the research. Third, I was wary of the recording sessions falling into a 'question and answer' format, so although it was necessary to ask direct questions of the participants to ensure that useful data were collected (for the purposes of the quantitative sociolinguistic analysis presented in Lawson 2009, 2011), an attempt was made to have the participants guide the conversations themselves, rather than the conversational agenda be established by me. Nevertheless, it is important to note that my presence during the recordings means that we should view the interviews as co-constructed speech events between the participants and myself, rather than simply co-constructed between the interviewees (Rapley 2001; Baker 2004). Last, in order to mitigate the effects of any perceived association with the authority of the school, I did not observe classes or interact with teachers (Eckert 2000: 72-73; Evaldsson 2002: 204).
By the end of the fieldwork, the dataset consisted of approximately 30 hours of fully transcribed conversations (250,000 words), following the conventions outlined in Atkinson and Heritage (1984). Although the speakers used Glaswegian Vernacular, the narratives I discuss have been rendered in Standard English. Distinct Scots lexical features have been retained where possible, and glosses have been provided. My turns are marked as 'RL'.
Analytical approach and 'emergent' identities
Following transcription, the data were coded for salient conversational themes, including fighting, arguing, friendship, life after school and so on. During this process, several narratives emerged as interesting in terms of how the speakers seemed to enact 'tough' masculinity. Since narratives are the vehicles through which speakers perform their 'identity' work (Bamberg and Georgakopoulou 2008), it was decided that these data warranted further investigation using critical discursive psychology, where 'attention to micro-level detail is supplemented with a macro-level layer of analysis in order to focus on the historical, social and political contexts of identity construction' (Benwell and Stokoe 2006: 9). Importantly, within critical discursive psychology, identity is viewed as something socially constructed; as something speakers do rather than something that speakers have (this framework draws heavily on Judith Butler's theory of performativity).
A key debate about constructionist approaches has, however, emerged in recent years, centring on the extent to which the researcher predetermines the categories speakers occupy. For example, Benwell and Stokoe (2006: 56-57) argue that constructionist or 'gender-as-performance' studies 'rely heavily on analysts' rather than participants' categories', leading to a tautology where researchers start out already 'knowing' the identities of the speakers whose identity constructions they are supposed to be investigating (Stokoe and Smithson 2002: 81; Benwell and Stokoe 2006: 57). In qualitative research, then, it is important to outline under what categories speakers are recruited (Potter and Hepburn 2005: 290).
Since one of the aims of the research was to investigate quantitative patterns of linguistic variation among young working-class Glaswegian males (Lawson 2009, 2011), the ethnographic fieldwork focused on speakers who fit this profile (although only speakers who belonged to one of the four CofPs outlined above were interviewed). Importantly, however, the ethnographic fieldwork (outlined above) uncovered socially meaningful and locally embedded 'ways of being' which went beyond the homogeneous category of 'working-class adolescent male', moving away from identity categories such as 'working-class' and 'male' towards identities which were informed through 'bottom-up' processes. This article, therefore, does not investigate how 'working-class male' identity is constructed through an analysis of 'working-class male' language, but instead how salient cultural discourses such as 'tough' masculinity emerge in interaction and how these discourses function as part of a wider set of identity strategies (cf. Kiesling 2006).
The (ir)relevance of 'extra-discursive' features has, however, also been disputed in discourse studies (Wetherell 1998). In her discussion of hegemonic masculinity, for example, Speer (2001a, 2001b) argues that extra-discursive issues which are not directly orientated to by participants should not form part of an analytical account. In response, Edley (2001b) notes that it is not enough to focus only on the data, and although 'hegemonic masculinity' may not be explicitly named as such by speakers, 'it is a mistake to imagine that what it describes is entirely absent from everyday talk' (Edley 2001b: 137). Additionally, the use of ethnography helps us to develop 'detailed insight into the concepts and processes that underlie what people do - but that they are often unaware of' (Forsythe 1999: 129). Indeed, given the ideological centrality of the 'hard man' identity within Glasgow, it would make little sense to suggest that this culturally valued way of 'being a man' in Glasgow would not be a relevant issue.
Narrative I: negotiating 'tough' masculinity
In the analysis of the first narrative, I discuss how Nathan and Phil (two members of the Sports CofP) collaboratively construct and negotiate social identities which align with 'tough' masculinity over the course of a co-constructed narrative. The two speakers discuss a key event in the collective memory of their social group (what Georgakopoulou 2007 calls a 'shared story'): a fight between Nathan and Mark (another Sports CofP member). Although Phil, Nathan and Mark were friends at the time of the fieldwork, there had been a falling out between Nathan and Mark which led to a fight between them. Phil attempted to intervene to protect Mark from injury, but was prevented by others from doing so. In the first part of the analysis, I present the opening excerpt of the transcript and discuss how the 'looking good principle' (Ochs et al. 1989) can help us illuminate the importance of self-presentation in the narrative. In the second part, I outline some of the ways in which 'tough' masculinity is constructed collaboratively and negotiated by the speakers.

The 'looking good principle' states that speakers 'present narrated events in a way that portrays themselves in the most complimentary light' (Ochs et al. 1989: 244). In following the 'looking good principle', speakers attempt to present a positive image of themselves to their interlocutors. In Excerpt 1 of the conversation, Phil and Nathan observe the 'looking good principle' by downplaying the negative aspects of their character as 'fighters' and position themselves as unwilling participants in the event (although, as I argue below, both speakers use discursive means to demonstrate alignment with 'tough' masculinity). Phil uses the conditional modal verb would (line 21 and line 25) to suggest that attacking Nathan is something he considered but did not do (a claim immediately countered by Nathan), while Nathan argues that he 'keeps himself to himself' (line 41). His use of the tag question 'don't I' (line 41) is a way of seeking affirmation and agreement from Phil to bolster his claims. As Ochs and Capps (2001: 137) point out, however, 'there are risks… whenever recounting… a narrative to an intimate: the moral glow may be dashed when someone recalls a rather discrediting background detail', and after being invited to respond, Phil's dispreferred response is prefigured by an almost two-second pause (line 42) before he rejects Nathan's statement, pointing out that Nathan 'sometimes causes fights as well' (lines 43-44). Taken together, Phil undermines Nathan's attempt to justify his lack of culpability in and responsibility for the fight. Nathan then rejects the idea that he started the fight by claiming that he only fought Mark because he was forced to (lines 56-57). There is a degree of similarity here with Andersson's (2008) study of narratives of violence, in which Salim, a young man who had been sent to a youth detention centre for assault, explains away his use of violence as 'self-defence'. Such techniques of neutralisation are often an attempt to justify one's behaviour and place the blame on a second party, and we can see this technique deployed in Nathan's contribution to the narrative. The opening sequence of the narrative is also important in that both speakers use this opportunity to initially construct their identities as 'tough', albeit in slightly different ways.
Phil's first contribution (line 24) positions himself as 'heroic' through his attempted intervention in the fight to protect Mark, while his second contribution (lines 27-28) furthers an idea of 'tough' masculinity by virtue of the fact that he had to be held back by other people in the group, suggesting that, if this had not happened, Phil would have caused serious harm to Nathan. Nathan's construction of 'tough' identity is more straightforward in that he opens with the claim that he 'battered Mark' (line 5), and although the remainder of his contribution in Excerpt 1 is an attempt to explain his actions, in Excerpt 2, Nathan jettisons his attempt at 'looking good', which up until now has been based largely on the rejection of violence. Instead, he states that the only possible solution to the situation in which he found himself was to resort to physical aggression. When the narrative arrives at its climax and culminates in physical blows, we revert to a presentation of 'tough' masculinity by Nathan which is not mitigated in any way. Unlike earlier parts of the narrative where Nathan attempts to deflect responsibility for the fight, here he emphasises the agency of his actions. Syntactically, Mark occupies the object slot in the utterance (line 1 and line 2 in Excerpt 2) and is the one towards whom action is directed. Moreover, Mark's position as 'object' is highlighted by the fact that, in Nathan's narrative, Mark does not attempt to fight back. From my observations during the ethnographic fieldwork, it is unlikely that Mark would have passively accepted being attacked by Nathan, since doing so would have resulted in social censure and a potential loss of status. Nevertheless, by glossing over Mark's participation in the fight in this narrative, Nathan attempts to cement his own position as 'tough', placing Mark in the undesirable position of being considered an ineffective and inept fighter. Nathan also takes up the earlier point from Excerpt 1 that Phil had to be restrained from intervening, adding the detail that it was 'big Peter' (lines 11-13) who ultimately stopped Phil. The repetition of this point solidifies the co-construction of 'tough' masculinity for both speakers: Phil's attempts to intervene and the fact that Nathan's behaviour required intervention.

Following Nathan's account of him fighting Mark, he questions Phil's attempts to 'look good' and, in Excerpt 3, offers an (implicit) moral evaluation of Phil's actions. Excerpt 3 includes features which would normally be indicative of a cooperative speech style, such as the repetition of the verb 'see' by both speakers and the presence of simultaneous speech (lines 11-14). In the case presented here, however, the conversation is anything but co-operative, with both speakers vying for control of the conversational floor to contest the issue of Phil crying. Nathan's claim is hedged by the fact that he says 'looked like you were crying' as opposed to 'you were crying', but nevertheless, Nathan calls into question Phil's claims to a 'tough' masculinity, since crying is often seen as antithetical to masculinity. It is expected (if not demanded in certain communities) that men should not cry, since doing so belies emotional fragility (Migliaccio 2011: 229). Nathan's comments are an attempt to foreground Phil's breaking of social norms and function as a face-threatening attack on Phil's construction of a 'tough' masculinity.
What is interesting about this excerpt is that Nathan appears to contribute two very conflicting statements (lines 2-3 and lines 7-8). He initially states he did not see Phil crying (lines 2-3), a claim strengthened by the adverb 'honestly'. In lines 7-8, however, this statement is contradicted when he says 'I did see tears of water dripping from your eyes'. This claim is boosted by an appeal to the external group of peers observing the fight, a tactic Nathan attempts four times (lines 9, 11, 13 and 16). The commentary on Phil's supposed crying episode is further developed in Excerpt 4. At the start of Excerpt 4, I offer a supportive alignment with Phil (lines 1-4), a comment which Phil rejects by pointing out that he was not fighting, the implication being that since he was not fighting, he had no need to cry. Phil also explicitly positions himself as 'protector' when he says that he was 'just going to stick up for [Mark]' (line 6). In line 10, Phil challenges Nathan's claim, upgrading his position that he was not crying through the repeated use of 'really', a fact that Nathan agrees with (line 14). Nathan's agreement here is positioned as a co-operative speech act which shows alignment with Phil's own version of the event. Nathan then restates his two contradictory claims from Excerpt 3: the first, that it did not look as though Phil was crying (line 15), and the second, that it did look like Phil was crying (line 16). The contest between Nathan and Phil over who is 'right' becomes more apparent from line 19 onwards, during which both participants seek to convince the other of their version of events, highlighted through the use of disruptive overlap throughout the excerpt.
Ultimately, however, we have to ask why Nathan produces the contradiction he does. I suggest that it happens because Nathan has to simultaneously manage a critique of Phil's claim to 'tough' masculinity and maintain the relationship. If he had decided not to mitigate his claim that he saw Phil crying, then it is entirely possible that his comments would have been taken more seriously, with potentially dangerous repercussions. Both participants here are collaboratively defending their sense of 'tough' masculinity and, in the process, they use the conversation as a way to explore what constitutes 'tough' masculinity and what does not. Importantly, Nathan's mitigating comments offer Phil a safe way of contesting the claims that he cried (and thus of countering accusations that he is not a 'real man'), while allowing Nathan an opportunity to further his own sense of 'tough' masculinity, primarily by positioning himself as an arbiter of acceptable masculine behaviour.
What occurs in this conversation is slightly different to what Goodwin (1990: 248-256) and Evaldsson (2002: 218) find in their analyses of boys' story-telling. Both argue that counter-narratives offered by a boy who is under attack generate further counters from the peer group. In the case of this data, however, the rejection of Nathan's claims by Phil does not entrench Nathan's viewpoint or generate stronger and more insistent claims. Instead, Nathan utilises strategies which mitigate the strength of his claim, even going so far as to contradict himself. The collaborative nature of the conversation becomes even more apparent when we consider Excerpt 5, where Nathan offers the seemingly supportive comment that it sometimes looks as though he too is crying. Nathan's comments about crying (lines 10-17) appear to be an attempt to validate Phil's earlier claim in the narrative and show how judgements about apparently 'weak' emotionality can be reintegrated, refashioned and reinterpreted for the purposes of maintaining homosociality between interlocutors. Indeed, the negotiation of 'tough' masculinity in this narrative relies a great deal on indirection and delicacy between the two interlocutors. Both participants are aware that prototypical expressions of 'tough' masculinity (i.e. fighting) could potentially alienate them from their social group (as was the case in other examples where individuals in the high school had fought with one another). Without collaboratively negotiating the 'game' of 'tough' masculinity, the narrative could have developed in a radically different direction, particularly if both speakers were truly committed to the notion of 'overt competition' and 'one-upsmanship'. For example, Farrington (1998: 19) suggests that many altercations between adolescent males begin with arguments or disputes. Nathan's contributions could have been interpreted by Phil as insulting, resulting in potentially more confrontational strategies which would have run the risk of threatening the friendship. The way the conversation is framed, however, provides both parties with an opportunity to perform 'tough' masculinity without the 'game' going too far.
Narrative II: Personal histories of 'tough' masculinity
The next narrative was collected during a conversation with two members of the 'Ned' CofP, Danny and Will. 4 As mentioned in section 3, of the four CofPs I encountered, the members of the 'Ned' CofP were the most integrated into the local subculture of Glasgow. Their social practices included a range of age-restricted activities such as smoking and drinking, illegal activities such as drug taking, and a knowledge of local gangs and gang-related activity (Lawson 2009: 152-162). As such, members of the 'Ned' CofP were seen by many as the 'hardest' in the school, leading to some pupils avoiding any interaction with them. 5 In particular, knowledge of gangs and gang-related activity was an important index of group membership, even though I saw limited evidence that the members I spoke to were actively involved in any of the gangs surrounding the local area. Nevertheless, gangs remained an important conversational point for a number of reasons. First, gangs in Glasgow are transient, mobile and changeable, so knowing the best fighters, what fights had happened, who the 'hardest' members were, and other demonstrations of 'gang knowledge' conferred a degree of insider status. Second, because knowledge claims about gangs and gang-violence were difficult to verify, status could be negotiated by claiming to 'know the right things' without serious worry of other people showing this knowledge to be demonstrably false. And last, since gangs in Glasgow are generally organised around physical violence and other antisocial acts, members could vicariously attain 'hard man' status through claiming even peripheral membership. The main speaker, Danny, was identified by many people as a prototypical 'hard man', a status he maintained through outright rebellion against teachers, claims of 'running' with local gangs, and the retelling of a range of fight narratives (recorded both on and off-tape). Prompted by a discussion on Glasgow gang culture, Danny's narrative focuses on his participation in a gang fight. Throughout the narrative, Danny draws on dominant discourses of 'tough' masculinity, but whereas we might expect the narrative to display elements of 'heroic' masculinity (cf. Wetherell and Edley 1999) and to clearly foreground his skills and abilities as a fighter (cf. Coates 2003: 110), he uses the narrative as a way of distancing himself from dominant expressions of 'tough' masculinity. I suggest, however, that he uses his historical involvement with gangs to also reify his identity as 'tough'. Immediately, Danny distances himself from participation in gang-related fighting, claiming that it was something that he 'used to' do (lines 2-4). When asked about why he stopped, he initially does not complete his first response (line 6). Instead, in the following line, he self-repairs to claim that his involvement in gang violence was only restricted to one evening (line 7) and that it was only after this that he stopped. When questioned about why he stopped, we are faced with a complex interweaving of multi-faceted orientations towards 'tough' masculinity. First, Danny states that the reason he stopped fighting was because he 'didn't like it' (line 11), a claim which, on the surface, appears to be a rejection of 'tough' masculinity since 'real men' are expected to enjoy violence and fighting (Lewis 1983). This explanation is then rejected for one where he stopped because he could have been seriously injured during the fight by someone wielding a bottle (lines 13-14).
Danny appears to be searching for an 'acceptable' reason as to why his involvement in fighting ceased. Nevertheless, his two opening contributions suggest an apparent rejection of 'tough' masculinity along two potential axes: a lack of enjoyment and fear for one's own personal safety, both of which contradict the 'hard man' ideology. In lines 16-17, however, a sense of 'tough' masculinity is re-established when he admits that during the fight, he 'smashed the bottle and fucking shoved it right into some cunt there'. Here, Danny presents a stark reframing of the situation in which he engages with a form of extreme 'tough' masculinity. The utterance also alters the dynamic of the event to place Danny in the dominant position and his foe in the subordinate position (acutely marked through his use of the insult term cunt). In lines 19-20, he comments that he did not go back to the scene because he thought he had 'almost killed' his opponent, relating this back to a previous occasion where he had been 'done' (charged) with attempted murder. 6 Finally, towards the end of the narrative, Danny alters his presentation of 'tough' masculinity again by admitting that he would run away from a fight (line 41), reverting back to his original stance of rejecting 'tough' masculinity.
Although there are similarities to the narrative discussed in section 4 (i.e. self-defence against a perceived or actual threat), some crucial differences emerge. Unlike Nathan's narrative, which segues into a negotiation of both his and Phil's claims to 'tough' masculinity, Danny's narrative is, I suggest, a sophisticated and dynamic negotiation of 'tough' masculinity which cannot be read as a straightforward substantiation of 'heroic', 'ordinary' or even 'rebellious' masculinity. Danny states that he never wants to be involved in a fight of that scale again (line 24), that he does not want to go to jail for murder or assault (lines 26-27), and that he is more likely to run away from a fight than to confront an attacker (line 41), allowing him to distance himself from 'tough' masculinity. But his association with gang violence, as brief as it was, also allows him to claim a 'hard man' identity. Danny's story here is a complex personal narrative which shows that he is capable of being a 'hard man', and as such, it is an advertisement of his ability to embody an extreme 'tough' masculinity. The subsequent telling and retelling of the story serves as a 'pre-emptive strike' against anyone who might bother him, with the words standing in and removing the need for similar actions in the future (cf. Anderson 1997: 19). He is able to reject the hard man identity now because he has 'proven' himself in the past.
Narrative III: The construction of alternative 'tough' masculinity
The last narrative shows how Victor, a member of the Schoolie CofP, rejects 'tough' masculinity while simultaneously orientating towards certain aspects of it. The Schoolie CofP was by far the most integrated into the educational system, recognising and acceding to the authority of the school and the teachers (Lawson 2011: 249). None of them, to my knowledge, engaged in any age-restricted activities, and they were more likely to meet up with one another outside of school to play computer games or practise guitar playing. As such, the members of the Schoolie CofP existed almost completely outside the sub-cultural context of the high school and were considered by many within the school to be 'model students'. While it was certainly the case that many of the Schoolie CofP members rejected the discourse of 'tough' masculinity, Victor's narrative shows a passing familiarity with some of these discourses, and an implicit agreement with others.
The narrative was elicited through a conversation about gang activity in the local area, during which Victor related how he had been involved in an altercation with a group of young men while he was out with Gary, one of his friends and another member of the Schoolie CofP. The previous narrative (not presented here) focused on an event where Victor and his friends were beaten up by a group of boys and Victor did not attempt to fight back. Excerpt 7 follows on after Victor relates the first encounter. In the excerpt, Victor positions himself as a 'victim', apparently distancing himself from 'tough' masculinity. Yet this positioning as 'victim' is also done in parallel with a partial engagement with ideologies of hegemonic 'tough' masculinity. For example, he states that he was 'trying to make up' for letting his friends be attacked (line 18), an implicit acknowledgement that he is lacking in some way and that he needs to prove himself. Towards the end of the narrative, Victor's negotiation of 'tough' masculinity is further developed when he states that although he was beaten up (line 23), it 'wasn't that bad' (line 25) and 'it didn't actually hurt' (line 29), a claim that is accompanied by laughter (line 28), apparently trivialising the event. His defeat is reformulated in a positive light by a rejection of weakness and vulnerability, and his subsequent reworking of 'tough' masculinity is achieved through a discourse of being able to stand the pain, rather than deal it out. This aligns with previous research which shows that not only is the denial of pain a typical characteristic of 'tough' masculinity (Courtenay 2000: 1389), but also that being able to endure and withstand pain without complaint is reconfigured as a positive character trait (Zeeland 1997: 119).
Discussion and conclusions
My main point in the analysis of the preceding narratives has been that discourses which appear to be about 'being tough' do a great deal more social work than might be expected. More specifically, the article demonstrates how the speakers' narratives do not focus on heroic, against-the-odds achievements, but instead contain a great deal of delicacy, nuance and indirection which allows them to maintain homosociality, distance themselves from their past behaviour, or demonstrate an awareness of what it means to be 'tough'. We also have some evidence that 'tough' masculinity is at least partially rejected by some of the speakers. For example, Danny rejects 'tough' masculinity through a discourse of 'I was a hard man, but I'm not any more', while Victor does so through a discourse of 'I've never been a hard man'. In contrast, Nathan and Phil align themselves more explicitly and positively with 'tough' masculinity in their narrative. None of the speakers, however, offer a more general rejection of 'tough' masculinity (to wit, 'it's not good to be a hard man'), suggesting that such an identity is accepted as the hegemonic one for young men in the city.
In terms of the contribution this article makes to a more general understanding of masculinities in Glasgow (and Scotland more broadly), I would suggest that while the 'hard man' is an important cultural concept within the city, it has relatively limited power to encapsulate young men's articulations of masculine identities in the city. Indeed, the picture of the 'hard man' as established by the mainstream media appears to be at odds with the kinds of accounts presented in this article. Although the article focuses on a specific set of speakers in a particular location, it nevertheless provides some insight into how young men in the city construct their social identities as men against a backdrop of a hegemonically dominant ideology of 'tough' masculinity.
Moving beyond Glasgow, this article has several implications for how we approach the study of language and masculinity. First, we should reconsider the usefulness of static identity categories such as 'heroic' and 'rebellious' masculinity, particularly since this implies that speakers deploy only one identity over the course of any given interaction (cf. Wetherell and Edley 1999). As the analysis above shows, identity is a dynamic entity which shifts on a moment-by-moment basis, and any analysis of language and masculinity should be sensitive to these shifts. It may be the case that speakers sometimes foreground certain facets of identity, but even in such cases, we should not focus on the foreground at the expense of the other identity work speakers undertake. Second, we have seen that the use of ethnography permits an additional layer of description in the narratives under analysis. Indeed, integrating insights garnered from ethnographic fieldwork means a more fully formed account of the social context the speakers inhabit can be developed. A third related point is that the use of ethnography also allows us to see the relevance of issues which might not be immediately retrievable from the conversational context (cf. Baker 2004: 163; Benwell and Stokoe 2010: 95). While extra-discursive features such as 'tough' masculinity and the 'hard man' ideology might not be named explicitly by speakers, they are nevertheless important in our account of what speakers do (cf. Kiesling 2006: 268).
The research presented here has, of course, its limitations. Of particular concern, briefly alluded to above, is how far the analysis can be generalised to other men in Glasgow. Indeed, generalisation is an acute concern for most ethnographic work (O'Reilly 2009: 82-86), yet it is important to recognise that ethnography helps us bridge the gap between ideological constructs and how these might be embedded in everyday interaction. By investigating the 'local' , we can start to understand how speakers exploit more 'global' resources for interactional purposes and how the same resources might be deployed across different groups. The concomitant use of interviews to investigate the construction of social identity is also a potential area of weakness (cf. Potter and Hepburn 2005), but it is important to note that the interview data formed only one part of the study and that ethnography facilitated an investigation of the kinds of identities socially relevant to the speakers, going beyond the category of 'working-class adolescent male' . As such, the integration of ethnography with critical discursive psychology has helped to develop a more nuanced account of the role of 'tough' masculinity among adolescent male speakers and has shown how 'tough' masculinity is about much more than just being tough.
About the author
Robert Lawson is lecturer in linguistics at Birmingham City University.

Transcription conventions
[[ Simultaneous utterances
[ Overlapping speech which does not start simultaneously
= Contiguous utterance
[info] Contextual information added (e.g. names)
(gloss) Gloss of lexical item
(( )) Paralinguistic item
(.) Pause less than one second
(sec) Pause timed in seconds
- Speech stops abruptly
: Sound is prolonged
. High rising pitch intonation

Notes
1. I would like to thank Paul Baker, Scott Kiesling, Ursula Lutzky, Ruth Page, Nicolai Pharao, and Elizabeth Stokoe for their extensive feedback, friendly support and sagely advice as this article moved from the germ of an idea to final publication. I am particularly indebted to the anonymous reviewers who commented on several versions of this article as it made its way through the peer-reviewing process. I would also like to thank audiences at iClave, iGala, University of Lancaster, UC Santa Barbara, and Stanford University for their questions, suggestions and discussion, all of which have helped make this article stronger. Lastly, this research would not have been possible without the cooperation and involvement of the pupils who shared their stories. Thank you.
2. 'Gangs' in Glasgow do not follow a hierarchical structure as that which characterises many urban gangs in North America. Instead, 'gangs' tend to be horizontally distributed and established around territorial areas, including local housing estates, parks, and other important boundary markers (Kintrea et al. 2011).
3. The coding process involved, among other things, a close reading of the transcripts and deciding what the topic of conversation was for each speaker turn.
4. Will's turns are all labelled 'inaudible' because his microphone was not properly attached.
5. They also advised me against trying to get to know anyone they considered a 'ned'.
6. Although I was never able to determine the veracity of this statement, it is a substantiation of my point that it is difficult, if not impossible, to confirm or deny the kinds of events Danny narrates here.
"year": 2013,
"sha1": "53f6d7f330c4045feacab1b1d8923cacef8be77d",
"oa_license": "CCBYSA",
"oa_url": "https://zenodo.org/record/896817/files/article.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fbca43ab616c59ab4543026184ebc432b1059720",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
In this study, we first evaluated the duration of a protective immune response against Brucella melitensis infection in non-pregnant sheep and goats immunized with an improved (by vaccine formulation and route of administration) commercial Brucella abortus vaccine based on influenza viral vectors expressing Brucella immunodominant Omp16, L7/L12, Omp19, or Cu-Zn superoxide dismutase (SOD) proteins (Flu-BA_Omp19-SOD). Sheep and goats in the vaccinated group were immunized thrice concurrently via the subcutaneous and conjunctival routes of administration at an interval of 21 days. Animals in the control group were administered with 20% Montanide Gel01 adjuvant in phosphate-buffered saline in the same way. We showed that the Flu-BA_Omp19-SOD vaccine in sheep and goats induces antigen-specific Th1-biased [immunoglobulin G2a (IgG2a) over IgG1] antibody response and T-cell and interferon γ responses lasting over a period of 1 month post–last vaccination (PLV). The levels of protection against B. melitensis 16M infection (vaccination efficacy) in vaccinated sheep for a period of 6 months were 0–20% and in goats 20–40% compared to control challenge group. But the severity of B. melitensis 16M infection in the Flu-BA_Omp19-SOD–vaccinated sheep and goats during the entire period of observation revealed the infection index (P = 0.001–P < 0.0001) and Brucella colonization in lymph nodes and organs (P = 0.04–P < 0.0001) were significantly lower than those in the control group. To conclude, the Flu-BA_Omp19-SOD vaccine using improved formulation and administration method in sheep and goats provides augmented antigen specific humoral and T-cell immune response lasting only for 1 month PLV and partial protection for 6 months against B. melitensis 16M infection.
INTRODUCTION
Brucellosis is a chronic infectious disease of animals and humans that causes huge economic losses globally. Brucella melitensis is the causative agent of brucellosis in sheep and goats and represents the greatest risk to human health among all known Brucella species (1). To control brucellosis in animals, vaccination is one of the most cost-effective measures, which in turn helps in protecting the health of humans in endemic areas (2). This also aids in eradication of the disease among livestock (3). Currently, the attenuated B. melitensis Rev.1 vaccine is used in sheep and goats (4). Although the Rev.1 vaccine has been found effective, it has several limitations: it causes abortion in a fraction of vaccinated animals, the vaccine bacteria are virulent to humans, and differentiation of infected from vaccinated animals (DIVA) is a challenge (4,5). Therefore, development of a safe and effective vaccine to control B. melitensis infection in sheep and goats that has DIVA potential is warranted.
Earlier, we developed a novel Brucella abortus vaccine based on influenza viral vectors (IVV) expressing the Brucella immunodominant outer membrane protein (Omp)16 or ribosomal L7/L12 protein (Flu-BA) (6). In January 2019, the Flu-BA vaccine was registered in Kazakhstan (registration certificate no. RK-VP-1-3775-19 dated January 14, 2019) and is currently being used in cattle against B. abortus infection. The vaccine response data obtained in cattle (6), as well as information supporting the ability of influenza viruses to infect sheep and goats (7,8), suggest that vaccines based on IVV can be effective candidates in small ruminants. It is important to note that the IVV-expressed proteins are immunodominant and common (95-99% genetically similar) to B. melitensis, B. abortus, Brucella suis, and Brucella canis (9-11). Our earlier study with the Flu-BA vaccine showed 57.1 and 42.9% efficacy in vaccinated non-pregnant sheep and goats, respectively (12), which prompted us to evaluate an improved Flu-BA vaccine formulation. This formulation, called Flu-BA_Omp19-SOD, included additional IVVs expressing the Brucella Omp19 and Cu-Zn superoxide dismutase (SOD) proteins, a 2-fold increase in the concentration of the adjuvant Montanide Gel01, an improved delivery system (the vaccine was administered simultaneously by the subcutaneous and conjunctival routes), and an increase in the number of doses from two to three; it was tested in pregnant sheep and goats against B. melitensis challenge infection. In pregnant small ruminants, the Flu-BA_Omp19-SOD vaccine was shown to be safe and effective, with complete protection (lack of Brucella isolation in all animal samples) against B. melitensis infection in 66.7% of sheep and 55.6% of goats (12), whereas the commercial Rev.1 vaccine provides protection against B. melitensis infection in 83.3% of goats and 100% of sheep (12). Because of the added benefits of the Flu-BA_Omp19-SOD vaccine, it is considered a promising candidate. However, it was important to define the extended duration of protective efficacy of Flu-BA_Omp19-SOD in sheep and goats, which was the objective of this study. The ability of a vaccine to induce a long-term protective immune response is one of its most valuable and critical properties, and therefore this research has been decisive in continuing or discontinuing work in this area.
Bacterial Strains and Biosafety Aspects
The virulent strain B. melitensis 16M (obtained from the Research Institute for Biological Safety Problems' collection of microorganisms) was used in this study. The bacterial cells were cultured under aerobic conditions on Brucella base agar (Sigma, St. Louis, MO, USA) at 37 °C. All experiments with live Brucella were performed in biosafety level 3 facilities. Challenged sheep and goats were contained in specialized facilities (biosafety level 3 agricultural).
Vaccination, Study Design, and Sampling
A total of 30 sheep of the Degeresskaya semi-fine meat-and-wool breed and 30 goats of the Gorno-Altaisk breed, aged 6-18 months and from officially brucellosis-free flocks, were used in this study. All animals were non-pregnant females. Two groups were formed from each animal species by randomization: the Flu-BA_Omp19-SOD–vaccinated and control groups (n = 15/group). Sheep and goats in the vaccinated group were immunized thrice concurrently via the subcutaneous (2.0 mL in the axillary region) and conjunctival (0.25 mL to each eye) routes of administration at an interval of 21 days, with a vaccine dose of 7.0 log10 EID50/animal. Animals in the control group were administered 20% Montanide Gel01 adjuvant in PBS in the same way. Antigen-specific enzyme-linked immunosorbent assay (ELISA) antibodies (total IgG, IgG2a, and IgG1), lymphocyte stimulation index (SI), and interferon γ (IFN-γ) production were measured in serum and whole-blood samples both before vaccination and at the first (n = 15/group), third (n = 10/group), and sixth (n = 5/group) months post-last (third dose) vaccination (PLV). At the first (n = 5/group), third (n = 5/group), and sixth (n = 5/group) months PLV, sheep and goats from the vaccinated and control groups were challenged with the virulent strain B. melitensis 16M at a dose of 10⁶ colony-forming units (CFU)/animal subcutaneously (axillary region, right side). At 28 days after challenge, the animals were euthanized and slaughtered aseptically for sampling. A schematic representation of the study design is shown in Figure 1.
Antibody Response
This study was conducted in full accordance with our previously published work (14). Briefly, 96-well microtiter plates (Nunc, Roskilde, Denmark) were coated overnight with a pre-titrated mixture of Brucella L7/L12, Omp16, Omp19, and SOD proteins (each at 2 µg/mL) in PBS, blocked for 1 h using PBS containing 1% ovalbumin (PBS-OVA; 200 µL/well), and washed with PBS containing 0.05% Tween-20 (PBS/Tw). Serial 2-fold dilutions of the serum samples in PBS-OVA were added (100 µL/well) to the plates and incubated for 1 h at room temperature. Donkey anti-ruminant IgG horseradish peroxidase conjugate (Sigma) and monoclonal antibodies specific for sheep IgG1 and IgG2 (Novus Biologicals, Littleton, CO, USA) were used for detection of total and isotype-specific antibodies. After a 90-min incubation at 37 °C, plates were washed, and the specific reactivity was determined by addition of the enzyme substrate ABTS [2,2′-azinobis(3-ethylbenzthiazoline-6-sulfonic acid)] diammonium (Moss, Inc., Pasadena, CA, USA) at 100 µL/well. The absorbance values were measured at 415 nm. Antibody levels were expressed as the arithmetic mean ± standard error of the optical density (OD) values obtained for the sheep and goat samples included in each group.
Preparation of Peripheral Blood Mononuclear Cells for Lymphocyte Proliferation Assay
This work was carried out in accordance with Mailybayeva et al. (14). Briefly, peripheral blood mononuclear cells (PBMCs) were isolated by density gradient centrifugation using a Ficoll-sodium diatrizoate gradient (DNA-Technology, Moscow, Russia). Cell numbers were adjusted to 10⁷ viable cells per mL, as determined by trypan blue dye exclusion, and 50 µL of each cell suspension (containing 5 × 10⁵ cells) was added to eight separate flat-bottomed wells of 96-well microtiter plates already plated with 100 µL of RPMI-1640 medium only or RPMI-1640 medium containing 8.0 µg of purified Brucella L7/L12, Omp16, Omp19, or SOD protein per well. The cell cultures were incubated for 7 days at 37 °C under 5% CO₂. After incubation, the cells were pulsed with 1.0 µCi of [³H]thymidine per well for 18 h. Cells were harvested onto glass filter mats and counted for radioactivity in a liquid scintillation counter. Cell proliferation results were converted to the SI [counts per minute (cpm) of wells containing antigen/cpm in the absence of antigen] for comparison.
Cytokine IFN-γ Production
This study was also conducted in accordance with Mailybayeva et al. (14). Briefly, PBMCs from each animal were adjusted to 10⁷ viable cells per mL as described in the previous paragraph. Aliquots (50 µL) of each cell suspension containing 5 × 10⁵ cells were plated and stimulated with Brucella L7/L12, Omp16, Omp19, or SOD proteins as described above. Cell cultures were incubated at 37 °C under 5% CO₂, and the supernatants were harvested 72 h later and assayed for IFN-γ using a commercial ELISA kit (RayBio® Bovine IFN-γ ELISA Kit; RayBiotech, Inc., Norcross, GA, USA). This kit has been shown to cross-react with IFN-γ of sheep and goats (15). Antigen-specific IFN-γ production was determined for each individual animal by subtracting the background concentration of IFN-γ in wells without antigen from the IFN-γ concentration in wells with antigen.
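As a minimal illustration of how the two cellular read-outs above are derived, the sketch below computes the lymphocyte stimulation index (cpm with antigen divided by cpm without) and the background-corrected IFN-γ concentration. The numeric values are hypothetical and serve only to show the arithmetic.

```python
def stimulation_index(cpm_with_antigen: float, cpm_without_antigen: float) -> float:
    """Lymphocyte proliferation SI: cpm of antigen-stimulated wells
    divided by cpm of unstimulated wells."""
    return cpm_with_antigen / cpm_without_antigen

def antigen_specific_ifng(ifng_with_antigen: float, ifng_without_antigen: float) -> float:
    """Antigen-specific IFN-γ (ng/mL): background concentration in wells
    without antigen subtracted from the concentration in wells with antigen."""
    return ifng_with_antigen - ifng_without_antigen

# Hypothetical per-animal readings, for illustration only.
print(stimulation_index(cpm_with_antigen=5400.0, cpm_without_antigen=2000.0))  # SI = 2.7
print(antigen_specific_ifng(ifng_with_antigen=18.1, ifng_without_antigen=2.8))  # 15.3 ng/mL
```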
Assessment of the Protective Efficacy of the Vaccine in Sheep and Goats
Samples of lymph nodes (submandibular, retropharyngeal, right subscapular, left subscapular, mediastinal, bronchial, portal, paraaortic, pelvic, mesenteric, udder) and parenchymal organs (liver, kidney, spleen, and bone marrow) were taken from the slaughtered animals. In total, 15 organs were sampled from each animal. Bacteriological studies and evaluation of the results were carried out as described in the previous study (12). Briefly, tissue homogenates in 0.1% Triton-PBS, after 10-fold serial dilutions, were plated onto Brucella base agar plates and incubated at 37 °C for 2 weeks, and the growth of bacterial colonies was counted periodically during this time. The concentrations of bacteria [colony-forming units (CFU)/g of tissue] in the tissue samples were determined by performing standard plate counts. An animal was considered infected if a Brucella colony was detected from the culture of one or more organs. The results of the bacteriological study were evaluated by the following three parameters: (A) vaccination efficacy, or the number of animals (expressed in %) with complete protection against infection (lack of Brucella isolation in all animal samples); (B) generalization of the infectious process, or the infection index (the number of organs and lymph nodes of an animal from which Brucella was isolated; the arithmetic mean is given); (C) intensity/severity of the infectious process, or the degree of Brucella colonization in samples of lymph nodes and organs (expressed as log10 CFU/g of tissue).
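A compact sketch of how the three bacteriological read-outs (A)-(C) defined above could be computed from per-animal culture data. The data structure, field names, and counts below are my own illustration, not taken from the study; only a few of the 15 sampled tissues are shown.

```python
import math

# Each animal is a dict mapping sampled tissues to CFU/g counts
# (0 means no Brucella was isolated from that tissue). Values are hypothetical.
animals = [
    {"spleen": 3200, "liver": 0, "subscapular_ln": 150},   # infected animal
    {"spleen": 0, "liver": 0, "subscapular_ln": 0},        # fully protected animal
]

def vaccination_efficacy(group) -> float:
    """(A) Percentage of animals with no Brucella isolated from any tissue."""
    protected = sum(1 for a in group if all(cfu == 0 for cfu in a.values()))
    return 100.0 * protected / len(group)

def infection_index(group) -> float:
    """(B) Mean number of tissues per animal from which Brucella was isolated."""
    return sum(sum(1 for cfu in a.values() if cfu > 0) for a in group) / len(group)

def colonization_log10(cfu_per_gram: float) -> float:
    """(C) Severity of colonization, expressed as log10 CFU/g of tissue."""
    return math.log10(cfu_per_gram) if cfu_per_gram > 0 else 0.0

print(vaccination_efficacy(animals))  # 50.0 (% fully protected)
print(infection_index(animals))       # 1.0 (mean positive tissues per animal)
print(colonization_log10(3200))       # ~3.51 log10 CFU/g
```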
Statistical Analysis
Differences in protective efficacy (complete protection vs. infection in animals) between groups were compared by a one-sided Fisher exact test at a significance level of α = 0.05. The significance of differences in antibody responses (IgG, IgG1, and IgG2a), SI, concentration of IFN-γ, the index of infection, and colonization of Brucella in tissues between groups was analyzed using two-way analysis of variance followed by Sidak's multiple-comparisons test. P < 0.05 was considered significant. Means are reported with standard errors (SEM). Statistical analysis of all experimental data was performed with GraphPad Prism Software version 6.0 (GraphPad Software Inc., La Jolla, CA, USA).
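For the group-level comparison of complete protection described above, a one-sided Fisher exact test can be set up as in the following sketch. The counts are hypothetical and only illustrate the test arrangement; the original analysis was performed in GraphPad Prism, whereas this uses SciPy.

```python
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = vaccinated / control,
# columns = fully protected / infected. Hypothetical counts for a group of 5.
table = [[2, 3],   # vaccinated: 2 of 5 fully protected
         [0, 5]]   # control:    0 of 5 fully protected

# One-sided test of whether complete protection is more frequent
# in the vaccinated group than in the control group.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(odds_ratio, p_value)
```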
Antibody Response to Brucella Proteins in Animals
In the serum of vaccinated sheep and goats, the IgG antibody response to a mixture of Brucella L7/L12, Omp16, Omp19, and SOD proteins peaked at 1 month PLV. In sheep, the antibody levels were significantly higher (P = 0.0007) compared to control (Figure 2A). IgG antibody responses in goats did not differ at any of the sampling times. Analysis of isotype-specific antibodies in sheep and goats at the first month PLV revealed significantly (P = 0.014-0.02) higher IgG2a than IgG1 levels (Figure 2A), indicating a Th1-polarized immune response. At the third and sixth months PLV, reduced production of antibodies was observed in vaccinated sheep and goats, and the data were not statistically significant (P = 0.33-0.97) compared to the values in control animals.
Vaccine Protection in Sheep and Goats Against B. melitensis Infection
The duration of the Flu-BA_Omp19-SOD vaccine protective efficacy in sheep and goats against B. melitensis 16M infection was assessed using parameters such as vaccination efficacy (level of full protection against infection, expressed in %), infection index, and Brucella colonization in tissues and organs. Our results are shown in Table 1.

[FIGURE 2 legend (beginning truncated): ...the subcutaneous and conjunctival route. Animals in the control group received only the adjuvant in phosphate-buffered saline. Statistical analysis was performed using two-way analysis of variance followed by Sidak's multiple-comparisons test. ELISA antibody levels are presented as OD ± standard error. Cell proliferation results were converted to a stimulation index [counts per minute (cpm) of wells containing antigens/cpm in the absence of antigens] for comparison. Antigen-specific IFN-γ production was determined for each individual animal by subtracting the background concentration of IFN-γ in wells without antigen from the IFN-γ concentration in wells with antigen. *P = 0.0007 vaccine group vs. control; **P = 0.014-0.02 IgG2a vs. IgG1 at the first month PLV. Data on lymphocyte stimulation index and levels of IFN-γ are presented as mean ± standard error; *P = 0.047-P < 0.0001 vaccine vs. control group.]
DISCUSSION
The Flu-BA vaccine was commercialized for use in cattle in Kazakhstan. Earlier, we tested this vaccine's efficacy after improving the formulation (Flu-BA_Omp19-SOD) and delivery system in pregnant sheep and goats and observed promising safety and efficacy (14). In this study, we for the first time evaluated the duration of the protective responses induced by the candidate vaccine in non-pregnant sheep and goats. Earlier, we conducted short-term pilot studies in both non-pregnant and pregnant small ruminants. In this study, the protracted duration of effectiveness of the vaccine, for up to 6 months PLV, was evaluated. The protective efficacy of the Flu-BA_Omp19-SOD vaccine in non-pregnant small ruminants at the first month PLV against B. melitensis infection was 20% in sheep and 40% in goats, whereas in pregnant sheep and goats it was 66.7 and 55.6%, respectively (14), wherein a similar vaccine formulation and immunization regimen were followed. The severity of B. melitensis infection in vaccinated sheep and goats in this study, measured by the infection index (3-3.4 times lower than control) and Brucella colonization in tissues (131 times lower than control), was inferior to that in pregnant animals (infection index 4.5-9.6, Brucella colonization >200 times lower than control) (14). However, the antigen-specific humoral and especially T-cell responses, which play an important role in anti-brucellosis immunity (16,17), were comparable in both studies. The Flu-BA_Omp19-SOD vaccine–induced antigen-specific IgG antibodies (IgG2a vs. IgG1), lymphocyte SI, and IFN-γ production in sheep and goats at the first month PLV were lower (SI: 2.3-2.7 vs. 3.1-3.7; IFN-γ production: 15.3-16.8 vs. 19.1-19.4 ng/mL) than those in the previously published study in pregnant small ruminants (14). We partly attribute the lower protective efficacy observed in this study to the wide age range (6-18 months) of the experimental animals used and the small numbers included in each group (n = 5), compared to the earlier study (14). This assumption is consistent with previous work (18), wherein responses by PBMCs measured by two different assays varied substantially in terms of cytokine production and proliferation, both between different sheep and within sheep over different sampling time points. In all previous experiments, the animals used were more age-homogeneous, and relatively larger numbers of younger animals were included (3-4 months, n = 7/group, or 9-10 months, n = 9/group) (12,14). Unfortunately, it has not been possible to reliably determine the effectiveness of the Flu-BA_Omp19-SOD vaccine in the adult immunized sheep and goats in this study. In any case, we used a sufficient number of animals in each group, which allowed us to obtain statistically reliable data.
At the third and sixth months PLV, we observed not only reduced Flu-BA_Omp19-SOD vaccine efficacy but also decreased humoral and T-cell responses. However, it is important to note that the severity of B. melitensis 16M infection in vaccinated sheep and goats at the indicated times of observation, estimated by the infection index and the degree of Brucella colonization in tissues, was significantly lower than in the control group.

[FIGURE 3 legend (partial): Animals in the control group were administered with adjuvant in phosphate-buffered saline. Animals were challenged with the virulent strain of B. melitensis 16M at a dose of 10⁶ CFU/animal via the subcutaneous route. The bacteriological examination was assessed by the index of infection in animals (number of organs and lymph nodes from which Brucella was isolated in each animal; the arithmetic mean ± standard error is given) and colonization of Brucella in tissues (data given as log10 CFU/g). Statistical analysis was performed using two-way analysis of variance followed by Sidak's multiple-comparisons test. Data on the index of infection and colonization of B. melitensis in tissues are presented as mean ± standard error; *P = 0.04-P < 0.0001 vaccine vs. control group. LN, lymph node.]
Our results demonstrated that partial protection was induced by the Flu-BA_Omp19-SOD vaccine in sheep and goats for at least 6 months PLV.
Comparing our results on the duration of the protective responses of the Flu-BA_Omp19-SOD vaccine in sheep and goats with those of the available vaccines (19,20), it is clear that our candidate vaccine is insufficient to provide complete protection against infection. For example, the commercial vaccine (B. melitensis Rev.1) provides more than 80% full protection in vaccinated small ruminants for ∼2-5 years (19). The Flu-BA vaccine after prime-boost immunization provides at least 12 months of antigen-specific T-cell immune response and protection in 57% of cattle against B. abortus 544 infection (20). The apparent difference in the duration of the protective anti-brucellosis immune response in cattle (20) and in small ruminants vaccinated with the same vaccine type indicates that IVV-based technology is most appropriate for cattle, and less so for sheep and goats. This can be explained by the fact that cattle are more sensitive to influenza A viruses (7,8), and consequently our IVVs effectively express Brucella proteins and induce a more pronounced immunity in that species. Comparative analysis of all these data with the Flu-BA_Omp19-SOD vaccine (12,14) indicates that it does not meet the important requirement of prolonged complete protective immunity in sheep and goats, indicating the need for improvements in the vaccine formulation, dosage, and immunization regimen. Further, it is important to note that the present study was performed using non-pregnant small ruminants, which are less sensitive to brucellosis infection (attributed to the presence of erythritol in the pregnant ruminant's placenta, an important growth factor for Brucella) (21,22); we therefore expected higher vaccine efficacy than in pregnant animals, but the result was the opposite. However, we still need to evaluate the duration of the protective immune responses to the Flu-BA_Omp19-SOD vaccine in pregnant sheep and goats. Consistent with our vaccine results, a similar attempt to use the commercial B. abortus vaccine RB51 in small ruminants against B. melitensis infection was unsuccessful (23).
To conclude, the Flu-BA_Omp19-SOD vaccine, using the improved formulation and administration method in sheep and goats, provides an augmented antigen-specific humoral and T-cell immune response lasting for only 1 month PLV and partial protection for 6 months against B. melitensis 16M infection.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material.
ETHICS STATEMENT
This study was carried out in compliance with national and international laws and guidelines on animal handling. The protocol was approved by the Committee on the Ethics of Animal Experiments of the Research Institute for Biological Safety Problems of the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan. Animals were euthanized using sodium pentobarbital anesthetic and all recommended efforts were taken to minimize suffering.
AUTHOR CONTRIBUTIONS
KT, AM, SR, and NZ: conception and design of the study, or acquisition of data, or analysis and interpretation of data. KT: drafting the article or revising it critically for important intellectual content. KT, GR, and E-MZ: final approval of the version to be submitted.
FUNDING
This work was supported by grants from the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (grant nos. 1296/GF4 and AP05135949). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
"year": 2020,
"sha1": "257a78983b3ea9db76af4cb98feaa580cb2d9f56",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2020.00058/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2b26491cfdb9cfb46aeee8788b97054d6980a626",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Background Sarcoidosis is a systemic granulomatous disease of unknown etiology primarily affecting the lungs. Treatment is needed when disease symptoms worsen and organ function deteriorates. In pulmonary sarcoidosis, prednisone and methotrexate (MTX) are the most common anti-inflammatory therapies. However, there is large inter-patient variability in response to treatment, and predictive response markers are currently lacking. Objective In this study, we investigated the predictive potential of biomarkers in extracellular vesicles (EVs) isolated from biobanked serum of patients with pulmonary sarcoidosis stored prior to start of therapy. Methods Protein concentrations of a four-protein test panel of inflammatory proteins were measured in a discovery (n = 16) and replication (n = 129) cohort of patients with sarcoidosis and 47 healthy controls. Response to therapy was defined as an improvement of the absolute score of > 5% forced vital capacity (FVC) and/or > 10% diffusion lung of carbon monoxide (DLCO) after 24 weeks compared to baseline (before treatment). Results Serum protein levels differed between EV fractions and serum, and between sarcoidosis cases and controls. Serpin C1 concentrations in the low density lipid particle EV fraction were lower at baseline in the group of patients with a good response to MTX treatment in both the discovery cohort (p = 0.059) and in the replication cohort (p = 0.032). EV Serpin C1 showed to be a significant predictor for response to treatment with MTX (OR 0.4; p = 0.032). Conclusion This study shows that proteins isolated from EVs harbor a distinct signal and have potential as new predictive therapy response biomarkers in sarcoidosis. Supplementary Information The online version contains supplementary material available at 10.1186/s12931-024-02809-y.
Introduction
Sarcoidosis is a systemic granulomatous disease of unknown cause mainly affecting the lungs, intrathoracic lymph nodes, eyes and skin [1]. Diagnosis, monitoring, as well as predicting disease course or response to therapy, is challenging in the management of patients with sarcoidosis. For decades, biomarkers such as angiotensin converting enzyme (ACE) and soluble interleukin-2 receptor (sIL-2R) have been studied to guide clinical management, unfortunately with modest sensitivity and specificity [2,3]. Pharmacological treatment of sarcoidosis is initiated to prevent further specific organ damage or alleviate symptoms. Immunosuppressing and immunomodulating drugs, most often in the form of prednisone and methotrexate (MTX), are the first- and second-line choices of therapy. When initiating therapy, however, it is not possible to predict the treatment response for individual patients upfront, while a significant proportion of patients shows no benefit from therapy. Personalized prediction of treatment response is an unmet clinical need in light of protecting patients from exposure to ineffective drugs and their side effects.
Sarcoidosis is characterized by the formation of non-caseating granulomas, persistent inflammation and activated monocytes/macrophages [4]. Macrophages form the core of the granuloma, producing inflammatory cytokines and chemokines to attract lymphocytes, resulting in an inflammatory environment [5]. These macrophages originate from circulating bone-marrow derived monocytes, which are patrolling antigen-presenting cells but have a secondary function as a reservoir to replenish the macrophage pool in the tissues when needed. Several studies have highlighted an increased inflammatory status of monocytes in the blood of sarcoidosis patients [6-9]. This increased inflammatory status can also be induced through extracellular vesicles (EVs), as has been described previously by Wahlund et al. [9]. Furthermore, more monocytes are found in the circulation of patients with sarcoidosis, and these cells have the capacity to activate other immune cells, not only through cell-cell interaction but also through monocyte-derived EVs [10,11]. Taken together, EVs isolated from whole blood serum may be particularly informative in sarcoidosis.
EV is an umbrella term for all vesicles found in body fluids, including exosomes, microvesicles and apoptotic bodies. In the last decade, it was found that EVs can functionally transfer molecules between cells [12]. EVs contain nucleic acids, lipids and proteins from the releasing cells and are often referred to as liquid biopsies [13,14], reflecting the status/pathology of the releasing cell. EV proteins are often better associated with the pathology than the same freely circulating proteins in blood [15]. In the search for new potential biomarkers, there has been an increased interest in EVs [16-18]. Because this emerging field of research is relatively new, there is a need for standardization in both methodology and technology to be able to validate EV-associated biomarkers [19].
For this study a well-established EV-protein panel [20] was used, originally developed to predict adverse cardiovascular events. This panel consisted of four proteins: CD14, serpin G1, serpin C1 and cystatin C. Although the four proteins in this panel are mostly used in the field of cardiology, these proteins are also involved in inflammation and coagulation [21-27]. Therefore, this panel of proteins could be of interest in the assessment of inflammation and the prediction of response to immunosuppressive treatment in sarcoidosis. The proteins were originally measured in the form of a multiplex; however, for this study the proteins were measured separately to investigate the associations of the individual proteins with response to treatment.
In this exploratory study, we investigated the potential of inflammatory biomarkers derived from serum-isolated fractions of EVs using a well-established panel of proteins previously validated in EVs. The goal of the present study was three-fold. First, we investigated whether differences in levels of inflammatory biomarkers exist between EV isolates and serum. Second, we investigated whether EV-derived biomarkers differ between patients with sarcoidosis and healthy controls. Third, we assessed whether these inflammatory biomarkers predict treatment response in patients using prednisone or MTX.
Patients
Case and control samples were collected from the St. Antonius Hospital ILD biobank, screening all sarcoidosis patients (n = 2265) for eligibility. Cases were selected based on a pulmonary treatment indication (decrease in lung function, dyspnea or pulmonary fibrogenesis), treatment with prednisone or MTX, and the presence of serum collected within 6 months prior to start of treatment. In total, 98 sarcoidosis patients were treated with MTX and 69 patients with prednisone. All patients were adults > 18 years with sarcoidosis diagnosed according to the international ATS/ERS/WASOG criteria [1].
Baseline characteristics (age at diagnosis, gender, comorbidities as reported in the medical records), lung function, Löfgren syndrome, as well as organ manifestation, were recorded up to 2 years after start of treatment. Based on the results of the recently reported SARCORT trial [28], as well as the sarcoidosis treatment score [29], an improvement of the absolute score of > 5% forced vital capacity (FVC) and/or > 10% diffusion lung of carbon monoxide (DLCO) after 24 weeks compared to baseline (before treatment) classified a patient as a "responder". All other patients were classified as "non-responder".
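The responder definition above translates directly into a simple classification rule. The sketch below is a minimal illustration of that rule with hypothetical lung-function changes; it is not code from the study.

```python
def is_responder(delta_fvc_pct: float, delta_dlco_pct: float) -> bool:
    """Responder: absolute improvement of >5% FVC and/or >10% DLCO
    at 24 weeks compared to baseline (before treatment)."""
    return delta_fvc_pct > 5.0 or delta_dlco_pct > 10.0

# Hypothetical examples
print(is_responder(delta_fvc_pct=7.0, delta_dlco_pct=4.0))   # True  (FVC criterion met)
print(is_responder(delta_fvc_pct=2.0, delta_dlco_pct=12.0))  # True  (DLCO criterion met)
print(is_responder(delta_fvc_pct=1.0, delta_dlco_pct=3.0))   # False (neither criterion met)
```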
This study was performed in accordance with the Declaration of Helsinki and GCP guidelines. The study was approved by the Medical research Ethics Committees United (MEC-U) of the St. Antonius Hospital (R05-08A), and written consent was obtained from all patients.
Study cohorts
First, the discovery cohort was composed: a cohort of 16 prednisone and 16 MTX cases, matched on age, sex, ethnicity and smoking history as closely as possible. For each treatment, eight patients were responders and eight patients were non-responders to therapy. The replication cohort consisted of the remaining 49 prednisone and 80 MTX cases. In the replication cohort, the prednisone-treated group consisted of 29 responders and 20 non-responders, and the MTX-treated group consisted of 42 responders and 38 non-responders. In addition, 47 healthy control samples were included. The workflow is summarized in a flowchart in Fig. 1. Blood samples were collected before any treatment was given. Serum was isolated by centrifugation of serum separator clot activator tubes at 1800 g for 5 minutes and stored at −80 °C in the St. Antonius Hospital BIOBANK until further use.
EV isolation
EV fraction isolation was based on the protocol of Dekker et al. [30]; however, here serum instead of plasma was used (see the detailed description in the supplemental materials). EVs were isolated from 25 μL of serum with the use of magnetic beads (Nanomag®-D plain, 130 nm, 1:25; Micromod). For sequential isolation of the fractions, Dextran Sulphate (DS) (MP Biomedicals, Illkirch, France) and manganese(II) chloride (MnCl₂) solution (Sigma Aldrich, St. Louis, MO, USA) were used. The presence of isolated EVs using this technique was previously confirmed with electron microscopy and western blotting [30-32]. Different fractions of EVs in serum were obtained by EV co-precipitation with monolayer low-density lipid particles (LDL fraction) and with bilayer-membrane high-density lipid particles (HDL fraction). Previously performed experiments have shown that relatively small EVs (±101 nm) are present in the LDL fraction, while larger particles are found in the HDL fraction (±120 nm) [30].

Fig. 1 Flowchart showing the procedure for identification of patients and the distribution of patients across the discovery and replication cohorts. In brief, from 2265 patients with sarcoidosis and informed consent for the ILD biobank, 161 patients fulfilled the criteria to be included in the study. A total of 32 sex- and age-matched patients were selected for the discovery cohort. The remaining 129 patients were included in the replication cohort.
Protein concentration measurements
On a 96-well plate, LDL, HDL and whole serum were measured simultaneously. Protein concentrations in EV fractions and serum were measured for a well-established protein panel consisting of CD14, cystatin C, serpin C1 and serpin G1 [20]. Capture antibody, biotinylated detection antibody and antigen for all four proteins were purchased from R&D Systems. Proteins were quantitatively analyzed by Luminex-based multiplex assay (Bio-Rad, Austin, USA). All EV protein levels were corrected for the total amount of protein.
Statistical analysis
The non-parametric Mann-Whitney U test was used for non-normally distributed data. Categorical variables were compared using the Chi-squared and Fisher's exact test, where appropriate. Log-transformed values of the protein levels were used to reduce the effect of skewness in the distribution of the protein levels. To calculate the odds ratio and enable direct comparison between different proteins, EV-protein levels were converted into standardized units, or the z-score, calculated as the observed value minus the mean value, divided by the standard deviation. To investigate the relationship between the protein levels and response to therapy, a logistic regression model was used with the outcome "response to medication". Spearman's rho correlations were calculated to assess direct relationships between protein concentrations and other parameters. For the analysis in the discovery cohort, p-values < 0.1 were considered of interest; for the analyses of the replication cohort and the combined cohort, p-values < 0.05 were considered significant.
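A minimal sketch of the standardization and logistic-regression steps described above, written with NumPy and statsmodels rather than the software used in the study. The data are randomly generated stand-ins for log-transformed EV serpin C1 levels and response labels, so the printed odds ratio is illustrative only.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical log-transformed EV serpin C1 levels and binary response labels.
rng = np.random.default_rng(0)
log_serpin_c1 = rng.normal(loc=2.0, scale=0.5, size=60)
response = (rng.random(60) < 0.5).astype(int)

# z-score: (observed value - mean) / standard deviation
z = (log_serpin_c1 - log_serpin_c1.mean()) / log_serpin_c1.std(ddof=1)

# Logistic regression of response on the standardized protein level;
# the exponentiated slope is the odds ratio per SD increase in the protein.
model = sm.Logit(response, sm.add_constant(z)).fit(disp=0)
print(np.exp(model.params[1]))  # odds ratio per SD
print(model.pvalues[1])         # p-value for the protein term
```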
Baseline characteristics
Baseline characteristics of the age-, sex-, ethnicity- and smoking history-matched discovery cohort for the patient groups treated with MTX or prednisone are shown in Table 1. For the MTX group, the DLCO %pred was significantly lower in the responding group (p = 0.010).
Differences in protein levels between EV and serum in sarcoidosis
The proteins from the protein test panel were measured in the EV fractions (LDL and HDL) and in whole serum. Figure 2 shows the concentrations of the different proteins measured in the discovery cohort (n = 32).
Compared with the EV fractions, whole serum protein concentrations were higher for CD14, cystatin C and serpin C1, but not for serpin G1. Furthermore, protein concentrations of serpin G1 and serpin C1 were lower in the HDL fraction than in the LDL fraction.
EV-derived proteins are higher in sarcoidosis than in controls
The proteins from the protein test panel measured in the discovery cohort were compared to protein levels in healthy controls (HC) (n = 47). Baseline characteristics are shown in Supplementary Table S1. The concentration of proteins was significantly higher in sarcoidosis patients compared to HC when measured in whole serum (Fig. 3).
For the HDL EV fraction, only serpin C1 was significantly different between patients and HC. For the LDL fraction, a significant difference was observed for all proteins of the test panel. When compared to healthy controls, the concentrations of CD14 and cystatin C were higher in the LDL fractions of patients, while the concentrations of both serpin proteins were lower in the LDL fractions.

[Fig. 2 legend (partial): For all proteins except serpin G1, serum concentrations were higher compared to the EV fractions (LDL and HDL). For both serpin G1 and serpin C1 there was a difference between concentrations in LDL and HDL. LDL = low-density lipid, HDL = high-density lipid. **p < 0.005.]
Response to treatment
In the prednisone discovery cohort, no differences were found in protein concentrations between responders and non-responders in either the LDL or HDL fraction or in serum. In the MTX discovery cohort, concentrations of serpin C1 in the LDL fraction were lower (p = 0.059) in the group of responders, while concentrations of CD14 in the LDL fraction were significantly higher in the group of patients classified as responders (p = 0.014). In the HDL fraction, cystatin C concentrations were lower in the group of patients classified as non-responders (p = 0.027) (Fig. 4). No difference was found in the serum protein levels of responders and non-responders (Supplementary Fig. S1).
Replication cohort

Baseline characteristics
In the second phase of the study, the EV proteins were quantified in a larger replication cohort of 80 sarcoidosis patients treated with methotrexate and 49 sarcoidosis patients treated with prednisone. Baseline characteristics are shown in Table 2. As with the discovery cohort, all included patients had a pulmonary treatment indication. Supplementary Fig. S2 shows the concentrations of all proteins measured in the replication cohort. Regarding the proteins of interest identified in the discovery cohort, serpin C1 concentrations in the LDL fraction were significantly lower in the methotrexate responder group than in the non-responder group of the replication cohort (p = 0.032; Fig. 5a). Regarding CD14 in LDL and cystatin C in HDL, however, no difference between responders and non-responders was found (p = 0.091 and p = 0.215, respectively; Fig. 5b, c).
EV proteins to predict response to MTX
To determine the predictive value of serpin C1 for treatment response, we combined the MTX discovery and replication cohorts for further analysis. Values were log-transformed to stabilize the variance. Logistic regression revealed that serpin C1, when measured in the LDL subfraction, was a significant predictor of response to treatment (OR 0.42, 95% CI 0.19-0.93; p = 0.032). To investigate whether there was a direct relation, Spearman's rank correlations were computed between the change in FVC 6 months after initiation of MTX therapy and protein concentrations at baseline. There was a positive correlation between change in FVC and CD14 concentrations in the LDL fraction (R = 0.35; p = 0.004). None of the other proteins correlated directly with change in FVC, and no correlation was found for change in DLCO.
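The correlation step can be reproduced with scipy. This is a hedged sketch: the file and column names are hypothetical, and the printed values would approach the reported R = 0.35, p = 0.004 only on the actual combined-cohort data.

```python
import pandas as pd
from scipy.stats import spearmanr

# hypothetical files holding baseline CD14 (LDL fraction) and 6-month FVC change
combined = pd.concat([pd.read_csv("discovery.csv"), pd.read_csv("replication.csv")])
rho, pval = spearmanr(combined["cd14_ldl_baseline"], combined["delta_fvc_6m"])
print(f"Spearman rho = {rho:.2f}, p = {pval:.3f}")
```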
Discussion
In this study, we showed that concentrations of proteins in serum and in EV fractions isolated from serum can differ significantly from each other, depending on the analyzed protein. Second, we found that levels of proteins in EV fractions differed significantly between sarcoidosis patients and healthy controls. Third, we showed that EV proteins, and particularly serpin C1 in the LDL EV fraction, are significantly lower at baseline in the group of patients that responded to treatment with MTX, which we verified in a second cohort. Baseline serpin C1 was thus shown to be a marker with predictive value for response to MTX therapy. Our data demonstrate that measuring proteins specifically in EVs can provide novel information in sarcoidosis research compared to measuring the same proteins in serum alone. Although EVs have the disadvantage of an extra isolation step, they also have advantages such as stability, easy accessibility and minimal sample volumes [18]. Furthermore, EVs are increasingly recognized to contain proteins involved in cellular processes directly linked to disease pathogenesis [33].
Table 2 Baseline characteristics of the replication cohort of sarcoidosis patients with pulmonary treatment indication
The difference in protein expression between patients and healthy controls is more evident in the LDL fraction than in the HDL fraction. A possible explanation for this observation lies in the difference in composition of the vesicles: HDL vesicles are larger and are suggested to contain more proteins from the originating cells [34,35], while the LDL fraction consists of smaller EVs, including exosomes, which are released by activated immune cells and have the capacity to activate other immune cells. The overly active immune system in sarcoidosis patients could lead to an increased release of exosomes, eventually resulting in higher EV-protein concentrations in patients versus healthy controls [36]. Protein concentrations of the pro-inflammatory markers CD14 and cystatin C have been shown to be elevated in multiple inflammatory disorders [37,38] and were also elevated in the LDL fraction in sarcoidosis patients compared to healthy controls. Both serpin C1 and G1, which exert anti-inflammatory properties [21,39], were elevated in healthy controls. However, as previous research describes a decrease in EV numbers with advancing age [40], we also have to take into account a possible age effect on the differences in EV-protein concentrations between patients and healthy controls, since the healthy controls were significantly younger. When the protein concentrations were compared with the age of the healthy control subjects, none of the proteins correlated with age, suggesting a minimal impact of age on the difference in EV-protein concentrations between patients and healthy controls.
Regarding the diagnostic potential of these biomarkers, there was no added value of EVs over serum, since all four proteins also differed significantly between patients and healthy controls in serum. Further analyses, using serum samples taken at the time of diagnosis from patients with sarcoidosis and from patients with differential diagnoses, are needed to determine the diagnostic potential of the markers. The EV proteins in our panel are general inflammation markers, found on the cell surface, shed from cells or actively involved in cellular inflammatory processes. Therefore, it is not surprising that we found a difference in protein concentrations between healthy controls and patients. However, an interesting finding was made for serpin C1. In the LDL and HDL EV fractions, serpin C1 concentrations were lower in patients than in controls, while in whole serum the serpin C1 concentration was higher in patients than in controls. The effect of serpin C1 in the circulation differs from its effect as a cellular surface marker, where its anti-inflammatory action is enhanced. As previously suggested, more serpin C1 may have been shed in patients with sarcoidosis compared to healthy controls [41], which corresponds to the increased levels we found in the serum of patients.
A further aim was to assess whether protein concentrations in EVs could have predictive value for response to therapy. To assess this, patients receiving either prednisone or MTX were divided into responders and non-responders, based on change in lung function. The protein test panel revealed no difference in EV fractions between responders and non-responders in the group of patients treated with prednisone. For the patient group treated with MTX, serpin C1 concentrations were significantly lower in the group of patients responding to treatment, while CD14 concentrations were higher in the responder group; for CD14, however, the difference between responders and non-responders was no longer significant in the replication cohort. In addition, we found a positive correlation between CD14 concentrations at baseline and change in FVC over the course of 6 months of MTX treatment (p = 0.004). This protein was previously identified as an EV marker for sarcoidosis by Futami et al., who described CD14 levels in EVs to be significantly increased in patients with sarcoidosis and showed that CD14 was up-regulated in the process of granuloma formation [42]. In MTX-treated patients with rheumatoid arthritis (RA), a positive correlation has been described between the concentration of soluble CD14 (sCD14) and response to MTX: RA patients with the highest sCD14 concentration at baseline responded best to MTX treatment [43]. Our findings suggest a similar effect, as responders have the highest concentration of CD14 in the vesicles at baseline and this correlates with the greatest change in FVC. In the future, it would be interesting to see how the CD14 concentration in EVs behaves during the 6 months of treatment; due to a lack of follow-up samples, however, we were unable to measure this.
Serpin C1 was the only one of the four proteins with a difference in baseline concentration between responders and non-responders to treatment with MTX in both the discovery and the replication cohort. Significantly lower protein concentrations of serpin C1 were found in the LDL EV fraction of patients responding to MTX treatment than in non-responding patients.
Serpin C1, also known as antithrombin III (AT III), is a protein involved in the inhibition of thrombin-mediated activation of protease-activated receptors (PARs), resulting in anti-inflammatory activity [26]. MTX has been described to exert its immunomodulatory properties through inhibition or activation of a number of pathways, including the adenosine monophosphate protein kinase (AMPK) signaling pathway [44]. Serpin C1 exerts part of its anti-inflammatory effect through activation of the AMPK signaling pathway [45]. Upregulation of AMPK signaling leads to inhibition of nuclear factor-kB (NF-kB) signaling, consequently leading to AMPK-downstream alterations in cytokine production and leukocyte activation [46]. In the group of sarcoidosis patients with an insufficient response to MTX, we found higher concentrations of serpin C1 in the LDL fraction of the EVs. More serpin C1 activity could lead to upregulated signaling of the AMPK pathway. If the AMPK pathway is already upregulated in patients with higher concentrations of serpin C1, this could lead to a decrease in available ATP in the cells [47]. This ATP is needed for the alteration of adenosine signaling that results from MTX therapy [48]. If less ATP is available due to sustained AMPK activation driven by serpin C1 activity, this could negatively affect the effectiveness of MTX in terms of adenosine signaling.
Previous studies on EVs in sarcoidosis have been performed with EVs isolated from bronchoalveolar lavage fluid (BALF), yielding EVs derived from the cells present in the airways and alveoli. These studies showed that EVs were more abundant and proteins were upregulated in BALF from patients compared to healthy controls [9,49]. Because the composition of immune cells differs between BALF and blood, there will consequently be a difference in composition between BALF and serum EVs. Nevertheless, BALF-derived EVs from sarcoidosis patients are capable of activating PBMCs of healthy controls in vitro, as demonstrated by Wahlund et al. [9]. Although assessment of the pulmonary compartment may better reflect pulmonary inflammation, the use of EVs derived from peripheral blood would increase clinical applicability in the future.
Our study has a number of limitations. Firstly, the study has a retrospective design; therefore, not all patient information was available, such as data on Scadding stage. Secondly, the analysis was done using samples from a single time point, before the start of treatment. Protein concentrations may change over time in response to disease activity, and in the case of serpin C1 it is not known how levels behave prior to and after the start of treatment with MTX. Nevertheless, this study showed that biobanked samples of patients undergoing real-world treatment are a valuable resource for EV biomarker discovery studies.
Future dynamic and prospective studies are needed to further validate the value of this protein as a predictive biomarker for response to therapy.
Conclusion
This study showed that measuring inflammatory biomarkers in EVs yields results that differ markedly from measuring the same biomarkers in the original serum sample. EVs therefore have high potential in the search for new diagnostic or predictive biomarkers in sarcoidosis. In future studies, serpin C1 deserves special attention, as we found an association between the EV concentration of serpin C1 and treatment response in patients with pulmonary sarcoidosis treated with MTX.
Data are shown as whole numbers with percentages in brackets. Response to treatment was based on improvement in lung function (FVC %pred > 10% or DLCO %pred > 10%) after 6 months of treatment. Age, lung function and biomarkers are shown as mean ± SD. a Age at time of blood withdrawal. b Lung function and Scadding stage were measured before the start of treatment. Scadding stages: 0 = normal chest radiograph; I = bilateral hilar lymphadenopathy (BHL); II = BHL with pulmonary infiltrates; III = pulmonary infiltrates without BHL; IV = fibrosis. SFN: small fiber neuropathy. DLCO (%) was significantly lower in patients responding to MTX therapy. *p < 0.05
Fig. 2
Fig. 2 Concentrations of proteins measured in different EV fractions (LDL and HDL) and in whole serum in the sarcoidosis discovery cohort (n = 32). For all proteins except serpin G1, serum concentrations were higher compared to the EV fractions (LDL and HDL). For both serpin G1 and C1, there was a difference between concentrations in LDL and HDL. LDL = low-density lipid, HDL = high-density lipid. **p < 0.005
Fig. 3
Fig. 3 Concentrations of proteins from the EV-protein test panel. Light triangles represent healthy controls and dark squares represent sarcoidosis patients from the discovery cohort. a Concentrations of CD14; (b) concentrations of cystatin C; (c) concentrations of serpin C1; and (d) concentrations of serpin G1 measured in the EV fractions LDL and HDL and in whole serum. LDL = low-density lipid, HDL = high-density lipid. **p < 0.005
Fig. 4
Fig. 4 Concentrations of proteins from the EV-protein test panel in patients with sarcoidosis treated with methotrexate in the discovery cohort (n = 16). Figures (a-d) represent EV proteins measured in the LDL fraction, figures (e-h) EV proteins measured in the HDL fraction and figures (i-l) proteins measured in whole serum. Filled squares represent non-responders (NR) and open squares represent responders (R) to treatment with methotrexate. *p < 0.1
Fig. 5
Fig. 5 Concentrations of EV proteins in patients with sarcoidosis treated with methotrexate in the replication cohort (n = 82), measured in different EV fractions as well as in whole serum. Figure (a) represents concentrations of serpin C1 in the LDL fraction; figure (b) concentrations of CD14 in the LDL fraction; and figure (c) concentrations of cystatin C in the HDL fraction in non-responders and responders to treatment with methotrexate. Serpin C1 concentrations were significantly higher in non-responders than in responders (p = 0.032). Black filled squares represent non-responders (NR) and gray open squares represent responders (R) to treatment with methotrexate. *p < 0.05 | 2024-04-16T13:07:22.750Z | 2024-04-16T00:00:00.000 | {
"year": 2024,
"sha1": "ea4b462ce2adb60144a9a7d76dc1557b524aa629",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "aed9abcf5e3282e13ce1e5c0bf8bf428590abc48",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239612568 | pes2o/s2orc | v3-fos-license | Analysis of Endoscopic Evaluation Reliability for Ulcerative Colitis in Histological Remission
The Mayo endoscopic subscore (MES) is a major endoscopic scoring system used to assign a status of mucosal inflammation and disease activity to patients with ulcerative colitis (UC). Using interobserver reliability (IOR), this study clarified the difficulties that MES parameters pose for endoscopic observers in the endoscopic evaluation of UC in histological remission. First, 42 endoscopists in four observer groups assessed each MES parameter on endoscopic images of 100 cases graded 0 or 1 on the Nancy histological index of histopathological inflammation. IOR was then assessed using multiple κ statistics for each MES finding. The results showed that IOR among all observers was slight or fair for all parameters, indicating low IOR. The experts of the UC practice group had "moderate" or higher IOR for seven of the nine parameters, whereas the trainee group scored "slight" or "fair" on all parameters. The IOR for each MES parameter was also calculated separately for the observer groups; all groups scored "slight" or "fair" for "Erythema" and "Decreased vascular pattern". Large differences between endoscopists were thus found in the IOR for the MES parameters in UC in histological remission. Even among UC practice experts, the IOR was low for "Erythema" and "Decreased vascular pattern".
Introduction
During the management of ulcerative colitis (UC), life events such as school attendance, employment, marriage, pregnancy, and childbirth become possible when long-term maintenance of remission is achieved [1,2]. Endoscopic remission (ER) is important as a short-term therapeutic goal leading to the achievement of long-term therapeutic goals. The importance of the Treat to Target strategy, through which treatment is organized to achieve ER, has been proposed [3,4].
Endoscopic evaluation is crucially important for UC management and treatment [1]. Several scores have been used to characterize and quantify the endoscopic findings of UC [5-7]. Among them, the Mayo endoscopic subscore (MES) presented by Schroeder et al. in 1987 [8] is a major endoscopic scoring system for evaluating the status of mucosal inflammation and disease activity, and it remains the most commonly used endoscopic evaluation scale [9-11]. As an index of endoscopic activity based only on endoscopic mucosal findings, the MES system is used frequently. Nevertheless, it is a subjective evaluation; different evaluations of the same endoscopic image by different observers are common. The objectivity of such evaluations has been investigated using various methods, and diagnostic criteria for endoscopy are regarded as more reliable when interobserver reliability (IOR) among endoscopists is higher. Travis et al. reported that the components of the ulcerative colitis endoscopic index of severity (UCEIS) show satisfactory intra- and inter-investigator reliability [12]. A systematic review indicated that the sigmoidoscopic component of the MES and the UCEIS show the most promise as reliable instruments for evaluating endoscopic disease activity [13]. As described above, different studies have positively evaluated the validity and reliability of the endoscopic criteria commonly used for UC today. However, in several cases, ER was observed without full achievement of histological remission [14-16]. Some reports describe that histological activity and endoscopic activity are correlated [17,18], but endoscopic findings with low activity must still be assessed in patients who have achieved histological remission. For this purpose, high IOR is necessary for the endoscopic findings being assessed. Nevertheless, no report has described the evaluation of the IOR of each MES parameter in patients who have achieved histological remission.
This study was designed to evaluate the IOR among endoscopists for each MES parameter used in the endoscopic evaluation of UC cases that had achieved histological remission, and thereby to clarify the difficulties posed by the MES parameters.
Study Design and Ethics
This study was approved by the ethics committee of Dokkyo Medical University Hospital (approval no. R-36-7J), conducted in accordance with the ethical principles stipulated in the Declaration of Helsinki, and registered with the University Hospital Medical Network Clinical Trials Registry (R000051904). Regarding the use of patients' endoscopic photographs, informed consent was replaced by an opt-out procedure: research information was published on our website, guaranteeing participants the opportunity to be notified and to decline participation.
Collection of Endoscopic Images and Histological Evaluation
From the medical chart database of 353 patients treated for UC at the Department of Gastroenterology of Dokkyo Medical University Hospital from 1 January 2018 to 31 December 2019, we extracted data of patients who had maintained clinical remission for at least 1 year (clinical remission was defined as a partial Mayo score of 3 points or lower [8]; 126 cases with remission maintained for less than 1 year were excluded) and who were judged as Grade 0 or 1 on the Nancy histological index [19] of histopathological inflammation at periodic endoscopy (98 cases with Nancy histological index Grade 2, 3 or 4 were excluded), and collected the colonoscopic images obtained at the time of pathological diagnosis. Pathological examinations were performed by two pathologists specializing in gastrointestinal pathology; the degree of inflammation was assessed using the Nancy histological index based on the agreement of those two pathologists. From those cases, 29 judged by the principal investigator to have poor-quality endoscopic images or an ambiguous pathological diagnosis were excluded, leaving 100 patients for this study (Figure 1). For each of these 100 cases, MK selected the clearest image from the endoscopic images of the site where the histopathological biopsy was conducted. The 100 images were presented to the observer endoscopists without patient information, so the observers could not obtain information related to histologic activity; this excluded bias from the endoscopic evaluation. MK was not included among the observers in this study.
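The selection flow can be expressed as a short filtering sketch; the file and field names below are hypothetical stand-ins for the chart database, not the study's actual data structures.

```python
import pandas as pd

patients = pd.read_csv("uc_patients.csv")                     # 353 treated UC patients
remission = patients[patients["remission_months"] >= 12]      # excludes 126 cases
histology = remission[remission["nancy_index"] <= 1]          # excludes 98 cases
selected = histology[~histology["poor_image_or_ambiguous"]]   # excludes 29 -> n = 100
```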
Observers for IOR Evaluation
The images described above were shared with 42 endoscopists from our department, including trainers and trainees, who evaluated them to assign an MES classification. These 42 endoscopists were classified into the following four groups based on caseload, years of experience and expertise: group A endoscopists examined at least 200 IBD patients per year and were certified as Board Certified Fellows of the Japan Gastroenterological Endoscopy Society (5 experts); group B endoscopists were not specialized in IBD treatment but were certified as Board Certified Fellows of the Japan Gastroenterological Endoscopy Society (14 persons); group C endoscopists had at least six years of clinical experience as gastroenterologists but were not certified as Board Certified Fellows of the Japan Gastroenterological Endoscopy Society (16 persons); and group D endoscopists were trainees with fewer than six years of clinical experience as gastroenterologists (7 trainees).
Method of Presenting Endoscopic Findings
Endoscopic findings lead to assignment of an MES [8,20] as MES 0 (normal, inactive disease), MES 1 (erythema, decreased vascular pattern, mild friability), MES 2 (marked erythema, absent vascular pattern, friability, erosions), or MES 3 (spontaneous bleeding, ulceration). Among these findings, mild friability and friability can only be evaluated in real time during endoscopy; they were excluded from the selected parameters because evaluating them from a single presented image was expected to be too difficult. The 100 endoscopic images selected by MK were presented to observers to evaluate the presence or absence of the endoscopic findings (nine selected parameters excluding mild friability and friability: normal (a), inactive disease (b), erythema (c), decreased vascular pattern (d), marked erythema (e), absent vascular pattern (f), erosions (g), spontaneous bleeding (h), and ulceration (i); Figure 2). The evaluators were not informed that these cases had a Nancy histological index of 0 or 1.
For this study, IOR analysis was conducted for each endoscopic finding (multiple κ statistics).
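A multi-rater κ of the kind referred to above can be computed with statsmodels. Whether the study used Fleiss' κ specifically is an assumption on our part, and the ratings matrix below is a random placeholder standing in for the 100 × 42 table of binary judgments.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(100, 42))   # 100 cases x 42 raters, 0/1 judgments
counts, _ = aggregate_raters(ratings)          # per-case counts for each category
print(fleiss_kappa(counts))
```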
Outcomes
For assessing the primary outcome of the present study, IOR was calculated among all observers for the MES parameters used for the endoscopic evaluation of UC. The secondary outcome was a comparison of findings obtained for the MES parameters among the four groups.
IOR among All Observers for MES Parameters
The IOR values among all observers (42 persons) were calculated for the MES parameters (Table 1), as described below. The interobserver κ coefficients for the respective endoscopic features of UC were 0.402 ± 0.003 for normal, 0.389 ± 0.003 for inactive disease, 0.235 ± 0.003 for erythema, 0.215 ± 0.003 for decreased vascular pattern, 0.351 ± 0.003 for marked erythema, 0.399 ± 0.003 for absent vascular pattern, 0.354 ± 0.003 for erosions, 0.1 ± 0.003 for spontaneous bleeding, and 0.212 ± 0.003 for ulceration. Only spontaneous bleeding was evaluated as "slight"; the other parameters were evaluated as "fair".
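The qualitative labels used throughout ("slight", "fair", "moderate", ...) appear to follow the Landis-Koch benchmarks; a small helper mapping κ to those labels is sketched below, with the standard cutoffs assumed rather than stated in the paper.

```python
def landis_koch(kappa: float) -> str:
    # Standard Landis-Koch benchmarks (assumed): < 0 poor, then 0.2/0.4/0.6/0.8 cuts
    if kappa < 0:
        return "poor"
    cuts = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
            (0.80, "substantial"), (1.00, "almost perfect")]
    return next(label for upper, label in cuts if kappa <= upper)

print(landis_koch(0.215))  # "fair", matching the decreased-vascular-pattern result
```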
Comparison of IORs among Observer Groups
The IOR values of the MES parameters were compared among the four observer groups (Table 2). The κ coefficients of the four observer groups differed. In Group A, the κ coefficient was "moderate" or higher for seven of the nine parameters. In Group B and Group C, the κ coefficients were "moderate" or higher for two of the nine and four of the nine parameters, respectively. In Group D, they were "slight" or "fair" for all parameters. Table 2. Interobserver reliability of MES parameters for observer groups.
IORs of MES Parameters by Observer Group
The IORs of the MES parameters were calculated for the respective observer groups (Table 2). This investigation was conducted without Group D because all parameters were evaluated as "slight" or "fair" in that group.
For "Normal", the κ coefficient was "moderate" in Groups A and C, whereas it was "fair" in Group B. For "Inactive disease", the κ coefficient was "moderate" in Groups A, B, and C. The κ coefficient was found to have low values for "Erythema" and "Decreased vascular pattern". They were "fair" or "slight" in all Groups. For "Marked erythema", it was "moderate" only in Group A and was "fair" in Groups B and C. For "Absent vascular pattern", it was "moderate" in Groups A and C, but it was "fair" in Group B. For "Erosion", the κ coefficient moved from "substantial" to "moderate" in Groups A, B, and C. For "Spontaneous bleeding", it was "almost perfect" in Group A, but the result was as low as "slight" in Groups B and C. For "Ulceration", it was "moderate" in Group A, but the result was as low as "fair" or "slight" in Groups B and C.
Meaning of MES
Lower gastrointestinal endoscopy is an important tool in UC care for making a diagnosis, elucidating clinical conditions, evaluating treatment, and detecting and monitoring cancer. Endoscopic observations of inflammation in UC are scored using objective indicators. In actual clinical situations, the Baron index [6,23] and the Matts classification [24] are used, but the MES has been used most in recent large-scale clinical studies [9-11].
Difficulties of Endoscopic Diagnosis and IOR in Image Diagnosis
Although the evaluation of endoscopic findings using the MES is important for treatment selection and follow-up after treatment of UC, difficulty persists in the endoscopic diagnosis of UC: the interobserver agreement rate is unstable. Daperno et al. [25] analyzed the MES agreement rates reported by IBD experts and by IBD non-experts and found imperfect agreement; the respective kappas of the IBD expert group and the IBD non-expert group were 0.53 and 0.71. One report also described that the rate of perfect agreement on a judgment of MES 0 or MES 1 was 68.2%, even among three endoscopists specializing in IBD [26].
A decrease in MES by at least 1 point is often regarded as endoscopic improvement, and MES 0 or MES 1 is often regarded as signifying endoscopic remission [27,28]. However, recent studies have reported that the relapse rate and the surgery rate are lower for MES 0 than for MES 1 [29-31]. In particular, it was demonstrated that the remission maintenance rate within MES 1 differed according to histological evaluation [15,32]. These problems might result from confusing and complicated parameters for endoscopic evaluation. Therefore, it is particularly important to improve the accuracy of judgments on endoscopic findings in the remission phase. Reportedly, endoscopic activity is correlated with histological activity [17,18], although long-term studies of hospitalization rates and corticosteroid application rates have shown lower rates in histological remission than in endoscopic remission [16]. The period of remission maintenance is extended considerably in cases that have reached histological remission [33]. These findings suggest that histological remission can be a better indicator of remission maintenance than endoscopic remission. One reason for this might be the reliability of the endoscopic findings, i.e., the IOR. In particular, patients who have achieved histological remission often show endoscopic findings with low activity, which might lead to low IOR. In light of that possibility, we investigated the rate of agreement on endoscopic findings among endoscopic observers for UC patients in the histological remission phase. The results indicate the reliability of evaluations made by endoscopists based on endoscopic data and images.
Significance of Study Results
The results of this study show the IOR of all observers to be "slight" for "Spontaneous bleeding" and "fair" for the other parameters, indicating somewhat low IOR overall, largely because the IOR in Groups B, C, and D was lower than in Group A. Although Group B comprised endoscopists certified as Board Certified Fellows of the Japan Gastroenterological Endoscopy Society, pancreatobiliary work was the sub-specialty of most of them; they did not usually engage in IBD treatment, which might have affected the results. Group C members had no established sub-specialty and therefore engaged in diverse treatments; the small number of cases they had experienced might have affected their IOR. Group D in particular consisted of trainees with fewer than six years of clinical experience, resulting in lower κ coefficients because of their relative lack of experience in endoscopy and the fewer cases they had encountered.
Regarding the item of "Spontaneous bleeding", Group A comprising IBD specialists had a result of κ coefficient = 1, whereas the other three groups had a low IOR of "slight". The images presented for this study were endoscopic images showing histological remission. Therefore, the finding "Spontaneous bleeding" was not observed. However, observers other than the IBD experts tended to overestimate the findings and interpret them as showing "Spontaneous bleeding".
By contrast, the parameters "Erythema" and "Decreased vascular pattern" elicited a low IOR as "fair" or "slight", signifying a disagreement not only among IBD non-experts but among IBD experts. These parameters were similar in expression, expressed as "Erythema" and "Marked erythema", as well as "Decreased vascular pattern" and "Absent vascular pattern". Moreover, these findings are often observed simultaneously. It can be considered that this mode of expression led to a low IOR, eventually leading to results that were inappropriate as evaluation parameters. In fact, the overall IOR of MES overall was likely to be improved by changing these parameters to more objective ones for future use. Preparing several new expressions and findings, evaluating their resultant IOR, and choosing those which lead to better IOR as new evaluation parameters for newly modified MES is expected to be effective. Constructing more common and universal diagnostic parameters is desirable not only for IBD experts, but for all practitioners because all endoscopists might be involved in the endoscopic evaluation of IBD in actual clinical situations.
Limitations
There are some limitations to this study. First, the images used for evaluation were single static images per case rather than videos; therefore, mild friability and friability, which serve as real-time evaluation parameters during endoscopy, could not be included. Second, the study was a single-center study.
Conclusions
Large differences were found in the IOR of the MES parameters used by endoscopists for the endoscopic evaluation of UC in the histological remission phase. The results indicate that IOR was low for the parameters "Erythema" and "Decreased vascular pattern", even among experts in UC practice. The possibility exists that these MES parameters are inappropriate as evaluation parameters for endoscopic findings. Future analyses of the IOR of the UCEIS could support development in this field.
Informed Consent Statement:
A means to opt out was provided instead of obtaining informed consent: research information was published on our website, guaranteeing research subjects the opportunity to be notified and to decline participation.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article. | 2021-10-23T15:13:00.595Z | 2021-10-20T00:00:00.000 | {
"year": 2021,
"sha1": "9f01f03aa09f726d9643fc0edcfb9e33ca4e7d15",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/healthcare9111405",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28fc8ead9357876e8961c5dfaea067306bc27bef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
118580980 | pes2o/s2orc | v3-fos-license | Ultracold bosons in a synthetic periodic magnetic field: Mott phases and re-entrant superfluid-insulator transitions
We study Mott phases and superfluid-insulator (SI) transitions of ultracold bosonic atoms in a two-dimensional square optical lattice at commensurate filling and in the presence of a synthetic periodic vector potential characterized by a strength $p$ and a period $l=qa$, where $q$ is an integer and $a$ is the lattice spacing. We show that the Schr\"odinger equation for the non-interacting bosons in the presence of such a periodic vector potential can be reduced to a one-dimensional Harper-like equation which yields $q$ energy bands. The lowest of these bands has either a single minimum or double minima whose position within the magnetic Brillouin zone can be tuned by varying $p$ for a given $q$. Using these energies and a strong-coupling expansion technique, we compute the phase diagram of these bosons in the presence of a deep optical lattice. We chart out the $p$ and $q$ dependence of the momentum distribution of the bosons in the Mott phases near the SI transitions and demonstrate that the bosons exhibit several re-entrant field-induced SI transitions for any fixed period $q$. We also predict that the superfluid density of the resultant superfluid state near such an SI transition has a periodicity $q$ ($q/2$) in real space for odd (even) $q$ and suggest experiments to test our theory.
I. INTRODUCTION
Several experiments on ultracold trapped atomic gases have opened a new window onto the phases of quantum matter 1 . A gas of bosonic atoms in an optical or magnetic trap has been reversibly tuned between superfluid and insulating ground states by varying the strength of a periodic potential produced by standing optical waves 1,2 . This transition has been explained on the basis of the Bose-Hubbard model with on-site repulsive interactions and hopping between nearest-neighboring sites of the lattice 3-7 . In fact, experiments on the superfluid-insulator (SI) transitions of such bosonic atoms in two-dimensional (2D) optical lattices 8 are found to agree quite accurately with predictions of theoretical studies of the Bose-Hubbard model 3,7,9 . More recently, several experiments have successfully generated time- or space-dependent effective vector potentials for neutral bosons. Such synthetic vector potentials are created by generating temporally or spatially dependent optical coupling between the internal states of these bosonic atoms 10-12 . We note that this experimental technique involves production of a specific effective vector potential for the atoms and hence corresponds to a fixed gauge. In the simplest experimental setup, these vector potentials are typically chosen to represent a constant magnetic field in the asymmetric gauge. However, a few experiments have also generated vector potentials which correspond to spatially varying synthetic magnetic fields 12 . Several theoretical studies have been carried out on the properties of bosons in a deep optical lattice in the presence of a constant synthetic magnetic field 13 . In particular, the SI phase boundary has been computed both using mean-field theory 14 and via excitation-energy calculations relying on a perturbative expansion in the hopping parameter 15 . More recently, experimentally relevant issues, such as the momentum distribution of the bosons in the Mott phase, the critical theory of the SI transition, and the nature of the superfluid ground states and collective modes near criticality, have also been addressed 16,17 . However, in spite of the possibility of direct experimental realization 12 , the phase diagram of these bosons in the presence of a spatially dependent magnetic field has not been theoretically investigated.
In this work, we present a theory of the SI transition for ultracold bosons in a 2D square optical lattice with commensurate filling $n_0$ and in the presence of a periodic synthetic vector potential given by ${\bf A}^* = (0, A^*_y)$ with $A^*_y = A^*_0 \sin(2\pi x/l)$, where $l = qa$ is the period of the vector potential, $q$ is an integer, $a$ is the lattice spacing, and $A^*_0$ is the maximum value of the vector potential on any lattice site. At the outset, we introduce a dimensionless number $p = 2\pi q^* A^*_0 a/hc$ (where $q^*$ is the effective charge of the bosons 11 , $c$ is the speed of light, and $h = 2\pi\hbar$ is Planck's constant) which will be used in the rest of this work to characterize the strength of the vector potential. We first consider the problem of non-interacting bosons in a lattice in the presence of such a periodic vector potential and show that the corresponding single-particle Schr\"odinger equation can be reduced to a one-dimensional Harper-like equation 18,19 . The solution of this equation yields an energy spectrum with $q$ bands (with energies $\epsilon^q_\alpha({\bf k};p)$ for $\alpha = 0..q-1$), all of which have a periodicity of $2\pi/q$ along $k_x$. The lowest of these bands, $\epsilon^q_0({\bf k};p)$, has, depending on $p$, either a single minimum at ${\bf k} \equiv (k_x,k_y) = (0,0)$ or $(0,\pi)$, or doubly degenerate minima either at $(0,0)$ and $(0,\pi)$ or at $(0,\pm k^{\rm min}_y)$, where $k^{\rm min}_y$ can vary continuously as a function of $p$ for a given $q$. The minimum energy of the lowest band, $\epsilon_{\rm min}$, turns out to be a non-monotonic function of $p$ for fixed $q$. Using these properties of the single-particle energy bands and a strong-coupling expansion 7,16 , we analyze the Mott phase and the SI phase transition of these bosons in the presence of a deep optical lattice. We show that, depending on $p$ and $q$, the momentum distribution of these bosons in the Mott phase near the SI transition exhibits single (double) precursor peak(s) at the position of the minimum (minima) of $\epsilon^q_0({\bf k};p)$. We determine the SI phase boundary and demonstrate that the bosons exhibit a series of re-entrant field-induced SI transitions as a function of the vector potential strength $p$ for any period $q$. We also construct an effective Landau-Ginzburg action for the SI transition and show, by analyzing this action at the mean-field level, that the resultant superfluid state has a $q$ ($q/2$) periodic structure in real space for any odd (even) $q$. We show that the reason for such period-halving of the superfluid density for even $q$ can be traced back to the properties of the Harper-like equation obeyed by the non-interacting bosons. We discuss several experiments that can probe our theory.
The rest of the paper is organized as follows. In Sec. II, we introduce the relevant tight-binding Hamiltonian of the bosons in an optical lattice in the presence of the periodic vector potential and obtain the energy spectrum when the interaction between these bosons is set to zero. This is followed by Sec. III, where we introduce the strong coupling expansion for the bosons and use it to compute the boson momentum distribution in the Mott phase and the SI phase boundary. In Sec. IV, we show that the superfluid state into which the transition takes place exhibits a q-periodic superfluid density. We conclude with a discussion of possible experiments to test our theory in Sec. V.
II. NON-INTERACTING BOSON SPECTRUM
The Hamiltonian of a system of bosons in the presence of an optical lattice and a synthetic periodic vector field is given by 1,3,8,14,15

$H = -\sum_{\langle {\bf r}{\bf r}' \rangle} t'_{{\bf r}{\bf r}'}\, b^\dagger_{\bf r} b_{{\bf r}'} + \sum_{\bf r} \left[ \tfrac{U}{2}\, n_{\bf r}(n_{\bf r}-1) - \mu\, n_{\bf r} \right], \qquad (1)$

where $\mu$ is the chemical potential, $U$ is the on-site Hubbard interaction, $b_{\bf r}$ ($n_{\bf r} = b^\dagger_{\bf r} b_{\bf r}$) is the boson annihilation (density) operator, and the hopping matrix $t'_{{\bf r}{\bf r}'}$ is given by

$t'_{{\bf r}{\bf r}'} = t'\, e^{\,i p \sin(2\pi m/q)} \ \ {\rm for}\ {\bf r}' = {\bf r} + a\hat{y}, \qquad t'_{{\bf r}{\bf r}'} = t' \ \ {\rm for}\ {\bf r}' = {\bf r} + a\hat{x}, \qquad (2)$

if ${\bf r} \equiv (x, y) = (m, n)a$ and ${\bf r}'$ are nearest-neighboring sites, and is zero otherwise; $t'$ is the hopping amplitude of the bosons between nearest-neighboring sites.
In the rest of this work, we set the lattice spacing $a$, $\hbar$, and $c$ to unity. Our aim in this work is to analyze the phases of $H$.
To this end, we first analyze the boson spectrum in the non-interacting limit $\mu = U = 0$. In this case, the non-interacting boson Hamiltonian becomes

$H_0 = -t' \sum_{m,n} \left[ b^\dagger_{m+1,n} b_{m,n} + e^{\,i p \sin(2\pi m/q)}\, b^\dagger_{m,n+1} b_{m,n} + {\rm h.c.} \right], \qquad (3)$

where we have used $p = 2\pi q^* A^*_0 a/hc$. To obtain the spectrum of $H_0$, we use the identity

$e^{i z \sin\theta} = \sum_r J_r(z)\, e^{i r \theta}, \qquad (4)$

where $J_r(z)$ denotes the Bessel function of integer order $r$, and write $H_0$ in momentum-space representation as

$H_0 = -t' \sum_{\bf k} \Big[ 2\cos(k_x)\, b^\dagger(k_x,k_y) b(k_x,k_y) + \Big( \sum_{r=0}^{q-1} S_r(p)\, e^{-i k_y}\, b^\dagger(k_x + 2\pi r/q, k_y)\, b(k_x,k_y) + {\rm h.c.} \Big) \Big], \qquad (5)$

where $b(k_x,k_y) = \sum_{m,n} \exp(i(k_x m + k_y n))\, b_{mn}$. In Eq. 5, $S_r(p)$ is given by

$S_r(p) = \sum_n J_{nq+r}(p), \qquad (6)$

where $n$ takes integer values, $S_r(p) = S_{r+q}(p)$, and we have used the $2\pi$ periodicity of $b(k_x,k_y)$: $b(k_x + 2\pi, k_y) = b(k_x,k_y)$. Note that for even $q$, $S_r = 0$ for all odd integer $r$, which follows from the well-known property of the Bessel functions $J_n(p) = (-1)^n J_{-n}(p)$ for any integer $n$. The Schrödinger equation obtained from Eq. 1 can be written by expressing the eigenfunctions as 19

$|\psi({\bf k})\rangle = \sum_{\alpha=0}^{q-1} \psi_\alpha\, b^\dagger(k_x + 2\pi\alpha/q, k_y)\,|0\rangle, \qquad (7)$

where $\psi_\alpha = \psi_{\alpha+q}$, and obtaining the equations for $\psi_\alpha$ from $H_0|\psi\rangle = E|\psi\rangle$. This yields a one-dimensional Harper-like equation for $\psi_\alpha$ (Eq. 8), which can easily be cast in the form of a $q \times q$ Hermitian matrix equation $\Lambda_q({\bf k};p)\psi = \epsilon\psi$. The diagonal elements of $\Lambda_q({\bf k};p)$ are given by $\Lambda^q_{nn}({\bf k};p) = -2(\cos(k_x + 2\pi(n-1)/q) + S_0(p)\cos(k_y))$ and the off-diagonal elements by $\Lambda^q_{n,n+r}({\bf k};p) = \Lambda^{q*}_{n+r,n}({\bf k};p) = -S_r(p)\, e^{-i k_y}$. The difference between $\Lambda_q({\bf k};p)$ and its counterpart for a constant magnetic field 19 is two-fold. First, $\Lambda_q({\bf k};p)$ no longer remains a tri-diagonal matrix. However, the $2\pi/q$ periodicity of its eigenvalues, which is a consequence of the periodicity of the magnetic field, is still retained. This property is most easily seen by noting that a shift $k_x \to k_x + 2\pi/q$ in Eq. 7 amounts to a shift $\psi_\alpha \to \psi_{\alpha+1}$. Second, for even $q$, where $\Lambda^q_{n,n+r}({\bf k};p) = \Lambda^{q*}_{n+r,n}({\bf k};p) = 0$ for all odd $r$, $\Lambda_q({\bf k};p)$ separates into two block-diagonal matrices of dimension $q/2$, leading to $q/2$ non-zero elements of the eigenvector $\psi$ for any eigenvalue $\epsilon$. Note that for $q = 2$, which corresponds to $A^*_y = 0$ on all sites, we have $S_0(p) = 1$ and $S_1(p) = 0$, so that Eq. 8 reduces to the standard tight-binding Hamiltonian in zero magnetic field.
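The matrix $\Lambda_q({\bf k};p)$ is straightforward to build and diagonalize numerically. A minimal NumPy sketch follows; it implements the matrix elements exactly as stated above (energies in units of $t'$), and it reads $S_r(p)$ as the $q$-periodic Bessel sum of Eq. 6. That reading, and the truncation nmax of the sum, are our assumptions.

```python
import numpy as np
from scipy.special import jv

def S(r, p, q, nmax=50):
    # q-periodic Bessel sum, our reading of Eq. 6: S_r(p) = sum_n J_{nq+r}(p)
    n = np.arange(-nmax, nmax + 1)
    return jv(n * q + r, p).sum()

def lam(kx, ky, p, q):
    # Lambda_q(k; p): diagonal and off-diagonal elements as given in the text
    m = np.zeros((q, q), dtype=complex)
    for n in range(q):
        m[n, n] = -2.0 * (np.cos(kx + 2 * np.pi * n / q) + S(0, p, q) * np.cos(ky))
        for r in range(1, q - n):
            m[n, n + r] = -S(r, p, q) * np.exp(-1j * ky)
            m[n + r, n] = np.conj(m[n, n + r])
    return m

bands = np.linalg.eigvalsh(lam(0.0, 0.0, p=1.0, q=3))  # the q band energies at k = (0, 0)
```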
For $q \geq 3$, straightforward numerical diagonalization of $\Lambda$ leads to $q$ energy bands with dispersions $\epsilon^q_\alpha({\bf k};p)$, where $\alpha = 0..q-1$, which have a period of $2\pi/q$ along $k_x$. This periodicity is a manifestation of the $q$-fold folding of the Brillouin zone due to the presence of the periodic vector potential. The lowest energy band, $\epsilon^q_0({\bf k};p)$, shown in Fig. 1 for $p = 1$ and $q = 3$, displays a single minimum at $(k_x,k_y) = (0,0)$ within the magnetic Brillouin zone ($-\pi/q \leq k_x \leq \pi/q$ and $-\pi \leq k_y \leq \pi$). This minima structure changes with increasing $p$, as shown in Fig. 2 for $q = 3, 4, 5$ and $6$. For $q = 3, 5$ and $6$, we find that beyond a critical strength $p_1(q)$ of the vector potential, $\epsilon^q_0({\bf k};p)$ has two minima at $(0, \pm k^{\rm min}_y(p))$. As $p$ is increased, $k^{\rm min}_y$ increases monotonically from $0$ until it reaches $\pi$ at $p = p_2(q)$, where we recover the single-minimum structure of $\epsilon^q_0({\bf k};p)$ with the minimum at $(0,\pi)$. As $p$ is increased further, up to a value $p_3(q)$, $k^{\rm min}_y$ remains at $\pi$. Beyond $p_3(q)$, for $q = 3$ and $6$, we find that $\epsilon^q_0({\bf k};p)$ again has two minima at $(0,\pm k^{\rm min}_y(p))$, and $k^{\rm min}_y(p)$ monotonically decreases from $\pi$ to $0$ as $p$ is increased. For $q = 5$, beyond $p_3(q)$, we find a discontinuous change in $k^{\rm min}_y$ from $\pi$ to $0$, and $\epsilon^q_0({\bf k};p)$ retains its single-minimum structure. For $q = 4$, $\epsilon^q_0({\bf k};p)$ always has a single minimum at $(k_x,k_y) = (0,0)$, except at $p = n\pi$, where there are two degenerate minima at $(0,0)$ and $(0,\pi)$. We also note from Fig. 2 that the minimum value of the energy, $\epsilon_{\rm min}$, is a non-monotonic function of $p$ for all $q \leq 6$. We have checked that these features remain qualitatively similar for $q > 6$, and we shall not discuss those cases further here. In the next section, we shall utilize these properties of $\epsilon^q_0({\bf k};p)$ to understand the phase diagram of these bosons in the presence of a deep optical lattice.
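The minima structure just described can be traced numerically by scanning the magnetic Brillouin zone for the lowest band. The sketch below reuses lam() from the previous block; the grid resolutions and the sampled values of $p$ are arbitrary choices for illustration.

```python
kxs = np.linspace(-np.pi / 3, np.pi / 3, 31)   # magnetic BZ along kx for q = 3
kys = np.linspace(-np.pi, np.pi, 121)
for p in (0.5, 1.0, 2.0, 3.0):
    e0 = np.array([[np.linalg.eigvalsh(lam(kx, ky, p, 3))[0] for ky in kys]
                   for kx in kxs])
    i, j = np.unravel_index(e0.argmin(), e0.shape)
    print(f"p = {p}: k_min = ({kxs[i]:.2f}, {kys[j]:.2f}), eps_min = {e0.min():.3f}")
```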
Before ending this section, we note that there is an alternative method for finding the energy eigenvalues of the Hamiltonian in Eq. 3, by constructing the Schrödinger equation in real space and using the $q$ periodicity of the eigenfunctions along $x$. This has been carried out in Ref. 20 and yields results identical to the method elaborated here. We also point out that, although we have, keeping in mind the simplicity of experimental realization, considered a relatively simple sinusoidal form of the vector potential, our method can easily be generalized to treat more complicated periodic vector potentials. Also, we note that since the vector potential $A^*_y$ is not a gauge field, there is no gauge freedom in the choice of the eigenfunctions (Eq. 7).
III. STRONG COUPLING EXPANSION
In this section, we analyze the phases of $H$ in the limit $t'/U \ll 1$, where the bosons are in a Mott insulating state. We note that the effect of the magnetic field manifests itself in the first term of Eq. 1 and thus vanishes in the local limit ($t' = 0$). In this limit the boson Green function can be computed exactly 7,9,16 and is given, at $T = 0$, by

$G_0(i\omega_n) = \frac{n_0 + 1}{i\omega_n - E_p} - \frac{n_0}{i\omega_n + E_h}. \qquad (9)$

Here $\omega_n$ denote bosonic Matsubara frequencies, and $E_h = \mu - U(n_0 - 1)$ ($E_p = -\mu + U n_0$) is the energy cost of adding a hole (particle) to the Mott state. To address the effects of the hopping term, we resort to the coherent-state path integral description of these bosons. The partition function of the system can then be written as

$Z = \int D\bar\psi^*\, D\bar\psi\; e^{-S}, \qquad S = \int_0^\beta d\tau \Big[ \sum_{\bf r} \bar\psi^*_{\bf r} \partial_\tau \bar\psi_{\bf r} - \sum_{{\bf r}{\bf r}'} t'_{{\bf r}{\bf r}'}\, \bar\psi^*_{\bf r} \bar\psi_{{\bf r}'} + \sum_{\bf r} \Big( \tfrac{U}{2} n_{\bf r}(n_{\bf r}-1) - \mu n_{\bf r} \Big) \Big]. \qquad (10)$

Here $\tau$ is the imaginary time, $\bar\psi$ denote boson fields in the path-integral representation, $n_{\bf r}(\tau) = \bar\psi^*_{\bf r}(\tau)\bar\psi_{\bf r}(\tau)$, $\beta = 1/k_B T$ is the inverse temperature ($T$), and $k_B$ is the Boltzmann constant. Following Ref. 7, we then decouple the hopping term by introducing a Hubbard-Stratonovich field $\phi_{\bf r}(\tau)$ (Eqs. 11 and 12), and subsequently introduce a second Hubbard-Stratonovich field $\psi_{\bf r}(\tau)$ to decouple the resulting term $S'_2$ (Eq. 13). Note that integrating out $\phi_{\bf r}(\tau)$ in Eq. 12 would lead to the constraint $\psi_{\bf r} = \bar\psi_{\bf r}$ on $Z$; it can also be shown that the $\psi$ and $\bar\psi$ fields have identical correlation functions 7 . Next, we follow Refs. 7,16 and integrate out the $\bar\psi$ and $\phi$ fields to obtain an effective action in terms of $\psi$. The details of this procedure are elaborated in Ref. 7. The effective action so obtained is given by 7,16

$S_{\rm eff} = S_0 + S_1, \qquad S_0 = \sum_k \psi^\dagger_q(i\omega_n, {\bf k}) \left[ -G^{-1}_0(i\omega_n)\, I + \Lambda_q({\bf k}) \right] \psi_q(i\omega_n, {\bf k}), \qquad S_1 = \frac{g}{2} \int_0^\beta d\tau \sum_{\bf r} |\psi_{\bf r}(\tau)|^4,$

where $\psi_q = (\psi_0(k_x,k_y)..\psi_{q-1}(k_x,k_y))^T$, with $\psi_\alpha(k_x,k_y) = \psi(k_x + 2\pi\alpha/q, k_y)$ denoting the $q$ components of the auxiliary field $\psi$ in momentum space, $\sum_k \equiv (1/\beta)\sum_{\omega_n} \int d^2k/(2\pi)^2$, $I$ denotes the unit matrix, and $g > 0$ is the static limit of the exact two-particle vertex function of the bosons in the local limit, which has been computed in Ref. 7. Note that $S_0$ reproduces the exact boson propagator both in the local ($t' = 0$) and the non-interacting ($U = 0$) limits and therefore provides a suitable starting point for the strong-coupling approximation. In the next subsection, we shall compute the momentum distribution function of the bosons from $S_0$.
A. Momentum distribution of the bosons in the Mott phase
The momentum distribution of the bosons in the Mott phase can be computed from $S_0$ 7,16 as

$n({\bf k}) = -\lim_{T\to 0} \frac{1}{\beta} \sum_{\omega_n} {\rm Tr}\, G(i\omega_n, {\bf k}). \qquad (14)$

To compute $n({\bf k})$, we note that $G^{-1}_0$ is independent of momenta; hence finding $G(i\omega_n,{\bf k})$ amounts to inverting $\Lambda_q({\bf k};p)$. To this end we introduce a unitary transformation, with transformation matrix $U_q({\bf k})$ that diagonalizes $\Lambda_q({\bf k};p)$, to obtain a diagonal Green function $G_d(i\omega_n,{\bf k}) = U^{-1}_q({\bf k})\, G(i\omega_n,{\bf k})\, U_q({\bf k})$ whose diagonal elements are given by

$G^{\alpha\alpha}_d(i\omega_n,{\bf k}) = \left[ G^{-1}_0(i\omega_n) - \epsilon^q_\alpha({\bf k};p) \right]^{-1} = \frac{i\omega_n + \mu + U}{\big(i\omega_n - E^{q+}_\alpha({\bf k};p)\big)\big(i\omega_n - E^{q-}_\alpha({\bf k};p)\big)}, \qquad (16)$

where we have used the expression for $G_0$ from Eq. 9, and $E^{q\pm}_\alpha({\bf k};p)$ denote the locations of the poles of the interacting boson Green function, given by

$E^{q\pm}_\alpha({\bf k};p) = -\mu + \frac{U}{2}(2n_0 - 1) + \frac{\epsilon^q_\alpha({\bf k};p)}{2} \pm \frac{1}{2}\sqrt{\epsilon^{q\,2}_\alpha({\bf k};p) + 2U(2n_0+1)\,\epsilon^q_\alpha({\bf k};p) + U^2}.$

Note that $E^{q\pm}_\alpha({\bf k};p)$ can be computed directly from knowledge of the non-interacting boson spectrum $\epsilon^q_\alpha({\bf k};p)$ derived in Sec. II. In particular, the minima of $E^{q\pm}_\alpha({\bf k};p)$ occur at the same positions in the magnetic Brillouin zone as those of $\epsilon^q_\alpha({\bf k};p)$. Also, as noted in Ref. 7, the Mott gap $E^{q+}_\alpha({\bf k};p) - E^{q-}_\alpha({\bf k};p)$ vanishes at the position of the minima of $\epsilon^q_\alpha({\bf k};p)$ in the magnetic Brillouin zone, provided we are at the tip of the Mott lobe, where the SI transition takes place at constant density.

The momentum distribution can now be computed as $n({\bf k}) = -\lim_{T\to 0}(1/\beta)\sum_{\omega_n} {\rm Tr}\, G_d(i\omega_n,{\bf k})$ and is given by 16

$n({\bf k}) = \sum_{\alpha=0}^{q-1} \frac{E^{q-}_\alpha({\bf k};p) + \mu + U}{E^{q+}_\alpha({\bf k};p) - E^{q-}_\alpha({\bf k};p)}. \qquad (17)$

Eq. 17 shows that peaks of $n({\bf k})$ occur where the Mott gap $E^{q+}_\alpha({\bf k};p) - E^{q-}_\alpha({\bf k};p)$ becomes small near the minima of $\epsilon^q_\alpha({\bf k};p)$ as the SI transition is approached through the tip of the Mott lobe. The minima structure of the non-interacting bosons is therefore expected to be reflected in the peaks of the momentum distribution of the bosons in the Mott phase.
The momentum distribution can now be computed as n(k) = − lim T →0 (1/β) ωn TrG d (iω n , k) and is given by 16 Eq. 17 shows that the peaks of n(k) occur when the Mott gap E α+ q (k) − E α− q (k) becomes small near the minima of ǫ q α (k; p) as the SI transition is approached through the tip of the Mott lobe. The minima structure of the noninteracting bosons is therefore expected to be reflected in the peaks of the momentum distribution of the bosons in the Mott phase. In Fig. 4, we show a representative plot of n(k) as a function of k for q = 3 and p = 1. We find that the central peak of the momentum distribution lies at (0, 0) in accordance with the position of the minima of ǫ the minima of ǫ q α (k; p) always occur at k x = 0, we plot the momentum distribution n(k x = 0, k y ) as a function of k y (for fixed t ′ (p)/t ′ c (p) = 0.95 and q = 3, 5) for several representative values of p in Fig. 3. Fig. 3 clearly shows that as p increases, the peak structure of the momentum distribution changes from a single peak at k y = 0 to two split peaks at k y = ±k min y (p) and finally to a single peak at k y = π. Finally in Figs. 5, 6 and 7, we plot n(k x = 0, k y ) for q = 3, 4, and 5, as a function of k y and p for a fixed t ′ = 0.04U . Note that for these plots, the proximity of the system to the tip of the Mott lobe changes with p since t ′ c is a function of p. These plots again reveal the change in the peak structure of n(k x = 0, k y ) as a function of p.
B. Re-entrant SI transitions
The critical hopping $t'_c$ for the MI-SF transition as a function of $\mu$ can be determined from the condition 16

$r_q(p) \equiv -G^{-1}_0(\omega = 0) + \epsilon^q_{\rm min}(p) = 0. \qquad (18)$

The SI phase boundary so obtained is shown in Fig. 8 for $q = 3$ and $p = 1$ and displays the usual Mott lobes. The difference between the present case and the SI transitions studied earlier 3,7,9,16 arises from the non-monotonic $p$ dependence of $\epsilon^q_{\rm min}(p)$. This point is demonstrated in Fig. 9 for $q = 3, 4, 5$, and $6$ by plotting $t'_c(p)$ as a function of $p$ for $n_0 = 1$ and $\mu = \mu_{\rm tip}$. We find that $t'_c(p)$ is a non-monotonic function of $p$ and $t'_c(p) > t'_c(0)$ for all $p$. Consequently, varying $p$ at a fixed value of $t' > t'_c(0)$ leads to a series of field-induced re-entrant SI transitions for any $q$. This is schematically marked by the red dotted line in Fig. 9. We note that such re-entrant transitions as a function of the magnetic field strength are not present for SI transitions in a constant magnetic field 16 .
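Under the condition of Eq. 18, and with the bands of $\Lambda$ expressed in units of $t'$, the critical hopping scales inversely with $|\epsilon^q_{\rm min}(p)|$, so the ratio $t'_c(p)/t'_c(0)$ can be sketched from the band minimum alone. This scaling, and the grid resolution below, are our assumptions; the block reuses lam() from the earlier sketch.

```python
def eps_min(p, q, nk=41):
    # minimum of the lowest band over the magnetic Brillouin zone (units of t')
    grid = [(kx, ky) for kx in np.linspace(-np.pi / q, np.pi / q, nk)
                     for ky in np.linspace(-np.pi, np.pi, nk)]
    return min(np.linalg.eigvalsh(lam(kx, ky, p, q))[0] for kx, ky in grid)

for p in np.linspace(0.0, 4.0, 9):
    print(f"p = {p:.1f}: t'_c(p)/t'_c(0) = {eps_min(0.0, 3) / eps_min(p, 3):.3f}")
```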
IV. THE SUPERFLUID PHASE
At $t' = t'_c(p)$, it becomes energetically favorable to create particles/holes at the minima of the energy dispersion of the bosons, leading to destabilization of the Mott phase. The Landau-Ginzburg theory of the resultant superfluid phase can be expressed in terms of long-wavelength boson fields around these minima. In the present case, there are either one or two degenerate minima of the boson energy spectrum in the magnetic Brillouin zone, leading to a Landau-Ginzburg theory of one or two low-energy boson fields 3,7,9,16 . We first consider the case of a single minimum, either at $(0,0)$ or $(0,\pi)$, which occurs for specific ranges of $p$ for all $q$, as discussed in Sec. II. In either case, the boson field can be written as

$\psi({\bf r},t) = \chi_0({\bf r};p)\,\varphi({\bf r},t), \qquad \chi_0({\bf r};p) = \sum_{\alpha=0}^{q-1} \psi_\alpha(p)\, e^{2\pi i \alpha x/q}\, e^{i k^{\rm min}_y y}, \qquad (19)$

where $\psi_\alpha(p)$ denotes the components of the eigenvector of $\Lambda_q({\bf k};p)$ at $k_x = 0$, $k_y = k^{\rm min}_y$ (which can be either $0$ or $\pi$ for a fixed $p$), and $\chi_0({\bf r};p)$ denotes the corresponding wavefunction in real space. Thus the superfluid density can be written as

$\rho_s({\bf r}) = |\chi_0({\bf r};p)|^2\, |\varphi_0|^2, \qquad (20)$

where $\varphi_0 = \langle \varphi({\bf r},t)\rangle \neq 0$ for $t' > t'_c(p)$. Note that $\rho_s$ is independent of $y$ irrespective of the value of $k^{\rm min}_y$, but displays spatial variation along $x$. Further, as discussed in Sec. II, for even $q$ only $q/2$ of the components $\psi_\alpha$ (corresponding to either even or odd integers $\alpha$) are non-zero; consequently, we expect the period of $\rho_s(x)$ to be halved. A plot of the renormalized superfluid density $\rho_s(x)/\rho_s(0)$, shown in Fig. 10 for $p = 0.5$ and $q = 3, 4, 5$, and $6$, confirms this expectation. The presence of the periodic vector potential leads to a $q$-periodic pattern with $q - 2$ small peaks and one large peak in the superfluid density along $x$ for all odd $q$, as shown in the left panels of Fig. 10. In contrast, the superfluid density for even $q$ displays a $q/2$-periodic pattern. Note that this period-halving leads to identical superfluid-density patterns for vector potentials with periods $q$ and $2q$ for all odd $q$. This feature is clearly demonstrated in the top left ($q = 3$) and the bottom right ($q = 6$) panels of Fig. 10. Next, we derive the effective low-energy Landau-Ginzburg theory. To this end, we substitute Eq. 19 into Eq. 13 and obtain the effective low-energy Landau-Ginzburg action in terms of the $\varphi$ fields. The details of this procedure are charted out in Ref. 16. The resultant action is given by

$S_{\rm LG} = \int d^2r\, d\tau \left[ K_1\, \varphi^* \partial_\tau \varphi + K_0\, |\partial_\tau \varphi|^2 + v_q(p)^2\, |\nabla \varphi|^2 + r_q(p)\, |\varphi|^2 + \frac{g'}{2}\, |\varphi|^4 \right], \qquad (21)$

where $K_0 = \tfrac{1}{2}\partial^2 G^{-1}_0/\partial\omega^2|_{\omega=0} = n_0(n_0+1)U^2/(\mu+U)^3$, $K_1 = \partial G^{-1}_0/\partial\omega|_{\omega=0} = 1 - n_0(n_0+1)U^2/(\mu+U)^2$, $v_q(p)^2 = \nabla^2_{\bf k}\, \epsilon_{\rm min}({\bf k};p)/2$, $r_q(p)$ is given by Eq. 18, and $g' = g \sum_{x,y=0}^{q-1} |\chi_0({\bf r};p)|^4/q^2$. At the tip of the Mott lobe, where $\mu = \mu_{\rm tip} = U(\sqrt{n_0(n_0+1)} - 1)$, $K_1 = 0$; thus we have a critical theory with dynamical critical exponent $z = 1$. Away from the tip, $K_1 \neq 0$, rendering $z = 2$. The critical theory thus turns out to have the same exponents as in the case without a magnetic field 4 .

Fig. 10. $\rho_s(x)/\rho_s(0)$ as a function of $x$ for $q = 3, 5$ (left panels) and $q = 4, 6$ (right panels). The superfluid density displays a $q$-periodic pattern for odd $q$ and a $q/2$-periodic pattern for even $q$; $p$ is set to 0.5 for all plots.
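The periodicity of $\rho_s(x)$ can be checked directly from the eigenvector at the band minimum. The sketch below reuses lam() and assumes the minimum sits at ${\bf k} = (0,0)$ (true for the $p$, $q$ chosen here per the discussion of Fig. 2); the comments state the pattern the text predicts, which the sketch is meant to probe.

```python
def rho_s(p, q, periods=3):
    evals, evecs = np.linalg.eigh(lam(0.0, 0.0, p, q))   # minimum assumed at k = (0, 0)
    psi = evecs[:, 0]                                    # lowest-band eigenvector
    x = np.arange(periods * q)
    chi = sum(psi[a] * np.exp(2j * np.pi * a * x / q) for a in range(q))
    return np.abs(chi) ** 2                              # rho_s(x) up to |phi_0|^2

print(np.round(rho_s(0.5, 3) / rho_s(0.5, 3)[0], 3))     # per the text: q-periodic (odd q)
print(np.round(rho_s(0.5, 4) / rho_s(0.5, 4)[0], 3))     # per the text: q/2-periodic (even q)
```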
Finally, we briefly comment on the case where there are two degenerate minima, either at $(0,\pm k^{\rm min}_y)$ or at $(0,0)$ and $(0,\pi)$. In this case, $\psi({\bf r},t) = \chi^+_0({\bf r};p)\varphi_+({\bf r},t) + \chi^-_0({\bf r};p)\varphi_-({\bf r},t)$, where $\chi^\pm_0({\bf r})$ denote the eigenfunctions of $\Lambda({\bf k};p)$ in real space at $(0,\pm k^{\rm min}_y)$ and $\varphi_\pm({\bf r},t)$ denote the low-energy fluctuating fields about the minima. Substituting this expression for $\psi$ into Eq. 13 and following the coarse-graining procedure detailed in Ref. 16, we find that for all $q$ and $p$ the superfluid phase corresponds to the condensation of only one of the low-energy fields: $\langle\varphi_+\rangle \neq 0,\ \langle\varphi_-\rangle = 0$, or $\langle\varphi_-\rangle \neq 0,\ \langle\varphi_+\rangle = 0$. Thus the effective Landau-Ginzburg action in these cases is qualitatively similar to Eq. 21. The superfluid density, plotted in Fig. 11 for $q = 3, 5$ and $p = 2.5$, shows the same $q$-periodic pattern as observed in Fig. 10 for odd $q$.
V. DISCUSSION
There are several possible experimental verifications of our theory. First, we suggest measurement of n(k) for the bosons in the Mott phase near the transition, as done earlier in Ref. 8 for 2D optical lattices without the synthetic magnetic field. Our prediction is that the peak structure of the momentum distribution along $k_x = 0$ at a fixed $t'/U$ near $t'_c$ would be similar to those shown in Figs. 5-7. In particular, the shift in the peak position of $n(0, k_y)$ with p and the change from a single- to a double-peak structure as a function of p should be observable in such experiments. Second, the re-entrant SI transition can also be verified by measuring n(k) as a function of p at fixed $t' > t'_c(p = 0)$, as shown in Fig. 9. Finally, the spatial variation of the superfluid density can also be observed by measuring n(k) in the superfluid phase.
In conclusion, we have analyzed the MI-SF transition of ultracold bosons in a 2D optical lattice in the presence of a synthetic periodic magnetic field. We have shown that the precursor peaks of the momentum distribution in the Mott phases can be tuned by the strength p of the synthetic field. We have also demonstrated that the bosons, in the presence of such a periodic synthetic magnetic field, show a series of field-induced re-entrant SI transitions, and that the superfluid density in the SF phase near criticality shows q (q/2) periodic spatial pattern for odd (even) q. We have suggested several experiments which can test our theory. | 2010-05-25T03:56:07.000Z | 2010-05-25T00:00:00.000 | {
"year": 2010,
"sha1": "4154083f9e23b4cc112309d33b53888660ad8023",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1005.4476",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4154083f9e23b4cc112309d33b53888660ad8023",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
118506673 | pes2o/s2orc | v3-fos-license | Physically motivated exploration of the extrinsic parameter space in ground-based gravitational-wave astronomy
Efficient parameter estimation is critical for gravitational-wave astronomy. In the case of compact binary coalescence, the high-dimensional parameter space demands efficient sampling techniques, such as Markov chain Monte Carlo (MCMC). A number of degeneracies effectively reduce the dimensionality of the parameter space and, when known, can render sampling algorithms more efficient with problem-specific improvements. We present in this paper an analytical description of a degeneracy involving the extrinsic parameters of a compact binary coalescence gravitational-wave signal, when data from a three-detector network (such as Advanced LIGO/Virgo) are available. We use this new formula to construct a jump proposal, a framework for a generic sampler to take advantage of the degeneracy. We show the gain in efficiency for an MCMC sampler in the analysis of the gravitational-wave signal from a compact binary coalescence.
Introduction
Among the sources of gravitational waves (GWs), inspiralling binary systems of compact objects, neutron stars (NSs) and/or black holes (BHs) in the mass range $\sim 1\,M_\odot$-$100\,M_\odot$, stand out as likely to be detected and relatively easy to model. For the network of ground-based laser interferometers (Cutler & Thorne 2002), LIGO (Laser Interferometer Gravitational-wave Observatory) (Abbott & Abbott et al. 2009) and Virgo (Acernese et al. 2008), currently undergoing upgrades, the detection-rate estimates for compact object binaries, although uncertain, are expected to be about 70 yr$^{-1}$ (Abadie et al. 2010).
The detection of a gravitational-wave event is challenging and will be a rewarding achievement by itself. After such a detection, measurement of source properties holds major promise for improving our astrophysical understanding of these sources and requires efficient methods for parameter estimation. This is a complicated problem because of the large number of parameters (15 for spinning compact objects in a quasicircular orbit) and the quasi-degeneracies between them (Raymond et al. 2009), the significant amount of structure in the parameter space, and the particularities of the detector noise.
We analyse the signal produced during the inspiral phase of two compact objects of masses $M_{1,2}$ in quasi-circular orbit. A circular binary inspiral with both compact objects spinning is described by a 15-dimensional parameter vector $\vec\lambda$. A possible choice of independent parameters with respect to a fixed geocentric coordinate system is: $\vec\lambda = \{m_1, m_2, d, t_c, \phi, \alpha, \delta, \iota, \psi, a_{\rm spin1}, \theta_{\rm spin1}, \phi_{\rm spin1}, a_{\rm spin2}, \theta_{\rm spin2}, \phi_{\rm spin2}\}$, where $m_1$ and $m_2$ are the masses of the heaviest and lightest members of the binary, respectively; $d$ is the luminosity distance to the source; $\phi$ is an integration constant that specifies the gravitational-wave phase at a reference frequency; the time of coalescence $t_c$ is defined with respect to the centre of the Earth; $\alpha$ (right ascension) and $\delta$ (declination) identify the source position in the sky; $\iota$ defines the inclination of the binary with respect to the line of sight; and $\psi$ is the polarisation angle of the waveform. The spins are specified by $0 \le a_{\rm spin1,2} \equiv S_{1,2}/M_{1,2}^2 \le 1$, the dimensionless spin magnitudes, and the angles $\theta_{\rm spin1,2}$, $\phi_{\rm spin1,2}$ for their orientations with respect to the line of sight.
It is convenient to define two families of parameters. The intrinsic parameters, $\vec\lambda_{\rm intr} = \{m_1, m_2, a_{\rm spin1}, \theta_{\rm spin1}, \phi_{\rm spin1}, a_{\rm spin2}, \theta_{\rm spin2}, \phi_{\rm spin2}\}$, are required for the computation of the gravitational wave in any reference frame. The extrinsic parameters, $\vec\lambda_{\rm extr} = \{d, t_c, \phi, \alpha, \delta, \iota, \psi\}$, control the projection of the gravitational wave onto the geocentric reference frame, in which we can compute the response of each detector with Eq. 14. Given a network comprising $n_{\rm det}$ detectors, we assume that the data collected at the $i$-th instrument ($i = 1, \ldots, n_{\rm det}$) are $x_i(t) = h_i(t) + n_i(t)$, where $h_i(t)$ is the gravitational-wave signal (see Eq. 12) and $n_i(t)$ is the detector noise (here assumed to be stationary and normally distributed).
The equations governing the response of an observatory to gravitational waves have long been known; see for instance (Misner et al. 1973) and references therein. To illustrate the degeneracy present in this response we use Markov chain Monte Carlo (MCMC) methods to determine the multi-dimensional posterior probability-density function (PDF) of the unknown parameter vector $\vec\lambda$ in equation 1, given the data sets $x_i$ collected by a network of $n_{\rm det}$ detectors, a model $M$ of the waveform, and the prior $p(\vec\lambda)$ on the parameters. One can compute the probability density via Bayes' theorem,
$$p(\vec\lambda | x_j, M) = \frac{p(\vec\lambda | M)\, p(x_j | \vec\lambda, M)}{p(x_j | M)},$$
where
$$p(x_j | \vec\lambda, M) \propto \exp\left[ -\tfrac{1}{2} \langle x_j - h_j(\vec\lambda) \,|\, x_j - h_j(\vec\lambda) \rangle \right]$$
is the likelihood function, which measures the probability (under the noise distribution) of getting data $x_j$ given a signal $h_j$. The term $p(x_j | M)$ is the marginal likelihood or evidence. In the previous equation,
$$\langle x | y \rangle = 4\, {\rm Re} \int_0^\infty \frac{\tilde x(f)\, \tilde y^*(f)}{S_j(f)}\, df$$
is the overlap of signals $x$ and $y$, $\tilde x(f)$ is the Fourier transform of $x(t)$, and $S_j(f)$ is the noise power-spectral density in detector $j$. The likelihood computed for the injection parameters, $L_{\rm inj} = p(x_j | \vec\lambda_{\rm inj}, M)$, is then a random variable that depends on the particular noise realisation $n_j$ in the data $x_j = h(\vec\lambda_{\rm inj}) + n_j$. The injection parameters are the parameters of the waveform template added to the noise. To combine observations from a network of detectors with uncorrelated noise realisations we have the likelihood
$$p(\vec x | \vec\lambda, M) = \prod_{j=1}^{n_{\rm det}} p(x_j | \vec\lambda, M),$$
for $\vec x \equiv \{x_j : j = 1, \ldots, n_{\rm det}\}$. The numerical computation of the PDF involves the evaluation of a large, multimodal, multi-dimensional integral. MCMC methods (e.g. Gilks et al. 1996, Gelman et al. 1997, and references therein) have proved to be especially effective in tackling this numerical problem.
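As a concrete illustration, a minimal Python sketch of these quantities is given below; it assumes frequency-domain arrays on a uniform grid, and the array names, the grid spacing and the dropped normalisation constant are all illustrative assumptions rather than the paper's code.

```python
import numpy as np

# Noise-weighted overlap <x|y> = 4 Re sum_f x~(f) y~*(f) / S(f) * df,
# approximating the integral by a Riemann sum on a uniform frequency grid.
def inner_product(x_tilde, y_tilde, psd, df):
    return 4.0 * df * np.real(np.sum(x_tilde * np.conj(y_tilde) / psd))

def network_log_likelihood(data_tildes, template_tildes, psds, df):
    # Uncorrelated detector noises: log p = -1/2 sum_j <x_j - h_j | x_j - h_j>
    # (up to an additive constant that cancels in MCMC acceptance ratios).
    logl = 0.0
    for x, h, s in zip(data_tildes, template_tildes, psds):
        r = x - h
        logl += -0.5 * inner_product(r, r, s, df)
    return logl
```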
In the Markov chain Monte Carlo algorithm, a Markov chain crawls around the parameter space according to a specific set of rules: • At iteration n, the chain is in the state λ n . Choose a proposal state λ k with probability p( λ k | λ n ).
• Compute the acceptance probability $p_a$:
$$p_a = \min\left(1, \frac{p(\vec\lambda_k | \vec x, M)\, p(\vec\lambda_n | \vec\lambda_k)}{p(\vec\lambda_n | \vec x, M)\, p(\vec\lambda_k | \vec\lambda_n)}\right).$$
• Accept $\vec\lambda_{n+1} = \vec\lambda_k$ as the new state of the chain with probability $p_a$; otherwise $\vec\lambda_{n+1} = \vec\lambda_n$ (with probability $1 - p_a$). The distribution of parameters in the set of states $\vec\lambda_n$ of the chain following this procedure converges towards the posterior distribution as $n \to \infty$. Note that for any proposal to be included in this algorithm, the ratio $p(\vec\lambda_n | \vec\lambda_k)/p(\vec\lambda_k | \vec\lambda_n)$ needs to be computed; see Section 3.3. We derive for the first time in the literature a proposal that generates jumps in parameter space that exploit a near-degeneracy in the detector responses for the three-detector case. Using such a proposal in the context of an MCMC generates moves that efficiently explore the extrinsic dimensions of the posterior distribution function, even when the posterior is multi-modal with widely separated, narrow peaks in the extrinsic dimensions.
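For concreteness, a generic Metropolis-Hastings loop with a pluggable jump proposal might look like the following hedged Python sketch (all names are illustrative); the proposal returns both the candidate state and the density ratio $p(\vec\lambda_n|\vec\lambda_k)/p(\vec\lambda_k|\vec\lambda_n)$ appearing in $p_a$.

```python
import numpy as np

def mcmc(log_posterior, propose, lam0, n_iter, seed=0):
    """Metropolis-Hastings sampler; `propose(lam, rng)` must return
    (candidate, density_ratio) with density_ratio = q(lam|cand)/q(cand|lam)."""
    rng = np.random.default_rng(seed)
    lam = np.asarray(lam0, dtype=float)
    lp = log_posterior(lam)
    chain = [lam.copy()]
    for _ in range(n_iter):
        cand, dens_ratio = propose(lam, rng)
        lp_cand = log_posterior(cand)
        # Acceptance probability in log space to avoid overflow:
        log_pa = min(0.0, (lp_cand - lp) + np.log(dens_ratio))
        if np.log(rng.uniform()) < log_pa:
            lam, lp = cand, lp_cand
        chain.append(lam.copy())
    return np.array(chain)
```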
In this paper we first present the existing degeneracies involving the extrinsic parameters describing a binary coalescence in Section 2. In Section 3.1 we present the equations which we solve in Section 3.2 to generate proposed moves. In Section 3.3 we address detailed balance. We apply our proposal in our Markov chain Monte Carlo algorithm and describe the results in Section 4. Finally we conclude in Section 5.
Degeneracies between extrinsic parameters
There exists a near-degeneracy in the detector response to a gravitational wave involving the sky location (right ascension α and declination δ), the polarization, ψ, the distance d and the inclination ι of the source when three non-collocated detectors are used. In the following discussion we will restrict ourselves to the case of non-spinning signals for simplicity. Some of our approximations are inapplicable to spinning signals, but we expect that our jump proposal may still prove useful in the spinning case, particularly for signals that are weakly spinning.
The reflection of the true location of the source through the plane defined by the three detectors conserves the arrival time at each detector. This is why, in some three-detector analyses, two modes in the sky location are recovered; see Fig. 1 (left). The reflection condition keeps the arrival times of the signal at each detector, $\Delta_1$, $\Delta_2$ and $\Delta_3$, constant, with
$$\Delta_j(\alpha, \delta, t_c) = t_c - \vec L_j \cdot \hat S(\alpha, \delta)/c.$$
Here the detector location is labelled by the vector $\vec L_j$, and the source direction by the unit vector
$$\hat S(\alpha, \delta) = (\cos\alpha\cos\delta,\ \sin\alpha\cos\delta,\ \sin\delta).$$
This degeneracy includes the time parameter $t_c$ as well, since the reference time is at geocentre and the plane of the detectors does not in general include the centre of the Earth.
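A hedged Python sketch of this reflection follows; the sign convention in $\Delta_j$ and the vector names are assumptions of the sketch, not quotations of the paper's code. Because the in-plane component of the source direction is unchanged, the inter-detector delays survive, and a common shift of $t_c$ restores the absolute arrival times.

```python
import numpy as np

C_LIGHT = 299792458.0  # m/s

def reflect_sky_and_tc(S, tc, L1, L2, L3):
    """Reflect unit source direction S through the plane of detectors
    L1, L2, L3 (geocentric positions in metres) and adjust t_c so that
    Delta_j = t_c - L_j . S / c is unchanged at every detector."""
    n = np.cross(L2 - L1, L3 - L1)
    n /= np.linalg.norm(n)
    S_new = S - 2.0 * np.dot(S, n) * n      # in-plane component preserved
    d0 = np.dot(L1, n)                      # common plane offset: L_j . n = d0
    tc_new = tc - 2.0 * np.dot(S, n) * d0 / C_LIGHT
    return S_new, tc_new
```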
This particular degeneracy has been well documented and a jump proposal has been implemented involving the sky location and the reference time, see for instance (Veitch & Vecchio 2010). However, the detector network sensitivity pattern is not uniform on the sky. Any change in sky location will change the effective strength of the model template in each detector, and changes in polarization, inclination and distance are needed to compensate. Both sky positions in Fig. 1 (left) correspond to different values of polarization, inclination and distance. The center plot shows the same blobs in the distance-inclination space, and the right plot shows the correlation between right ascension and distance.
Formulation of the equations
The signal in detector $j$, $h_j$, is the sum of two polarisations (in the non-spinning case):
$$h_j(t) = F_{j+}\, H_+(t) + F_{j\times}\, H_\times(t),$$
where $F_{j+}$ and $F_{j\times}$ are the antenna beam patterns of the detector, relating the coordinate system centred on the detector to the coordinate system of the gravitational-wave source. $F_{j+,\times}(HA, \delta, \psi)$ are functions of the hour angle $HA$ (which is the right ascension $\alpha$ corrected for the Earth's rotation: the Greenwich sidereal time minus the observatory's longitude and minus the right ascension), the declination $\delta$ and the polarisation angle $\psi$ of the source. As a function of the right ascension, $F_{j+,\times}(HA, \delta, \psi) = F_{j+,\times}(\alpha, \delta, \psi; t_c)$. The antenna beam patterns are derived from the detector's three-dimensional 2nd-order response tensor $D$ (which relates the local coordinates of the detector to the geocentric reference system where $HA$, $\delta$ and $\psi$ are defined). For details and derivation, see (Creighton & Anderson 2012).
The waveform polarisations $H_{+,\times}(m_1, m_2, \iota, \phi, d, t_c)$ are functions of the masses $m_{1,2}$, the inclination $\iota$ (the angle between the line of sight and the orbital angular momentum), the phase at a reference time $\phi$, the distance to the observer $d$ and the time at coalescence $t_c$. In the non-spinning case, considering only the dominant 2-2 mode ($H_+$ and $H_\times$ are then related by a simple $\pi/2$ phase shift), they can be written as
$$H_+ = \frac{1 + \cos^2\iota}{2d}\, H(m_1, m_2, \phi), \qquad H_\times = \frac{\cos\iota}{d}\, H(m_1, m_2, \phi + \pi/2).$$
Abusing notation, from now on $H_{+,\times}$ refers to $H_{+,\times}(m_1, m_2, \phi)$, and we omit $t_c$, which simply provides an overall sliding of this component of the waveform (recall that $t_c$ also enters our analysis in Eq. 10). We now define two quantities of interest,
$$A_{j+} = F_{j+}\, \frac{1 + \cos^2\iota}{2d}, \qquad A_{j\times} = F_{j\times}\, \frac{\cos\iota}{d}.$$
The signal amplitude is then given by
$$R_j^2 = A_{j+}^2 + A_{j\times}^2. \qquad (30)$$
To keep the same likelihood values under a change of parameters, we keep constant for each detector $j$ the quantity $R_j^2$ (Eq. 30) and the arrival time $\Delta_j$ (Eq. 31). In the three-detector network this gives three amplitude constraints in addition to the three arrival-time constraints, leading to a system of 6 equations in 6 unknowns. The solutions form a set of measure zero, as expected; see for instance the narrow blobs (neither lines nor extended surfaces) in Fig. 1. (The posterior distribution is composed of two blobs instead of two points because of the finite signal-to-noise ratio.)
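A minimal Python sketch of the conserved per-detector amplitude follows; the antenna-pattern values are assumed to be supplied by a separate routine, and the intermediate names mirror the reconstruction above rather than the paper's notation.

```python
import numpy as np

def R_squared(F_plus, F_cross, iota, d):
    """Effective squared amplitude held fixed by the proposal (Eq. 30):
    R_j^2 = A_{j+}^2 + A_{jx}^2 with the inclination and distance factors."""
    A_plus = F_plus * (1.0 + np.cos(iota) ** 2) / (2.0 * d)
    A_cross = F_cross * np.cos(iota) / d
    return A_plus ** 2 + A_cross ** 2
```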
Solutions and proposal formula
Starting from a set of parameters $\alpha, \delta, t_c, \psi, \iota, d$, we want to compute a new set $\alpha', \delta', t_c', \psi', \iota', d'$ which conserves Eq. 30 and satisfies Eq. 31. We compute the quantities $R_j^2$ from Eq. 30. Using only Eq. 31 for each of the three detectors gives the new values $\alpha', \delta', t_c'$ from geometric arguments. The procedure consists of reflecting the sky position across the plane of the detectors and computing the corresponding $t_c'$. This procedure is described in the literature; see for instance (Veitch & Vecchio 2010) and references therein.
Detailed balance considerations
For this proposal to be useful in a Metropolis-Hastings Markov chain Monte Carlo (as in Section 4), one needs to compute the ratio of the probability densities on parameter space for particular jumps to be proposed,
$$\frac{p(\vec\lambda | \vec\lambda')}{p(\vec\lambda' | \vec\lambda)},$$
where $\vec\lambda' = J(\vec\lambda)$ is the point corresponding to $\vec\lambda$ under the mapping just described, which we denote by $J$, and $p(x|y)$ is the probability density for proposing point $x$ given that the current point is $y$. In our case, for the deterministic mapping the ratio of densities is given by the Jacobian,
$$\frac{p(\vec\lambda | \vec\lambda')}{p(\vec\lambda' | \vec\lambda)} = \left| \frac{\partial J}{\partial \vec\lambda} \right|. \qquad (45)$$
Unfortunately, the function on parameter space described above is quite complicated, and its Jacobian even more so. Rather than implementing the Jacobian directly, we use the following modified procedure for choosing a new parameter-space point $\vec\lambda'$ from $\vec\lambda$. First, we compute
$$\vec\lambda' = J(\vec\lambda) + \epsilon\, \vec n, \qquad (46)$$
where $\vec n$ is a randomly chosen vector of $N(0, 1)$ variates and $\epsilon$ is a scale factor that is much smaller than the dispersion we expect in the posterior about $\vec\lambda'$. Let
$$\vec{\bar n} = \left[ \vec\lambda - J(\vec\lambda') \right]/\epsilon. \qquad (47)$$
We do not need an analytic expression for the Jacobian to compute $\vec{\bar n}$; we only need to apply the mapping to $\vec\lambda'$ and subtract from $\vec\lambda$. The proposal probability density ratio is given by
$$\frac{p(\vec\lambda | \vec\lambda')}{p(\vec\lambda' | \vec\lambda)} = \frac{\varphi(\vec{\bar n})}{\varphi(\vec n)}, \qquad (48)$$
where $\varphi(x)$ is the PDF for the multivariate $N(0, 1)$ distribution. Based on the relation in Eq. 47, Eq. 48 is consistent with Eq. 45, but we need not have an explicit expression for $\partial J/\partial \vec\lambda$. Essentially, we have numerically computed the projection of the Jacobian in the $\vec n$ direction. We use the modified proposal, Eq. 46, in what follows.
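The whole procedure fits in a few lines. The following hedged Python sketch implements Eqs. 46-48; `mapping` stands for the full extrinsic-parameter map $J$ (sky reflection plus the amplitude-conserving solution) and is a placeholder here. The returned pair plugs directly into the generic MCMC loop sketched earlier.

```python
import numpy as np

def make_degenerate_proposal(mapping, eps):
    """Build a propose(lam, rng) closure realising the modified jump."""
    def propose(lam, rng):
        n = rng.standard_normal(lam.size)
        cand = mapping(lam) + eps * n            # Eq. 46
        n_back = (lam - mapping(cand)) / eps     # Eq. 47
        # phi(n_back)/phi(n) for the multivariate N(0,1) density (Eq. 48);
        # the normalisation constants cancel in the ratio.
        dens_ratio = np.exp(0.5 * (np.dot(n, n) - np.dot(n_back, n_back)))
        return cand, dens_ratio
    return propose
```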
Results from the jump proposal in a Markov chain Monte Carlo sampler
We have implemented the equations described in Section 3.1 as a proposal in a Markov chain Monte Carlo sampling code. We present the effect of including this proposal in Fig. 2 and compare with the standard sky-reflection proposal only in Fig. 3. We injected a known waveform from a non-spinning binary neutron star system ($m_1 = 1.4\,M_\odot$, $m_2 = 1.4\,M_\odot$), computed with post-Newtonian expansions (Blanchet et al. 2004), into simulated LIGO and Virgo noise at a signal-to-noise ratio of 20. The MCMC attempts to recover the posterior density using the same frequency-domain template model and marginalising over the phase parameter (Veitch & Del Pozzo 2013). In both simulations we started in the reflected extrinsic-parameter position with respect to the true position. While the chain using the standard sky-reflection proposal (Fig. 3) gets stuck in the wrong mode, the chain using our improved proposal (Fig. 2) finds the correct mode and samples both.
Conclusions
We described in this paper a proposal which allows for a much better exploration of the extrinsic parameter space for non-spinning gravitational-wave signals. It should still be helpful in the spinning case, whose leading-order behavior mirrors the non-spinning case; we plan to test this in future work. It may be possible that using an approximation beyond the quadrupole instead of Eq. 30 leads to a better handle on the spinning case, where there is no simple relation between $H_+$ and $H_\times$. It may also be necessary to include some intrinsic parameters to construct a more efficient proposal for spinning analyses, as precession of the orbital plane couples the spin parameters to the inclination. | 2019-04-13T07:46:35.693Z | 2014-02-01T00:00:00.000 | {
"year": 2014,
"sha1": "4d8791b495e3079ec6fe67d022101bdb21335290",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3af5d25846e4f6d2579018930dfaaf23b6542233",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
18992613 | pes2o/s2orc | v3-fos-license | Evaluation and establishment of a canine model of delayed splenic rupture using contrast-enhanced ultrasound
The aim of the present study was to establish a canine model of delayed splenic rupture (DSR). A total of 15 mongrel dogs were anesthetized and laparotomized. The hematomas were observed following an injection of heparin. The hematomas were ruptured. The severity of the spleen rupture was evaluated and the intra-abdominal free liquid was measured. The splenic hematomas in the dogs continued to form and the hematoma area gradually increased. The hematomas were ruptured after impacting the abdominal wall. The spleens were damaged, and conventional ultrasonography showed intra-abdominal free liquid. These conditions were demonstrated via computed tomography scanning. A DSR canine model was established successfully.
Introduction
Delayed splenic rupture (DSR) is a rare type of splenic injury, which was first reported by Baudet in 1902. Baudet described the syndrome of delayed post-injury splenic rupture, in which the patient has a latent period with few or no symptoms immediately after the trauma, but develops evidence of intraabdominal hemorrhage one or more days later (1,2). DSR is a rare but well-recognized clinical entity, which has been defined as the late occurrence of symptoms and signs in patients experiencing no initial hemodynamic instability or clinical symptoms 48 h or more after injury (3). DSR contributes to a significant mortality rate (5-15%) (1,4) compared with that of acute splenic rupture (1%) (5,6). Thus, its rapid diagnosis is urgently required.
The management and diagnosis of DSR is a result of collective investigations, in which a physical examination used to be the first workup. Since the mid-1980s, however, computed tomography (CT) scanning has become a mainstay in the evaluation of intra-abdominal injury in hemodynamically stable patients. The term 'delayed splenic rupture' used before the CT scan era simply referred to a delayed diagnosis of splenic injury that evolved into a rupture. Isolated reports of trauma patients with initially normal CT scans and delayed splenic rupture have been published since the routine use of CT scan.
Some patients with mild symptoms and negative CT results underwent non-operative management and were discharged from the hospital. However, they returned after a few days with hypovolemic shock, and the repeat CT scan showed a subcapsular splenic hematoma. The most important factor in DSR was that little attention had been paid to the delayed development of the splenic hematoma, particularly in the period between hospital discharge and the subsequent rupture.
Nevertheless, a CT scan is usually not the first technique used in the diagnosis of DSR, although it has certain advantages. Ultrasonography (US) plays an important role in assessing the traumatized spleen, in view of its being both easily accessible and cost-effective, hence delivering rapid results in screening. It has been widely used for detecting abdominal free fluid in patients with abdominal trauma due to its high sensitivity and availability, even though it cannot reliably determine the exact site of an active hemorrhage (7)(8)(9). On the other hand, CT has been proven to be efficient for trauma evaluation. However, the patient requires exposure to X-irradiation and needs to be removed from the emergency department (10,11). In addition, contrast-enhanced ultrasound (CEUS) has improved greatly and been applied widely. It shows the extent and size of intra-abdominal injury and active bleeding, as has been demonstrated in various animal experiments and clinical studies (12).
The experimental outline of the current study was inspired by the following points: i) little attention has been paid to the delayed development of a splenic hematoma; ii) DSR is extremely dangerous and requires immediate medical attention; iii) the study provides a canine model of DSR for additional investigation in the diagnosis and treatment of DSR.
The current study on canines was delineated to monitor the formation, development and breakdown of the splenic hematoma with the guidance of CEUS.
Materials and methods
Animal model for hematomas of the spleen. All experiments abided by the guidelines issued by the National Institute of Health for the Care of Laboratory Animals and were performed according to a protocol approved by the Animal Care and Use Committee of our institution. A total of 15 mongrel dogs, aged 2-3 years and weighing 18-22 kg, with a health certificate [license number: SYXK (Beijing) 2007-0004] were used in the current study. General anesthesia was induced via intravenous injection of 30 mg/kg pentobarbital sodium (3%) and was maintained by intramuscular injection of 5 mg/kg pentobarbital sodium. Trauma was not induced until the dogs were successfully anesthetized. The spleen was then exteriorized from the peritoneal cavity by a median laparotomy leaving intact the vascular pedicle. At this point, the organ was directly inspected to rule out other pathologies or already existing subcapsular hematomas. In the center of the organ, a hematoma was created by pinching both sides of the spleen (Fig. 1). The needle (16-gauge) was inserted into the hematoma through the normal splenic tissue. Consequently, the tissue inside the hematoma was damaged by swinging the needle and causing a hemorrhage. Heparin (5000 U) was injected into the hematoma through the normal spleen tissue (Fig. 2), and was used to maintain the continuous bleeding status in the organ. Moreover, α-cyanoacrylate (1 ml; Guangzhou Baiyun Medical Adhesive Co., Ltd., Guangzhou, China) was injected into the pathway to close the puncture in the normal splenic tissue. Surgicel (Johnson & Johnson, USA) was pressed onto the surface of the spleen to avoid possible bleeding at the puncture point (Fig. 3), and the abdominal incision was closed in layers. Conventional US and CEUS were performed subsequent to creating a hematoma. The location, shape, size and sonographic appearance of the hematoma were registered, along with the area of the hematoma. An intravenous drip of heparin (200 U/kg/8 h, diluted by 5% sodium chloride) was administered in order to maintain the anticoagulation.
Animal model for delayed splenic rupture. After 72 h, the hematoma was ruptured by impacting the abdominal wall of the dogs using the impacting device (Fig. 4). The impacting device consisted of a supporter, an impacting handle, a piston handle, a power bullet and a power-actuated fastening device (13). The impacting force was recorded by a mechanical force transducer data recorder. After the bullet was loaded, the power-actuated fastening device was fixed into the supporter. The impacting handle was then inserted into the gun barrel, aimed at the splenic region and the trigger was pulled. The force of the bullet pushed the piston handle and the impacting handle onto the designated target region, which was located using the conventional sonography before impact. The impacting force was calculated based on the weight of the dog at 0.28 kN/kg. In the current experiment, the force was 4.8-5.6 kN (the mean, 5.3±0.3 kN).
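As a quick, hedged consistency check on that scaling: for a dog near the middle of the reported weight range, say 19 kg, the prescribed impact is 0.28 kN/kg × 19 kg ≈ 5.3 kN, which matches the reported mean of 5.3 ± 0.3 kN.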
Conventional US, CEUS and CT were performed to observe the conditions prior and subsequent to rupturing the hematoma, as well as the hemorrhage. Splenectomy was performed, the spleen specimens were harvested and evaluated by gross examination, after observation.
US contrast agent. The US contrast agent used in the current study was SonoVue (Bracco, Milan, Italy), a suspension of stabilized sulfur hexafluoride (SF6) microbubbles in saline (14)(15)(16)(17). The bubble concentration is in the range of 1×10^8 to 5×10^8 microbubbles/ml, with 90% of microbubbles <8 μm in diameter. Supplied as a lyophilized product in a septum-sealed vial, the contrast agent was reconstituted by injecting 5 ml of saline through the septum, followed by manual agitation. Following a bolus injection, the contrast agent circulated in the cardiovascular system for up to 5 min.
Equipment and examination. Conventional US and CEUS were performed using a CX50 system (Philips Medical Systems). CEUS employed the pulse-inversion harmonic and energy-modulated technique at a low acoustic power (a mechanical index of 0.07), which detects not only the nonlinear second harmonic response of microbubbles, but also the strong non-linear fundamental component. This process increased the signal-to-noise ratio by 15-20 dB and provided a much stronger contrast signal. In the current study, the pulse-inversion harmonic and energy-modulated technique was used to observe the sonographic appearance of the hematoma. The scan settings during the experiment (including the gain, the scanning depth and the time gain control) were optimized for each region independently. The focus was set on the deeper layer of the lesion examined. Conventional US was performed first, and SonoVue™ (0.025 ml/kg) was administered as a quick bolus through the accessory cephalic vein. Scanning began immediately after each injection, lasting for 3-5 min. Digital images were recorded as single-frame images and multiple cine loops on the hard disk of the scanner, for off-line analysis.
Results
Establishment of the model. Splenic hematoma was successfully induced on 9 dogs (60.0%), failed on 4 (26.7%), while 2 dogs (13.3%) died. Consequently, the death was anatomically confirmed to have been caused by the bleeding after splenic rupture. CEUS showed low perfusion in the spleen, which was significantly reduced after 72 h for the dogs without splenic hematoma. Thus, the 6 dogs mentioned above were removed from the experiment. The remaining 9 dogs with splenic hematoma were used to analyze the data.
Observation of hematomas through conventional US, CEUS and CT.
Conventional US showed a heterogeneous hypoechoic lesion in the spleen with a poorly defined margin. No significant change was observed after 24 h of observation. All the hematoma lesions were clearly identified via CEUS after injecting SonoVue, which showed the lesions as anechoic perfusion defects in the arterial, portal and late phases, lasting for ∼5 min. The areas of the hematoma at 4 different time-points gradually increased (Fig. 5). The CT examination performed before the rupture demonstrated that the volume of the spleen had increased, with a round- or oval-shaped region of lower density (Fig. 6).
Observation on hematomas after rupture by conventional US, CEUS and CT. Conventional US showed a slightly hyperechoic region, with an unclear boundary, intraperitoneal and perisplenic fluid. CEUS showed perisplenic hemoperitoneum, discontinued spleen capsule and large lamellar and irregular anechoic areas in the parenchyma with a clear margin. On the other hand, the CT examination showed the irregular and unenhanced areas in the spleen parenchyma with discontinued spleen capsule (Fig. 7).
Gross anatomy. Nine dogs underwent splenectomy, in which the gross specimen revealed a splenic parenchyma hematoma and varying degrees of rupture (Fig. 8).
None of the dogs had an adverse reaction during the CEUS examination, nor had any complications after the splenectomy, and all of them recovered rapidly.
Discussion
The etiology of delayed rupture is secondary to blunt trauma, such as falls, altercations, sports injuries and motor vehicle accidents (18). Countless theories have attempted to explain the mechanism of DSR, including the secondary hemorrhage and the delayed diagnosis of splenic injury (19)(20)(21). The widely accepted mechanism involves the formation of a subcapsular hematoma, which develops an increasing tension and eventually ruptures through the capsule, causing intraperitoneal bleeding (22).
Aside from ultrasonography, focused assessment with sonography for trauma (FAST) has emerged as a rapid and efficient method for the detection of hemoperitoneum in blunt abdominal injury patients (23). FAST has a comparable sensitivity and specificity for the detection of abdominal free fluid in patients with abdominal trauma due to its high sensitivity and availability (7,8,24). However, in a previous study, conventional US detected only splenomegaly and an irregular splenic border. Conventional US showed an irregular splenic border, discontinuity of the splenic capsule, and perisplenic and intraperitoneal fluid after the hematoma was ruptured. However, in a 24-h observation interval, conventional US could not reliably determine the site of the hemorrhage, nor the active hemorrhage (9).
Since the mid-1980s, the CT scan has become a mainstay in the evaluation of intra-abdominal injury in hemodynamically stable patients, and has been proven to be efficient for trauma evaluation (10). The routine use of the CT scan can provide a more accurate assessment of the range and severity of the injury and the volume of bleeding, contributing to a more accurate diagnosis of splenic injury. Fagelman et al (25) described a patient whose post-injury CT scans were normal but whose repeat scan, after 48 h, showed a splenic rupture caused by the secondary injury. In the current study, 2 of the 9 dogs (22.2%) were normal on the first detection, but showed subcapsular hematomas and perisplenic fluid collection on the second scan. The reason for the false negatives may be artifacts or interference from surrounding tissues, making the injury difficult to detect, or an early CT scan taken before the subcapsular hematoma had bled enough (26).
CEUS demonstrates notable advantages in the evaluation of intra-abdominal injuries, and is superior to conventional US in diagnosing abdominal trauma (27)(28)(29). It also demonstrates the injury site as a non-and/or hypo-enhanced region with a clear extent and contrast material indicating an active hemorrhage. In the current study, CEUS showed the splenic hematomas and the significant development of the diameter of the hematomas within 48 h. CEUS clearly showed the boundary and the range of the hematomas in all the dogs (100%).
Compared with the CT scan, one of the advantages of CEUS is its rapid diagnosis of hematoma lesions. In the past, patients usually needed to be transferred to the CT scanning room for examination and diagnosis. However, CEUS only needs 8-10 min to observe the injury, determine the grade and support appropriate triage decisions, whereas no investigations have accurately described the grade of the trauma when using conventional US. Another advantage of CEUS is that it is free of radiation. In the current study, the dogs required over 4-fold the exposure time to the imaging detector. However, 4-fold the exposure time to CEUS showed no adverse reactions, whereas 4-fold the exposure time to the radiation of the CT scan cannot be tolerated.
The animal model used in the current study has the following advantages: i) the modeling process was simple; ii) the splenic hematomas of the dogs had a long duration, with a stable status and strong comparability before and after the experiment was observed; iii) it can be used to verify the formation process of the DSR and the treatment of an experiment research in future trials.
The current experimental study has several limitations: i) it needs to be verified with a larger sample size, in order to obtain a more uniform number of dogs and splenic hematomas; ii) dogs have a strong hemostatic ability, which differs from splenic ruptures that commonly occur in humans.
In conclusion, the animal model of DSR is easy to fabricate, reliable for operation and can be evaluated using CEUS, which is more sensitive than conventional US and the CT scan. The DSR animal model has some similarities with actual clinical patients and may play an important role in clinical research, in view of being safe and effective. | 2017-06-20T18:43:54.963Z | 2012-06-14T00:00:00.000 | {
"year": 2012,
"sha1": "f0dd0a3f1301ac50393172869818999e10fa9a92",
"oa_license": "CCBYNC",
"oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2012.948/download",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f0dd0a3f1301ac50393172869818999e10fa9a92",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18311496 | pes2o/s2orc | v3-fos-license | Searches for Multibaryon States with $\Lambda$ Hyperon Systems in pa Collision at 10 Gev/c
Experimental data in the form of stereo photographs from the 2m propane bubble chamber (LHE, JINR) have been analyzed in a search for exotic multibaryon metastable and stable states. A number of peculiarities were found in the effective mass spectra of: 1) the $\Lambda \pi^{\pm}$, $\Lambda \pi^+ \pi^-$, $\Lambda p$, $\Lambda p p$, $\Lambda \pi p$, $\Lambda \Lambda$ and $\Lambda K^0_S$ subsystems. The observed well-known $\Sigma^{*+}$(1385), $\Lambda^*(1600)$ and $K^{*\pm}$(892) resonances are good tests of this method. The width of $\Sigma^{*-}(1385)$ for the p+A reaction is two times larger than that presented in the PDG. The $\Lambda \pi^-$ spectrum shows an enhancement in the mass range of 1345 MeV/$c^2$, which is interpreted as a $\Xi^-$ stopped in the nucleus. The cross section for stopped $\Xi^-$ production is $\approx$ 8 times larger than that obtained with the FRITIOF model under the same experimental conditions.
There are a few topical problems of nuclear and particle physics which concern the subject of this report 8) - 14) . These are the following: in-medium modification of hadrons, the origin of hadron masses, the restoration of chiral symmetry, the confinement of quarks in hadrons, and the structure of neutron stars. Strange multi-baryonic clusters are an exciting possibility to explore the properties of cold dense baryonic matter and non-perturbative QCD. Multi-quark states, glueballs and hybrids have been searched for experimentally for a very long time, but none is established.
Experiment
The full experimental information from more than 700000 stereo photographs was used to select events via the V$^0$ channel 8) . The momentum resolution for charged particles is found to be $<\Delta P/P>$ = 2.1% for stopped particles and $<\Delta P/P>$ = 9.8% for non-stopped particles. The mean values of the measurement errors for the depth and azimuthal angles are ≤0.5 degrees. The masses of the identified 8657 events with a $\Lambda$ hyperon and 4122 events with a $K^0_S$ meson are consistent with their PDG values 8) . The experimental total cross sections are 13.3 and 4.6 mb for $\Lambda$ and $K^0_S$ production in p+C collisions at 10 GeV/c. Protons can be identified by relative ionization over the momentum range 0.150 < P < 0.900 GeV/c. The background has been obtained by three methods: a polynomial function, angle mixing, and the FRITIOF model 11) . The statistical significance of resonance peaks was calculated as $N_P/\sqrt{N_B}$, where $N_B$ is the number of counts in the background under the peak and $N_P$ is the number of counts in the peak above background.
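The significance estimate is simple enough to restate as a one-line Python computation; the background count used below is an assumed illustrative value, chosen only so that the output is of the order quoted later for the $\Sigma^{*-}(1385)$ peak.

```python
import math

# S = N_P / sqrt(N_B): peak counts above background over the Poisson
# fluctuation of the background under the peak.
def significance(n_peak_above_bg, n_bg_under_peak):
    return n_peak_above_bg / math.sqrt(n_bg_under_peak)

print(round(significance(680, 3600), 1))  # -> 11.3 for these example counts
```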
The $\Lambda\pi^-$ effective mass distribution for all 6730 combinations, with bin sizes of 18 and 12 MeV/c$^2$, is shown in Figs. 1b and 2a. The solid curve (Fig. 1b) is the sum of the background (obtained by the polynomial method) and one Breit-Wigner resonance ($\chi^2$/N.D.F. = 39/54). There is a significant enhancement in the mass range of 1372 MeV/c$^2$ with 11.3 S.D. and $\Gamma$ = 93 MeV/c$^2$. The cross section for $\Sigma^{*-}$ production (≈680 events) is ≈1.3 mb at 10 GeV/c for the p+C interaction. The observed width for $\Sigma^{*-}$ is ≈2 times larger than the PDG value. One possible explanation is nuclear-medium effects on the invariant mass spectra of hadrons decaying in nuclei 2) (Fig. 1b). There are negligible enhancements in the mass regions of 1410, 1520 and 1600 MeV/c$^2$. The cross section for $\Xi^-$ production (≈60 events) stopped in the nuclear medium is 315 µb at 10 GeV/c for the p+propane interaction. The observed number of events with $\Xi^-$ identified via the weak decay channel is 8 (w = 1/e$_\Lambda$ = 5.3, where w is the full geometrical weight of the registered $\Lambda$s) 9) . The experimental cross section for $\Xi^-$ identified via the weak decay channel 9) is then 44 µb and 11.7 µb in p+propane and p+C collisions, respectively, which conforms with the FRITIOF calculation. The observed experimental cross section for stopped $\Xi^-$ (60 events) is 8 times larger than the cross section obtained with the FRITIOF model under the same experimental conditions. The width of $\Sigma^{*-}$(1385) for the p+A reaction is two times larger than that presented in the PDG. The figures also show an observed $\Sigma^{*-}$(1480) correlation, which agrees with the report from the SVD-2 collaboration 10) . There are enhancements in the mass regions of 2100, 2150, 2225 and 2353 MeV/c$^2$ (Fig. 2b). There are many published articles 10) - 14) on the ($\Lambda$p) invariant mass with identified protons in the momentum range 0.350 < P$_p$ < 0.900 GeV/c. There are significant enhancements in the mass regions of 2100, 2175, 2285 and 2353 MeV/c$^2$. Their excess above background by the second method is 6.9, 4.9, 3.8 and 2.9 S.D., respectively. There is also a small peak in the 2225 MeV/c$^2$ mass region (2.2 S.D.). Figure 2c shows the invariant mass of 4011 ($\Lambda$p) combinations with a bin size of 15 MeV/c$^2$ for stopped protons in the momentum range 0.14 < P$_p$ < 0.30 GeV/c. The dashed curve is the sum of an 8th-order polynomial and 4 Breit-Wigner curves with $\chi^2$ = 30/25 from fits (Table 1).
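For orientation, a hedged Python sketch of this kind of fit follows: a Breit-Wigner peak on top of a polynomial background, fitted to a binned invariant-mass spectrum. The histogram arrays `masses` and `counts` and the starting values are illustrative assumptions, chosen roughly in the $\Sigma^{*-}$(1385) region.

```python
import numpy as np
from scipy.optimize import curve_fit

# Non-relativistic Breit-Wigner line shape with full width `gamma`.
def breit_wigner(m, amp, m0, gamma):
    return amp * (gamma / 2.0) ** 2 / ((m - m0) ** 2 + (gamma / 2.0) ** 2)

# Peak plus polynomial background; extra args are polynomial coefficients.
def peak_plus_background(m, amp, m0, gamma, *poly):
    return breit_wigner(m, amp, m0, gamma) + np.polyval(poly, m)

# Example call (commented out because it needs real histogram data):
# popt, pcov = curve_fit(peak_plus_background, masses, counts,
#                        p0=[200.0, 1.372, 0.093] + [0.0] * 7)
```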
(Λ, p) and (Λ, p, p) spectra
The $\Lambda$pp effective mass distribution for 3401 combinations with identified protons with momentum P$_p$ < 0.9 GeV/c is shown in Figure 3a (Table 1). These peaks from the $\Lambda$p and $\Lambda$pp spectra are partly confirmed by experimental results from FOPI (GSI), FINUDA (INFN), OBELIX (CERN) and E471 (KEK).
(Λ, Λ) spectrum
A significant enhancement is observed in the mass region of 2360 MeV/c$^2$ (4.5 S.D.) in the ($\Lambda$, $\Lambda$) spectrum shown in Figure 3b (137 combinations). This peak conforms with theoretical predictions and with an earlier published result from a neutron exposure by the PBC method, albeit with very poor statistics. There is also a small enhancement in the mass range of 2525 MeV/c$^2$ (3.0 S.D.) (Table 1).
The ($\Lambda$, p, $\pi^-$) effective mass distribution (Fig. 3c) for 2975 combinations with identified protons in the momentum range P < 0.9 GeV/c can be described satisfactorily by a 6th-order polynomial function ($\chi^2$/N.D.F. = 1). However, the background from the FRITIOF model does not describe the experimental distribution. The sum of a Breit-Wigner (with mass 2520 MeV/c$^2$ and experimental width 280 MeV/c$^2$) and the FRITIOF model also describes the $\Lambda$p$\pi^-$ effective mass distribution satisfactorily. Therefore, one possible interpretation of this peak is that it is a reflection of the phase-space distribution. The earlier published observation of a resonance with mass 2495 MeV/c$^2$ and width 200 MeV/c$^2$ in the $\Lambda$p$\pi^-$ spectrum by the PBC method in a neutron exposure (7 GeV/c) is not uniquely confirmed. The exotic states earlier observed and published for the $\Lambda\pi^+\pi^+$ spectrum (in the mass ranges of 1704, 2071 and 2604 MeV/c$^2$), obtained with small statistics in a neutron exposure by the PBC method, are not observed here.
Acknowledgements
The work was partly supported by the grant of RFBR 07-02-08644 and org.
Figure 3: a) $\Lambda$pp spectrum with identified protons, P$_p$ < 0.9 GeV/c; b) $\Lambda\Lambda$ spectrum; c) $\Lambda$p$\pi^-$ spectrum with identified protons, P$_p$ < 0.9 GeV/c; d) $\Lambda\pi^+\pi^-$ spectrum for positive tracks in the momentum range P$_{\pi+}$ < 0.9 GeV/c. The dashed histogram shows events simulated by FRITIOF. The experimental background is the dashed curve. | 2008-01-22T08:38:26.000Z | 2007-12-07T00:00:00.000 | {
"year": 2007,
"sha1": "e4592e99bc22905d660f13747ec68231a181b9cb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e4592e99bc22905d660f13747ec68231a181b9cb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
25118195 | pes2o/s2orc | v3-fos-license | Short-Term High- and Moderate-Intensity Training Modifies Inflammatory and Metabolic Factors in Response to Acute Exercise
Purpose: To compare the acute and chronic effects of high-intensity intermittent training (HIIT) and steady-state training (SST) on the metabolic profile and inflammatory response in physically active men. Methods: Thirty recreationally active men were randomly allocated to a control group (n = 10), HIIT group (n = 10), or SST group (n = 10). For 5 weeks, three times per week, subjects performed HIIT (5 km: 1-min runs at 100% of maximal aerobic speed interspersed with 1-min passive recovery) or SST (5 km at 70% of maximal aerobic speed), while the control group did not perform training. Blood samples were collected at fasting (~12 h), pre-exercise, immediately post, and 60 min post an acute exercise session (pre- and post-5 weeks of training). Blood samples were analyzed for glucose, non-ester fatty acid (NEFA), and cytokine (IL-6, IL-10, and TNF-α) levels through a three-way analysis (group, period, and moment of measurement) with repeated measures on the second and third factors. Results: There was an effect of moment of measurement (acute session), with greater values of TNF-α and glucose immediately post-exercise compared with pre-exercise, independently of group or training period. For IL-6 there was an interaction between group and moment of measurement (acute session): the increase occurred immediately post-exercise and 60 min post in the HIIT group, while in the SST group the increase was observed only 60 min post, independently of training period. For IL-10, there was an interaction between training period (pre- and post-training) and moment of measurement (acute session): pre-training, pre-exercise values were lower than immediately and 60 min post-exercise; post-training, pre-exercise values were lower than immediately post-exercise, and immediately post-exercise values were lower than 60 min post. It was also observed that values immediately post-exercise were lower pre- than post-training, all results being independent of intensity (group). Conclusion: Our main result points to an interaction (acute and chronic) for IL-10, showing attenuation after the training period independent of exercise intensity.
INTRODUCTION
The benefits of an active lifestyle are well-known, since regular practice of exercise imposes a series of challenges on bioenergetic pathways and active skeletal musculature, resulting in metabolic adaptations (Rivera-Brown and Frontera, 2012). Furthermore, physical exercise promotes increases in the immunological function principally through anti-inflammatory response, mediated by cytokines (Pedersen, 2009;Neto et al., 2011). These modifications depend on fundamental aspects of the training such as intensity, duration, and session volume (Pedersen, 2009;Neto et al., 2011;Lira et al., 2012).
Studies have evidenced the efficiency of endurance training programs, promoting fat loss, and improving aerobic capacity and cardiorespiratory benefits, among others (Sigal et al., 2014;Huang et al., 2016). However, more recently, studies have shown that high-intensity intermittent training (HIIT) also leads to similar, or even higher improvement in the same variables when compared with steady state training (SST) (Robinson et al., 2015;Franchini et al., 2016;Gerosa-Neto et al., 2016;Panissa et al., 2016).
Cytokines exert several functions that act on different cell types and have a crucial role in energy metabolism (Pedersen and Febbraio, 2008). For example, muscle contraction leads to activation of the c-Jun N-terminal kinase (JNK/AP-1) and mitogen-activated protein kinase (MAPK) in muscle cells that raises Interleukin 6 (IL-6) and Tumor necrosis factor alpha (TNF-α) levels immediately in response to acute aerobic exercise and act as a cross-talk between skeletal muscle and immune cells (Pal et al., 2014). They have been considered energetic sensors capable of signaling in a hormone-like manner to mobilize extracellular glucose and induce pronounced lipolysis during exercise (Febbraio and Pedersen, 2005;Kim et al., 2015).
The increase in IL-6 is closely related to the muscle mass involved in contractile activity; exercise modalities that involve a large number of muscle groups present more pronounced increases in IL-6 (Pedersen and Febbraio, 2008). In addition, the exercise intensity also plays a role in the magnitude of this response, with high-intensity exercise leading to a greater increase in IL-6 post-exercise (Cabral-Santos et al., 2015). Furthermore, Interleukin-10 (IL-10) and Interleukin 1 receptor antagonist (IL-1ra) levels increase in response to exercise, and their suggested function is to prevent exacerbation of the proinflammatory response (Lira et al., 2015). IL-10 increases in response to HIIE in a similar manner to SSE when session volume is matched (Cabral-Santos et al., 2015). Although acute responses are known, it is important to investigate whether these acute modifications change chronically. This observation can be made in a fasted state, as in the majority of studies; however, it is also important to verify the acute response to exercise after a training period (Zwetsloot et al., 2014; Monteiro et al., 2017).
In this context, the aims of the present study were to analyze the effects of 5 weeks of HIIT or SST on energetic molecules (glucose and non-ester fatty acid levels), and systemic cytokine parameters (IL-6, IL-10, and TNF-α levels) in an acute exercise bout performed before and after the training period.
Subjects
Men, non-obese and physically active (BMI ≤ 25;WHO, 2000), were invited to participate in the study through divulgation of the project in social networks, printed posters, and email lists of students and employees at the Universidade Estadual Paulista-Campus Presidente Prudente. Thirty subjects (age 26.36 ± 4.19 years, weight 74.37 ± 9.26 kg, height 1.77 ± 0.06 m, and peak oxygen uptake 52.82 ± 4.96 mlkg −1 min −1 ) were enrolled for the present study. The participants presented a health and neuromuscular status that ensured their ability to complete the study protocol. Written informed consent was obtained from all subjects after they had been informed about the purpose and risks of the study. All procedures of this study were approved by the Research Ethics Committee for studies involving human participants of the State University (Unesp), School of Technology and Sciences, Presidente Prudente/SP (53297815.8.0000.5402).
Our primary hypothesis was that changes in inflammatory markers in the SST and HIIT groups of men after 5 weeks of training would be statistically significant, with a power (1 - type II error) of 0.80 and a type I error of 0.05, based on IL-10. For this hypothesis, we used a study that measured differences between both protocols (Wadley et al., 2015) and studies that measured IL-6 pre- and immediately post-exercise with a similar protocol (high-intensity intermittent exercise) (Meckel et al., 2009, 2011; Leggate et al., 2010; Lira et al., 2015). Before conducting the study we verified the sample size needed (n = 6) using G*Power 3.1 software (Düsseldorf, Germany).
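An analogous a-priori check can be sketched in Python; the effect size below is a placeholder (not the value the authors entered in G*Power), and a paired-style t-test is used as a simple stand-in for the full repeated-measures design.

```python
from statsmodels.stats.power import TTestPower

# Solve for the sample size that reaches power = 0.80 at alpha = 0.05
# for an assumed (illustrative) standardized effect size.
n = TTestPower().solve_power(effect_size=1.4, alpha=0.05, power=0.80)
print(round(n))  # required sample size under these assumptions
```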
Study Design
Subsequently, subjects were stratified into three groups: HIIT (n = 10; exercised 1:1, i.e., 1 min of running at 100% of the velocity corresponding to maximal aerobic speed (MAS) followed by 1 min of passive recovery, until completing a total volume of 5 km per session); SST (n = 10; exercised continuously at 70% of MAS, completing a total volume of 5 km per session); and control group (CG) (n = 10; continued their training routine: two university football players, five individuals who participated in a local CrossFit group, one amateur jiu-jitsu practitioner, and two military men who performed regular physical exercise). The groups underwent 5 weeks of aerobic training with a frequency of three times a week on a treadmill, except for the CG, which performed no intervention. The participants were submitted to an incremental test and anthropometry. Blood was collected from the participants in two acute sessions, in the first and last training sessions. All pre-intervention evaluations were repeated under identical conditions after 5 weeks. Figures 1, 2 present these acute and chronic evaluations.
Incremental Test for Determination of Maximum Speed and Peak Oxygen Consumption
The subjects were submitted to an incremental test for the determination of aerobic fitness on a treadmill (Inbramed, model MASTER CI, Brazil), with measurement of maximum oxygen consumption (Quark PFT Ergo, Cosmed, Rome), until voluntary exhaustion (see Cabral-Santos et al., 2015). The initial speed was set at 8 km·h⁻¹ with an increase of 1 km·h⁻¹ every 2 min. The MAS was taken as the speed of the final completed stage. If the subjects stopped before the end of a stage, the MAS was determined according to Kuipers et al. (1985).
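A minimal Python sketch of this determination, assuming the commonly used Kuipers et al. (1985) correction (speed of the last completed stage plus the completed fraction of the final stage times the increment), follows; the example values are illustrative.

```python
def mas_kuipers(v_last_completed_kmh, t_in_final_stage_s,
                stage_duration_s=120.0, increment_kmh=1.0):
    """MAS = v_completed + (time into final stage / stage duration) * increment."""
    fraction = t_in_final_stage_s / stage_duration_s
    return v_last_completed_kmh + fraction * increment_kmh

# e.g. stopping 60 s into the stage after completing 14 km/h -> 14.5 km/h
print(mas_kuipers(14.0, 60.0))
```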
High-Intensity Intermittent Training
Subjects performed a 5 km run intermittently; being 1-min at MAS followed by 1-min of passive recovery (the subjects remained standing or sitting after each exercise bout). The general warm-up was performed at 50% of MAS for 5 min. Subjects performed training three times per week on nonconsecutive days.
Steady State Training
Subjects performed a 5 km run continuously at 70% of MAS (determined in the incremental test) on the treadmill. The general warm-up was performed at 50% of maximum speed for 5 min. Subjects performed training three times per week on non-consecutive days.
Acute Session
The volunteers were randomly divided into two groups (HIIE or SSE) and performed a controlled acute session on the first (pre-training) and last (post-training) exercise day of the 5-week training period (chronic effect). On the day of the acute sessions, all volunteers underwent the first fasting blood collection (8-12), then ingested a standard breakfast (consisting of yogurt, toast, and cottage cheese) with an energy value stipulated according to body composition (25% of daily energy needs), distributed between carbohydrates (52%), lipids (35%), and proteins (13%). After breakfast, the volunteers remained at rest for 1 h, and then the second blood collection occurred. After the second blood collection, the volunteers began the acute training session. New blood collections took place immediately after the training session, as well as 30 min (IL-10 only) and 60 min after the end of the exercise session (acute effect).
Blood Samples
The blood samples (15 ml) were immediately allocated into two 5 ml vacutainer tubes (Becton Dickinson, BD, Juiz de Fora, MG, Brazil) containing EDTA for plasma separation and into one 5 ml dry vacutainer tube for serum separation. The blood was centrifuged at 3,000 rpm for 15 min at 4 • C. Serum and plasma were then stored in Eppendorf plastic tubes and stored at −20 • C for future analysis.
Statistical Analysis
Data normality was verified using the Shapiro-Wilk test and descriptive data are shown as means and standard deviations. Two-way analysis of variance (ANOVA) with repeated measures was used to compare the differences in metabolic variables and inflammatory markers between groups (control, HIIT, and SST) at baseline (fasting) and across the training period (pre- and post-5 weeks of exercise). Three-way ANOVA with repeated measures was applied to compare the inflammatory and metabolic responses to the acute exercise session according to group (HIIT and SST), training period (pre- and post-5 weeks), and moment of measurement of the blood samples in the acute session (at rest, immediately and 60 min post-exercise). Statistical significance was set at 5% for all analyses and the calculations were conducted using SPSS, version 17.0 (SPSS Inc., Chicago, IL).

RESULTS

Table 1 presents the comparison between baseline values (metabolic variables and inflammatory markers) of the volunteers, pre- and post-5 weeks of HIIT and SST, as well as the control group. In this table, which considered only the fasted values pre- and post-training, there was a main effect of group for glucose (F = 5.29; p = 0.012; partial η² = 0.282), with the values of the control group being greater than HIIT and SST (p = 0.018; p = 0.042, respectively). [Table 1 notes: Values are mean ± standard deviation. *, different from the other groups, p < 0.05. IL-6, Interleukin 6; IL-10, Interleukin 10; TNF-α, Tumor necrosis factor α; NEFA, non-ester fatty acid.]

Table 2 presents the values of IL-6, TNF-α, IL-10 and glucose in the acute exercise sessions performed pre- and post-5 weeks of training at the different intensities. All values are presented alongside the values grouped by main effect (group or training period) to show the differences more clearly. [Table 2 notes: Values are mean ± standard deviation. *, main effect of moment, different from immediately post-exercise (p < 0.05); #, interaction between group and moment, different from 60 min post (p < 0.05); £, different from post-training at the same moment of measurement (p < 0.05); &, interaction between group and moment of measurement, different from pre-exercise for the same group (p < 0.05).]

For IL-6 there was an interaction between group and moment of measurement [F(2,36) = 4.55; p = 0.017; partial η² = 0.201]: in the HIIT group, pre-exercise values were lower than immediately and 60 min post-exercise (p < 0.001; p = 0.036, respectively), while in the SST group pre-exercise values were lower than 60 min post-exercise (p < 0.001).
For IL-10, there was an interaction between training period and moment of measurement [F(2,24) = 5.55; p = 0.010; η² = 0.316]: in the pre-training period, pre-exercise values were lower than immediately (p = 0.003) and 60 min post-exercise (p = 0.002). In addition, in the post-training period, pre-exercise values were lower than immediately post-exercise (p < 0.001) and immediately post-exercise values were lower than 60 min post-exercise (p < 0.001). It was also observed that values immediately post-exercise were lower pre- than post-training (p = 0.002), with no differences at the other moments of measurement (pre- and 60 min post-exercise).
There was no effect for the IL-6/IL-10 ratio or NEFA, but for glucose there was a main effect of moment of measurement [F(2,36) = 4.68; p = 0.015; η² = 0.206], with pre-exercise values lower than immediately post-exercise (p = 0.017).
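For readers who want to reproduce this kind of analysis, a hedged Python sketch of one slice of it is shown below: a two-way mixed ANOVA (between: group; within: moment of measurement). Common Python routines do not fit the full three-way mixed design directly, so training period is omitted here, and the synthetic data only demonstrate the call; column names are illustrative.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Build a long-format toy data set: 20 subjects, two groups, three moments.
rng = np.random.default_rng(0)
rows = [{'subject': s, 'group': 'HIIT' if s < 10 else 'SST',
         'moment': m, 'il6': rng.normal(2.0, 0.5)}
        for s in range(20) for m in ('pre', 'post0', 'post60')]
df = pd.DataFrame(rows)

# Mixed ANOVA: group as between-subject factor, moment as within-subject.
aov = pg.mixed_anova(data=df, dv='il6', within='moment',
                     subject='subject', between='group')
print(aov.round(3))
```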
DISCUSSION
The aims of the present study were to analyze the acute and chronic effects of HIIT or SST on the metabolic profile and systemic cytokine parameters. The main findings of the present study were that: (i) HIIT exerted more impact on the IL-6 response in the acute exercise session independent of training period, since IL-6 increased in both the acute HIIT and SST sessions, but after the HIIE protocol this increase occurred immediately and 60 min post-exercise, while after an SSE session IL-6 increased only 60 min after the exercise session; (ii) TNF-α increased immediately post the acute exercise session, independent of intensity and training period; and (iii) IL-10 increased immediately after an acute exercise session independent of training period and intensity; however, this increase was smaller post-training compared with pre-training, showing an attenuation of this increase. To the best of our knowledge, this is the first study to examine acute and chronic metabolic and inflammatory responses to SST and HIIT in physically active young men.
In the present study, no effect of intensity was found on fasting metabolic and inflammatory parameters, either at baseline or after the training period. Another factor that can modulate the immunological and metabolic response to exercise is the pleiotropic cytokine IL-6. During contractile activity, muscle per se produces and releases IL-6, raising it several-fold in a duration-dependent manner (Pedersen, 2009). IL-6 showed a moment-of-measurement effect, with a peak immediately after an HIIE session and a delayed peak after an SSE session (1 h after acute exercise), indicating that HIIT had a greater effect on the acute IL-6 response. This finding can be related, at least in part, to a reduction in intramuscular glycogen availability, which favors activation of the pathway involved in IL-6 production (Pedersen, 2009). Studies in an animal model have observed that IL-6 increases in skeletal muscle, liver, and adipose tissue by 30-150%, accompanied by high AMPK activity in contraction-induced IL-6 production. Infusion of recombinant IL-6 (rhIL-6) in recreationally active males, at concentrations similar to those induced by exercise, improved glucose metabolism, elevating GLUT4 translocation and consequently increasing the availability of insulin-stimulated glucose (Kelly et al., 2004; Carey et al., 2006).
There was also an acute effect of moment of measurement for TNF-α, with higher values immediately post-exercise than pre-exercise, independent of group (intensity). Consistent with previous reports, a time effect in plasma TNF-α was observed in both groups, with higher values immediately after the exercise session. This increase suggests, at least in part, a lipolytic process favoring an increased availability of fatty acids in the blood circulation from adjacent tissues (Pedersen, 2009; Cabral-Santos et al., 2015) in order to maintain contractile activity, given the characteristics of the exercise. However, if prolonged, elevated TNF-α can be deleterious, even leading to insulin resistance by downregulating the tyrosine kinase activity of the insulin receptor (Pedersen, 2009). The mechanism counteracting this effect has been attributed to an even more pronounced increase in IL-10 concentration, which attenuates possible deleterious effects such as the activity of the transcription factor NF-κB on target genes encoding several pro-inflammatory cytokines, including TNF-α, IL-1α, and IL-1β (Pedersen, 2009; Cabral-Santos et al., 2015).
At lower/moderate intensities and prolonged durations of exercise (45-60 min per session at 65-75% of VO2max), the high aerobic energy demand depletes glucose and promotes a "metabolic shift" in fuel contributions (Jeppesen and Kiens, 2012). In our study, glucose concentration increased after the sessions in both groups, pre- and post-training, although no significant changes were observed in NEFA. Improvements in performance can be achieved through training at or near VO2max (Buchheit and Laursen, 2013), and there is an adaptation to training favoring aerobic metabolism through improved free fatty acid uptake and oxidation in skeletal muscle. However, 5 weeks of HIIT does not seem to be a sufficient stimulus to improve this parameter.
Finally, the increase in IL-10 concentration immediately after the acute exercise session pre-training (both HIIE and SSE) was attenuated in the acute exercise session post-training, demonstrating that short-term aerobic training (5 weeks), independent of intensity and type (moderate-intensity continuous or high-intensity intermittent), leads to adaptation in anti-inflammatory pathways. Leggate et al. (2012), in a study of overweight and obese sedentary young men (18-34 years) performing 2 weeks of HIIT on a cycle ergometer (4 min at ∼90% of maximal heart rate, with 2 min recovery, three times a week), showed that HIIT is able to modulate adipose tissue-derived IL-6 after only 2 weeks. Another study, conducted by Zwetsloot et al. (2014), evaluated the effects of 2 weeks of HIIT on a cycle ergometer (60 s of exercise at a load corresponding to VO2max, with 75 s of active recovery, three times a week) on the inflammatory response of eutrophic, physically active men. The authors found that an acute session of HIIT induced significant increases in IL-6, IL-8, IL-10, TNF-α, and MCP-1 (monocyte chemotactic protein-1) compared with rest; however, 2 weeks of HIIT did not change this inflammatory response.
IL-10 acts on different cell types and induces suppression of the inflammatory response; its biological action is mediated by its membrane receptor (IL-10R). IL-10 can thereby inhibit the production of several cytokines, such as IL-1β and TNF-α, which are transcriptionally controlled by the NF-κB pathway. This suggests a potential mechanism for the effect of exercise training in disease conditions (e.g., obesity, type 2 diabetes, physical inactivity, and others), whereby the production of TNF-α is modulated by increased IL-10 (Teixeira et al., 2016).
To the best of our knowledge, this is the first study to examine IL-10 responses to an acute exercise session pre- and post-training (SST and HIIT). The decrease in plasma IL-10 concentration appears to be down-regulated by training and may characterize a normal adaptation. It is noteworthy that Keller et al. (2005) demonstrated that after a 10-week training period, the down-regulation of IL-6 is partially counteracted by enhanced expression of IL-6R, suggesting a sensitization of skeletal muscle to IL-6 at rest. Similarly, there is sound evidence that training programs result in a chronic decrease in IL-10 levels; however, the mechanism involved needs to be determined by further studies.
Overall, a strength of our study is the exploration of the kinetics of the inflammatory and metabolic profile during and after acute exercise (pre-training), as well as the chronic analysis of the acute effect of exercise (post-training). On the other hand, a limitation was analyzing the inflammatory response in a eutrophic, physically active population, considering that this population is not affected by low-grade chronic inflammation, unlike sedentary individuals or those with diabetes, obesity, and other diseases. Another point is the duration of the intervention, which, as suggested by other studies, may not be sufficient to promote significant changes in the metabolic and inflammatory profile. Thus, further studies with longer-lasting interventions deserve investigation.
Taken together, the present findings suggest that a similar adaptation in cytokine release may occur independent of stimulus, with HIIT comparable to SST. The present data support that submitting physically active young men to an exercise program is associated with beneficial metabolic and inflammatory adaptations through exercise-induced cytokine release after 5 weeks.
AUTHOR CONTRIBUTIONS
Study design and organization of the manuscript were performed by FS, TdS, RS, DI, VP, CC, EC, BR, and PM. Data analysis, statistical analysis, and the first draft of the manuscript were performed by FS, DI, VP, BR, and PM. The manuscript review was performed by EC, PM, VP, DI, and FS. The final approval for publication was performed by FS.
"year": 2017,
"sha1": "24488ef105903a3fdad4473dd5dd439c2148e0e3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2017.00856/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "24488ef105903a3fdad4473dd5dd439c2148e0e3",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A multifaceted analysis of HIV-1 protease multidrug resistance phenotypes
Background: Great strides have been made in the effective treatment of HIV-1 with the development of second-generation protease inhibitors (PIs) that are effective against historically multi-PI-resistant HIV-1 variants. Nevertheless, mutation patterns that confer decreasing susceptibility to available PIs continue to arise within the population. Understanding the phenotypic and genotypic patterns responsible for multi-PI resistance is necessary for developing PIs that are active against clinically-relevant PI-resistant HIV-1 variants.

Results: In this work, we use globally optimal integer programming-based clustering techniques to elucidate multi-PI phenotypic resistance patterns using a data set of 398 HIV-1 protease sequences that have each been phenotyped for susceptibility toward the nine clinically-approved HIV-1 PIs. We validate the information content of the clusters by evaluating their ability to predict the level of decreased susceptibility to each of the available PIs using a cross-validation procedure. We demonstrate that, as a result of phenotypic cross resistance, the considered clinical HIV-1 protease isolates are confined to ~6% or less of the clinically-relevant phenotypic space. Clustering and feature selection methods are used to find representative sequences and mutations for major resistance phenotypes in order to elucidate their genotypic signatures. We show that phenotypic similarity does not imply genotypic similarity, and that different PI-resistance mutation patterns can give rise to HIV-1 isolates with similar phenotypic profiles.

Conclusion: Rather than characterizing HIV-1 susceptibility toward each PI individually, our study offers a unique perspective on the phenomenon of PI class resistance by uncovering major multidrug-resistant phenotypic patterns and their often diverse genotypic determinants, providing a methodology that can be applied to understand clinically-relevant phenotypic patterns and to aid in the design of novel inhibitors that target other rapidly evolving molecular targets as well.
Background
For over fifteen years, drug resistance has been a primary challenge in the effective treatment of HIV, and our understanding of resistance mechanisms has evolved along with the virus itself as new therapies have emerged [1][2][3][4][5][6]. Thanks to worldwide efforts to tackle HIV drug resistance, many successful treatment regimens have been developed, including combination therapies [7,8] such as the Highly Active Anti-Retroviral Therapy (HAART) regimens [9,10], but treatment options have been uncertain for patients who fail these regimens due to the accumulation of drug-resistant mutations [11]. More recently, in addition to targeting molecules other than HIV-1 reverse transcriptase (RT) and protease, second-generation RT and protease inhibitors (PIs) have been developed such that they remain potent against variants resistant to first-generation inhibitors. Specifically, tipranavir [12] and darunavir [13], the two PIs most recently approved for clinical use, have been shown to be potent against viruses harboring multidrug resistance mutations such as V82A and L90M, in the cases of both tipranavir and darunavir [13][14][15][16], and V82T or I84V in the case of darunavir [13,16]. However, even these drugs have been shown to lose potency in the presence of certain mutations or mutation patterns [14,[17][18][19][20]. In fact, the existence of HIV-1 variants showing resistance to all clinically-approved inhibitors highlights the issue of cross resistance, or the existence of mutation patterns arising from a certain therapeutic regimen that simultaneously cause resistance to other drugs as well. Cross resistance among HIV-1 PIs has been studied [21][22][23][24][25][26] and reviewed [1,4,[27][28][29] extensively for over a decade, with several key mutation patterns thought to confer cross resistance to the vast majority of PIs. Consequently, one strategy is to take advantage of the lack of cross resistance when a mutation confers resistance to one PI but maintains susceptibility to other PIs. For example, D30N and I50L are associated with resistance specifically to nelfinavir and atazanavir, respectively, but such mutations do not greatly reduce susceptibility (and I50L actually increases susceptibility) to other PIs [30][31][32][33]. Sequential or simultaneous administration of regimens that are each potent against variants toward which the other fails may be a potential strategy to prevent drug resistance and treatment failure [34]. In light of the combinatorial number of both potential treatment regimens and potential mutation patterns, it is becoming increasingly important to understand both the major mutation patterns conferring resistance on the genotypic level as well as the major phenotypic patterns of cross resistance - or lack thereof - of these mutation patterns toward the nine clinically-approved PIs.
Computational analyses have played a key role in increasing our understanding of the genotypic and phenotypic patterns of HIV drug resistance and our ability to predict drug response phenotype from genotype [35][36][37]. The large amount of publicly available data has greatly facilitated these analyses [35,38]. Several computational studies have analyzed new or existing data to identify mutations associated with one or more PI or RT drugs [39][40][41][42][43][44][45][46][47][48]. Some studies have presented longitudinal mutagenetic tree or mutation pathway models for the temporal appearances and contingencies of such mutations [49][50][51][52]. Others have uncovered pairs or clusters of correlated mutations associated with PI or RT therapy through direct enumeration, statistical or information-theory based methods, clustering, or a combination of techniques [39,[43][44][45][46]51,[53][54][55][56][57][58][59][60][61][62][63]. One particularly successful application of computational analysis is the accurate prediction of the drug resistance (phenotype) of a target variant - often measured as a fold-change in IC50 of a drug toward the mutant vs. the wild-type - given its amino acid sequence (genotype). Many approaches have been used to create prediction models, including regression-based methods [26,[64][65][66][67][68][69], decision trees [70], and other machine learning methods, including artificial neural networks, support vector machines, and others [67,[71][72][73][74]. Several studies have also comparatively evaluated or combined methods to improve accuracy [67,72,73,75]. Models have also been created for predicting drug resistance phenotype [76] and virological success or failure [77][78][79][80] resulting from combination therapies. In addition to these data-driven approaches, structure-based approaches for predicting drug response have also been developed, often in conjunction with the bioinformatics-based approaches [66,81,82]. Taken together, the large collection of available predictive methods still requires interpretation and comparison when making patient treatment decisions [83,84], but overall these methods have been valuable tools both for practical decision-making and for increasing scientific understanding.
The many computational studies of HIV genotypephenotype data therefore demonstrate the power of uncovering patterns in data, with each study providing a valuable perspective on important features of HIV drug resistance. However, the vast majority of studies have offered a perspective at the genotypic level first -that is, they look for patterns on the genotypic level that correlate with phenotypic responses, usually to one drug or drug regimen at a time, in turn. To our knowledge, a rigorous cluster-based analysis of genotype-phenotype data that first uncovers patterns within the complete phenotypic space and then determines representative genotypes giving rise to the multidrug response phenotypes has yet to be done. The goal of this study is therefore to provide this unique, simultaneous view into the existing phenotypic patterns amongst all the HIV-1 PIs, as such a perspective can provide novel insights into the major combinations of PIs for which cross resistance can occur.
In this work, we analyze phenotypic drug resistance patterns by considering experimental resistance data of 398 clinical isolates of HIV-1 protease measured against the nine clinically-approved HIV-1 protease inhibitors. To determine phenotypic drug resistance patterns toward all nine drugs, a constrained k-medoids clustering method implemented via integer programming was employed. Clusters were validated by quantifying their ability to predict a sequence's level of resistance toward one drug knowing the sequence's level of resistance toward other drugs. The selection of representative genotypic sequences from each cluster indicated mutations associated with common patterns of phenotypic resistance and can serve as a "panel" of mutants that collectively represent clinically important variants. Furthermore, our direct analysis of phenotypic space allowed us to determine that the virus often utilizes multiple genotypes to achieve similar phenotypic patterns of multidrug resistance. We also show that certain drugs show highly correlated antiviral activities, while other drugs -especially tipranavir -have unique responses. Finally, information theoretic approaches were employed to determine amino acid positions and identities within HIV-1 protease that are most informative for selection into a phenotypic cluster. Taken together, this work provides a simplified framework for understanding major drug resistance patterns toward clinically-approved HIV protease inhibitors and the mutation patterns that best characterize them.
Data set
We analyzed 398 HIV-1 isolates in the HIV Drug Resistance Database [38] (HIVDB) for which cell-based in vitro PI susceptibility testing had been performed by the PhenoSense (Monogram, South San Francisco, CA) assay [85]. Susceptibility was quantified by the Monogram-measured fold-change [85], defined as the ratio of the 50% inhibitory concentration (IC50) of the isolate to the IC50 of a wild-type control. Only those isolates for which susceptibility had been tested against all nine clinically-approved inhibitors were included. The nine inhibitors considered were amprenavir (APV), atazanavir (ATV), indinavir (IDV), lopinavir (LPV), nelfinavir (NFV), ritonavir (RTV), saquinavir (SQV), tipranavir (TPV), and darunavir (DRV). The data set size was limited by the availability of isolates tested for DRV susceptibility. Many clinical isolates contained mixtures at one or more amino acid positions. Due to the limited data, mixtures were not excluded from the data set. In this work, we will refer to clinical isolates as "sequences," though we recognize that some contain mixtures at certain positions.
To estimate the degree to which mutation frequencies in the genotype/phenotype (n = 398) data set are representative of true population frequencies, the frequencies of non-polymorphic treatment-selected mutations within non-WT sequences were compared between a larger genotype-only data set of 12,290 sequences [38] and the data set used here. Reasonable correlation (Spearman's ρ = 0.88) was found between the data sets (Fig. S1, Additional File 1).
Fold-change values were log-scaled such that, for a given drug, a constant factor of fold-change is represented by a constant numerical difference. Because the relationship between fold-change and clinical response is different for each drug, scaled values were standardized so that they represent predicted clinical responses, the phenotype of interest in this work. To do this, the logarithm base used for the log-scaling of each drug was set to either the Monogram biological cutoff, the geometric mean of the Monogram lower and upper clinical cutoffs, or the single clinical cutoff provided, depending on which type of cutoff was available for a particular drug (Table 1). Monogram biological cutoffs are defined as the fold-change values below which 99% of the WT sequences reside; fold-changes above this value therefore likely reflect decreased susceptibility. Monogram lower and upper clinical cutoffs are the fold-change values at which reduced clinical response and unlikely clinical response occur for a given drug, respectively. Ritonavir-boosted cutoff values were used when available. After log-scaling, scaled resistance values of 1 and 0 qualitatively signify decreased susceptibility and susceptibility equal to WT, respectively, for all drugs. To equalize the range of variation in the scaled resistances for each drug and to confine variation to a clinically meaningful range, we capped the maximal and minimal scaled resistances of all drugs to the least extreme values among the nine inhibitors - those of DRV (Table 1). The upper cap of the scaled values (1.83) corresponded to a raw fold-change value for DRV of 500, the upper-limit value used when the fold-change toward DRV was greater than the upper limit of the assay. Sequences with scaled resistances equal to the capped values are therefore considered either highly resistant (upper cap) or potentially hypersusceptible (lower cap). An interpretation of scaled resistance values is given in Table 2.
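As a concrete illustration of this scaling, the following minimal Python sketch log-scales a fold-change using a drug's cutoff as the logarithm base and then caps the result; the cutoff and lower-cap values shown are placeholders rather than the actual Table 1 entries.

```python
import numpy as np

def scale_resistance(fold_change, cutoff, lo_cap, hi_cap=1.83):
    """Log-scale a raw fold-change with the drug's cutoff as the log base,
    so 0 corresponds to WT-like susceptibility and 1 to the cutoff itself,
    then cap to the clinically meaningful (DRV-defined) range."""
    scaled = np.log(fold_change) / np.log(cutoff)  # log base `cutoff`
    return float(np.clip(scaled, lo_cap, hi_cap))

# A fold-change equal to the cutoff scales to exactly 1 (placeholder values):
print(scale_resistance(10.0, 10.0, lo_cap=-1.0))   # 1.0
print(scale_resistance(800.0, 10.0, lo_cap=-1.0))  # 2.9 before capping -> 1.83
```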
Clustering
Sequences were clustered based on their drug-resistance phenotypes, quantified by scaled resistance values. A globally-optimal constrained k-medoids clustering approach was implemented via a linear integer program similar to other variations of integer and mixed-integer programming-based k-means and k-medoids clustering formulations [86][87][88][89]. The k-medoids approach was chosen after exploration of multiple clustering methods (k-means, hierarchical, and a method based on a tight clustering approach [90]), as it was deterministic, provably optimal, and allowed for the easy implementation of hard constraints, which we felt were crucial here for generating clusters that were phenotypically similar across all drugs. The clustering method was as follows: first, each sequence was assigned a point in a 9-dimensional space whose coordinates are the scaled resistances toward the nine inhibitors. From these points, a distance matrix was generated, in which element d_ij is the Euclidean 2-norm distance between the i-th and j-th sequences. The goal was to select k cluster centers (medoids) from within the data set and assign each point in the data set to one of these k medoids such that the sum of the distances from points to their assigned medoids was minimized.
Constraints were placed on this optimization to guarantee phenotypic similarity within a cluster, as the goal of this work is for the clusters to represent major phenotypic patterns. First, a hard constraint was set to bound the distance between any cluster member and its medoid to be less than or equal to a specified value, C. Secondly, a hard constraint was set to cap the maximum infinity norm of the distance between any cluster member and its medoid to a specified value, C∞. Such a constraint prohibits grouping together two sequences that are highly similar toward 8 drugs but differ qualitatively in their level of resistance toward only one drug - an undesirable outcome if we wish for our clusters to highlight major cross resistance patterns.
k, the number of clusters, is determined by feasibility; it is the minimum number of clusters for which the constraints are satisfied. In this work we use C = 0.95 and C∞ = 0.58; the value of C = 0.95 occurs roughly at the "elbow" [91] or "kink" [92] of a plot of the minimum k needed as a function of tightness (C and C∞) (Fig. S2, Additional File 1), suggesting that it allows a reasonable balance between maintaining both a low number of clusters and adequately tight clusters. A C∞ of 0.58 guarantees that cluster members' scaled resistances toward any given drug cannot vary by more than 2C∞ = 1.16; there will not be a pair of cluster members in which one sequence shows no resistance to a given drug while another shows high levels of resistance (see Table 2). Higher values of C∞ would make clusters too diffuse along individual dimensions, preventing their interpretation as clinically-relevant phenotypic patterns. Lower values were found to be too restrictive and generated additional clusters with redundant patterns (data not shown). To check the robustness of the clustering as a function of these parameters, C and C∞ were each varied in turn by up to +/-0.05 units in increments of 0.025. Qualitative phenotypic patterns remained very similar, and pairs of sequences that were clustered together in the original clustering remained together an average of 71% of the time as these parameters were varied. Figure S3 (Additional File 1) is a plot of the number of clusters (k) vs. data set size, using random subsets of the data. As our data set is currently not large enough to show robust convergence (k increases with increasing data set size), the quantitative results that are affected by data set size are to be considered preliminary; more data could allow for more robust convergence in future studies and would increase confidence in the quantitative conclusions.
The integer programming formulation used is shown in Supplementary Methods (Additional File 1). All integer programs in this work were implemented using the GAMS interface (GAMS Development Corporation, Washington, D.C.) and were solved using CPLEX 11.0.0 (IBM ILOG, Armonk, NY).
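For readers who want to experiment, the following is a minimal sketch of such a constrained k-medoids integer program using the open-source PuLP/CBC toolchain instead of GAMS/CPLEX; it illustrates the structure of the formulation and is not the authors' code.

```python
# A minimal sketch of the constrained k-medoids integer program; variable
# names and the PuLP/CBC solver choice are illustrative assumptions.
import numpy as np
import pulp

def constrained_kmedoids(X, k, C=0.95, C_inf=0.58):
    """Choose k medoids from the rows of X and assign every point to one,
    minimizing total 2-norm distance, with hard caps C (2-norm) and
    C_inf (infinity norm) on every member-medoid distance."""
    n = len(X)
    d2 = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    dinf = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)
    feasible = (d2 <= C) & (dinf <= C_inf)      # prune disallowed assignments

    prob = pulp.LpProblem("kmedoids", pulp.LpMinimize)
    y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n)]   # medoid flags
    x = {(i, j): pulp.LpVariable(f"x{i}_{j}", cat="Binary")
         for i in range(n) for j in range(n) if feasible[i, j]}

    prob += pulp.lpSum(d2[i, j] * x[i, j] for (i, j) in x)           # objective
    for i in range(n):                                               # assign each point once
        prob += pulp.lpSum(x[i, j] for j in range(n) if (i, j) in x) == 1
    for (i, j) in x:                                                 # only to open medoids
        prob += x[i, j] <= y[j]
    prob += pulp.lpSum(y) == k                                       # exactly k medoids

    if pulp.LpStatus[prob.solve(pulp.PULP_CBC_CMD(msg=False))] != "Optimal":
        return None                                                  # infeasible at this k
    return [j for j in range(n) if y[j].value() > 0.5]

# The minimum feasible k is then found by increasing k until a solution exists.
```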
Validation
The clustering was validated by its effectiveness (relative to controls) in predicting the level of drug resistance of a sequence toward one drug based on the sequence's levels of drug resistance toward the other drugs, using the following n-fold cross-validation procedure [92]:

1. Remove each sequence (in turn) from the data set; label it sequence "A".
2. Cluster the remaining sequences using the above method.
3. Choose one of the nine drugs and eliminate its phenotypic data for sequence "A".
4. Assign sequence "A" to the cluster whose centroid is closest, based on the 8-dimensional distance (i.e., removing the eliminated drug's dimension).
5. Predict the level of drug resistance of sequence "A" toward the eliminated drug to equal the cluster centroid's scaled resistance value for the eliminated drug. Based on this value, classify sequence "A" with a resistance score from 0-4 (Table 2).

For each drug, the total RMS error and the percent correctly classified after leaving out each sequence in turn were compared to two controls:

Control 1 ("Random Control"): to predict the resistance of a sequence toward a drug, randomly choose a value from the distribution of scaled resistances in the data set toward the particular drug, and classify it using the corresponding resistance score. This control assumes that the levels of resistance between drugs are not correlated.

Control 2 ("Average Control"): to predict the resistance of a sequence toward a given drug, simply use the mean of sequence "A's" scaled resistances to the other eight drugs, and classify with the corresponding resistance score. This control assumes that resistances toward the nine drugs are highly correlated.
Genotypic Analyses
In the absence of amino acid mixtures at positions within isolates, the genotypic distance between any two sequences was defined simply as the number of positions at which their amino acid sequence differed. For some analyses, all 99 protease positions were considered. To reduce noise due to polymorphic positions in certain analyses, only 21 positions that have been associated with resistance or drug treatment by previous statistical learning or analysis methods [26,39,48] were considered, unless otherwise noted: 10, 24, 30, 32, 33, 43, 46, 47, 48, 50, 53, 54, 71, 73, 74, 76, 82, 83, 84, 88, and 90. We note that there may be unavoidable arbitrariness in the selection of such a set without considerable initial genotypic-phenotypic analysis (which was exactly what we sought to avoid in this study), and in the course of our research we tried multiple sets, allowing us to check for robustness.
To account for mixtures in isolates, the contribution toward the genotypic difference between two sequences due to a position, d_m, was defined in the general case as d_m = 1 - c/max(s), where c is the number of amino acids that the isolates have in common at that position, and max(s) is the number of amino acids in the mixture with the greater number of amino acids at that position. As an example, if one isolate contained a mixture of leucine and methionine at a position and another contained only leucine, then d_m for this position would be (1 - (1/2)) = 1/2.
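This per-position distance can be written compactly by representing each position as a set of amino acids; the small helper below (hypothetical, not the authors' implementation) follows the reconstructed formula and reproduces the leucine/methionine example.

```python
# d_m = 1 - c / max(s), with each position a set of amino acids
# (a singleton set when there is no mixture).
def position_distance(a: set, b: set) -> float:
    common = len(a & b)                   # amino acids shared at this position
    larger = max(len(a), len(b))          # size of the larger mixture
    return 1.0 - common / larger

def genotypic_distance(seq1, seq2, positions):
    """seq1, seq2: dicts mapping position -> set of amino acids."""
    return sum(position_distance(seq1[p], seq2[p]) for p in positions)

# The example from the text: {L, M} vs. {L} gives 1 - 1/2 = 0.5
assert position_distance({"L", "M"}, {"L"}) == 0.5
```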
Intracluster genotypic or phenotypic variability was estimated as the average of all the pairwise genotypic or phenotypic distances. A bootstrapping procedure was used to generate p-values to assess statistical significance of either distance for selected clusters. Random clusters of a size equal to the considered cluster were selected with replacement from the unclustered data, and the distance metrics were calculated. This procedure was repeated 10,000 times to generate distributions for both genotype and phenotype distances, from which p-values were calculated. Bootstrap studentized statistics were obtained by dividing the difference between a value and the bootstrapped distribution mean by the standard deviation of the distribution.
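A minimal sketch of this resampling, assuming a precomputed pairwise distance matrix, is given below; function and variable names are illustrative.

```python
# Bootstrap assessment of intra-cluster distances: draw random "clusters" of
# equal size with replacement, compare their average pairwise distance to the
# observed one, and report tail p-values plus a studentized statistic.
import numpy as np

def pairwise_mean(dist, idx):
    """Average pairwise distance over all pairs drawn from idx."""
    sub = dist[np.ix_(idx, idx)]
    return sub[np.triu_indices(len(idx), k=1)].mean()

def bootstrap_cluster(dist, cluster_idx, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    obs = pairwise_mean(dist, list(cluster_idx))
    n, size = dist.shape[0], len(cluster_idx)
    draws = np.array([pairwise_mean(dist, rng.choice(n, size, replace=True))
                      for _ in range(n_boot)])
    p_low = (draws <= obs).mean()               # unusually tight cluster
    p_high = (draws >= obs).mean()              # unusually diverse cluster
    z = (obs - draws.mean()) / draws.std()      # bootstrap studentized statistic
    return p_low, p_high, z
```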
From each cluster, representative sequences were selected. For genotypically diverse clusters, we wished to select multiple representative sequences from each cluster to highlight genotypic diversity. To that end, constrained k-medoids optimizations were run on each cluster using integer programming; the resulting medoids became the representative sequences. For each phenotypic cluster, the minimum value of k was determined such that all sequences within the cluster would be within a genotypic distance t_i of at least one medoid. We used a value of t_i = 9 when possible, as it produced one representative sequence for all but the most diverse clusters (with the other exceptions noted below), allowing for easy interpretability. Additionally, at this k, the sum of the distances between each sequence and its assigned medoid was minimized. Sequences containing mixtures at any of the 21 positions listed above were excluded from being representative, as were sequences with any of the 99 amino acid positions undefined (only 2 within the data set). With this constraint, it becomes possible for phenotypic clusters (other than single-membered ones containing mixtures at relevant positions) not to generate any representative sequences at t_i = 9. To account for this, t_i was increased to 10 for clusters 3 and 19 and to 10.5 for cluster 10. The integer-programming formulation used here is shown in Supplementary Methods (Additional File 1).
Sets of sequence positions or amino acid residue identities most informative of overall cluster assignment, or of membership in an individual cluster, were identified according to an incremental mutual information (MI)-based method described previously (MIST) [93]. Briefly, the method approximates high-order joint entropies to determine an optimal small subset of features (e.g., residue positions) that collectively have the highest MI with a given output (e.g., phenotypic cluster). These approximated MI values have also been shown to correlate with classification error and with exact MI values in analytically solvable systems. First, the MI between variables of interest was computed, using frequencies to estimate probabilities. For each MI, the bias in the value was estimated by computing the MI of the pair after randomizing the ordering of the sequence data for each variable 100 times. Variables whose MI with the outputs exceeded their maximum shuffled MI were considered statistically significant and included in subsequent steps; remaining positions were omitted. Sequence positions or binary mutation variables were then selected incrementally to maximize the joint MI (as estimated by MIST) between the set of all chosen variables and either the cluster assignment or membership in a specific cluster. Mixtures were not included in the distributions. Features were added incrementally until all positions or mutations were included, yielding a full ranking.
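The shuffle test and greedy loop can be sketched as follows; the exact joint MI used below (tractable only for small feature sets) is a stand-in for the MIST entropy approximation described in [93], and the scikit-learn MI estimator is an assumed substitute for the authors' own calculation.

```python
# A hedged sketch of incremental MI-based feature selection: shuffle-test each
# feature's MI with the cluster labels, then greedily add the feature that
# most increases the joint MI with the labels.
import numpy as np
from sklearn.metrics import mutual_info_score

def significant_features(F, y, n_shuffle=100, seed=0):
    """F: (n_samples, n_features) categorical matrix; y: cluster labels."""
    rng = np.random.default_rng(seed)
    keep = []
    for j in range(F.shape[1]):
        mi = mutual_info_score(F[:, j], y)
        null = max(mutual_info_score(rng.permutation(F[:, j]), y)
                   for _ in range(n_shuffle))
        if mi > null:                      # exceeds the maximum shuffled MI
            keep.append(j)
    return keep

def joint_mi(F_sub, y):
    """Exact joint MI via joint-label encoding (stand-in for MIST)."""
    joint = [tuple(row) for row in F_sub]
    codes = {t: i for i, t in enumerate(set(joint))}
    return mutual_info_score([codes[t] for t in joint], y)

def greedy_rank(F, y, candidates):
    chosen = []
    while candidates:
        best = max(candidates, key=lambda j: joint_mi(F[:, chosen + [j]], y))
        chosen.append(best)
        candidates = [c for c in candidates if c != best]
    return chosen
```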
Cluster Analysis Reveals Specific Phenotypic Resistance Patterns Among Clinical Isolates
Globally-optimal k-medoids clustering was used to find groups of sequences with similar multidrug phenotypes, using the tightness constraints C and C ∞ mentioned in the Methods to enforce thresholds of phenotypic similarity. The clustering yielded 36 multi-membered clusters, along with 14 outliers. Figure 1 shows the resulting clusters; each cluster is represented as a row, with each of the colored boxes within the row representing the resistance score (Table 2) toward the corresponding drug of the cluster's centroid (i.e., average phenotype), according to the legend. At right, representative sequences are shown for each cluster, with non-WT amino acid identities shown at selected positions. A listing of mutations at all positions for each representative sequence is provided as Supplementary Information (Table S1, Additional File 1). For two clusters (5 and 9), more than one representative sequence was needed due to the genotypic diversity.
Generally, the largest clusters were those in which (a) there was no resistance (or very mild resistance) to any drug, (b) there was high resistance to all drugs, (c) there was high resistance toward all drugs except DRV, to which there was moderate resistance, (d) there was high resistance toward all drugs except DRV and TPV, (e) there was resistance toward only NFV and RTV, and (f) there was high resistance to APV, ATV, NFV, RTV, and SQV.
The clusters demonstrate that there is often cross resistance of sequences toward many drugs. Generally, sequences are most commonly resistant to RTV and NFV, followed by ATV and SQV, then APV, IND, and LPV, and finally TPV, and DRV. In general, resistance to DRV implies resistance to nearly all other drugs, with a few exceptions: Three clusters showed moderate to high levels of resistance against all drugs except TPV (clusters 5, 8, and 12), and two clusters showed moderate to high levels of resistance against all drugs except SQV (clusters 11 and 15). In both cases, the representative sequences of the clusters each had at least one mutation that has been associated with hypersusceptibility toward the particular drug in a previous study in which mutations were the independent variables and fold-change was the dependent variable [26]. These mutations include L10F, G48V, I50V, I54L, and L76V in the case of the clusters with unique susceptibility to TPV and I47A in the case of the clusters with unique susceptibility to SQV.
One may ask whether grouping 398 sequences into 36 phenotypic clusters and 14 outliers shows that HIV is exploring a large or small part of the available phenotypic space. To address this question, we repeatedly generated sets of 398 random points within the same nine-dimensional scaled space as our data set and clustered them using the same constraints applied to the true data set. The average minimum number of clusters needed over 300 trials was 375, with the smallest number of clusters needed being 357. Clearly, the fact that only 50 clusters (including outliers) were needed to partition the actual data within the constraints demonstrates that HIV protease is exploring a very small portion of the possible phenotypic space. In fact, due to the constraints used in the clustering, the volume of 9-dimensional phenotypic space occupied by each cluster must be less than the smaller of either the volume of a hypersphere of radius C or a hypercube of side length 2C∞. Using our constraint values, the smaller of these is the former, with a value of ~2.1 volume units. The volume of clinically-relevant phenotypic space can be calculated from the maximum and minimum scaled values in Table 1 to be 1800 volume units. Therefore, only (2.1*50)/1800 ≈ 6% of phenotypic space, at best, has been explored by the considered isolates, compared to (2.1*375)/1800 ≈ 44% for a random data set of equal size.
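This volume arithmetic is easy to verify; the short sketch below recomputes the ball/cube bound and the explored fractions (the 1800-unit total volume is taken from the text, as Table 1 itself is not reproduced here).

```python
# A quick numerical check of the volume bound: the 9-D ball of radius C = 0.95
# vs. the cube of side 2 * C_inf = 1.16.
from math import pi, gamma

C, C_inf, dims = 0.95, 0.58, 9
ball = pi ** (dims / 2) * C ** dims / gamma(dims / 2 + 1)   # ~2.08 units
cube = (2 * C_inf) ** dims                                  # ~3.80 units
bound, total = min(ball, cube), 1800.0

print(f"explored: {50 * bound / total:.1%}")    # ~6% (50 clusters incl. outliers)
print(f"random:   {375 * bound / total:.1%}")   # ~43% (375 clusters on average)
```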
If a drug is removed from the data set, the minimal number of clusters needed to represent the phenotypic diversity must be less than or equal to the minimal number needed with that drug included. One way to measure the additional phenotypic diversity provided by each drug is to remove each drug in turn and re-cluster using the k-medoids approach under the same distance constraints. Drugs that, upon removal, greatly reduce the number of required clusters have phenotypes that vary somewhat independently from the other drugs. Drugs that, upon removal, do not greatly reduce the number of required clusters have phenotypes that vary predictably with (though not necessarily in a correlated manner with) the remaining drugs. When this analysis was carried out, it was found that removal of TPV reduced the number of needed clusters the most (from 50 to 31), suggesting that TPV's response toward sequences varies somewhat independently from the other drugs. In other words, TPV might show varied, graded responses toward certain groups of sequences toward which other drugs show relatively constant responses. Removal of ATV, SQV, or APV also reduced the number of needed clusters by over 10 (from 50 to 37, 38, and 38, respectively). Removal of LPV, DRV, NFV, RTV, or IDV reduced the number of required clusters the least (to 44, 44, 43, 43, and 41, respectively), suggesting that their scaled resistances either vary predictably with those of the other drugs or do not vary appreciably in general.

[Figure 1: Optimal phenotypic clustering of the clinical data set. The optimal set of clusters obtained using constrained k-medoids clustering with integer programming; 36 multi-membered clusters and 14 single-member "clusters", or outliers, were obtained. Each row represents one cluster. The second column indicates the cluster size. The next 9 columns represent the cluster centroid's phenotypic drug resistance scores, colored according to the legend. The columns at right indicate mutations, at selected positions, in the sequence selected to represent the cluster. Because isolates with mixtures at any of the specified positions were not allowed to represent a cluster, certain single-membered clusters do not have a representative "sequence." The representative sequences chosen for clusters 29, 31, 34, and 36 show no mutations at the positions listed here, but they have substitutions at other positions (Table S1, Additional File 1).]
Phenotypic clustering allows for potentially improved prediction of unknown drug phenotypes given phenotypic information for other drugs
Our results indicate that a small portion of the full phenotypic space has been explored by the virus, assuming a representative data set; consequently, one may be able to successfully predict resistance to a given inhibitor given resistance data toward other inhibitors, without knowing any genotypic information. To test this hypothesis, we used a cross-validation procedure in which each sequence from the data set was removed in turn and the sequence's resistance toward each drug was estimated based on a clustering assignment using the other eight resistance phenotypes (see Methods). Pairs of sequences that were clustered together in the original clustering remained together an average of 99.3% of the time across all n runs of the validation, not counting runs in which a member of the pair was excluded in turn, demonstrating the stability of the clustering during the cross-validation procedure. The results of the cluster-based prediction are summarized in Table 3.
Two controls were used for comparison, as described in the Methods. Control 1 ("Random"), which randomly reported a value from the distribution of scaled resistances in the data set toward the particular drug, was able to correctly categorize resistance 21%-36% of the time, depending on the drug. The RMSEs of the predicted scaled resistance values were often over a whole unit, meaning that this control would often predict no resistance when there was in fact resistance, and vice versa. NFV and RTV were classified correctly most often; the clustering suggests that this may be because they were more likely to exhibit either no resistance or complete resistance, providing a less graded distribution overall from which to sample. Control 2 ("Average"), which guessed the "unknown" phenotype to be the average of the other 8 known phenotypes for the isolate, performed much better overall than Control 1, categorizing resistance correctly for more than half of the sequences for ATV, APV, IND, LPV, and SQV. Its strong performance is additional evidence for both the high correlation between drug responses and the high level of cross resistance. Performance was worse for (1) NFV and RTV, which are often inactive toward viruses against which other drugs are effective, as Figure 1 indicates; (2) DRV, which, according to Figure 1, often remains effective toward viruses resistant to other drugs; and (3) TPV, which, as shown above, has less phenotypic similarity to the other drugs. Compared to either control, the cluster-based prediction correctly classified a higher percentage of viruses for every drug, although the improvement over Control 2 was modest in some cases, with the RMSEs being marginally higher in some cases as well, suggesting that when the cluster-based classification was incorrect, its predictions deviated substantially from the true values. The improvement in classification was largest for NFV, RTV, and DRV. Classification rates overall were well over 50% correct, with RMS errors being fairly small (generally <= 0.5 units). The notable exception is TPV, again supporting TPV's uniqueness.

[Table 3: Percent of viruses whose resistance score toward each drug was correctly classified ("% correct"), as well as the RMS error (in scaled resistance units) over all sequences of the phenotypic difference between predicted and actual phenotype ("RMSE"), using the two controls described in the text ("CTL1 (Random)" and "CTL2 (Average)") and the cluster-based prediction. The top panel presents results using all 398 sequences, and the bottom panel shows results after removing the two clusters showing little or no phenotypic resistance to any drug.]
The relatively large number of sequences susceptible to all drugs in our data set might bias the prediction accuracy of certain methods to be higher than what would be expected from a data set that contained a more even distribution of all multidrug phenotypes. To control for this, we redid the above analysis after having left out the sequences corresponding to the two clusters shown in Figure 1 that show no or very little resistance to all nine drugs (clusters 36 and 34, with 77 and 71 members, respectively). Not surprisingly, Control 1 performs much better with RTV and NFV, as now, nearly all sequences in the data set are resistant to either drug. Also unsurprisingly, Control 2 performs worse because the two clusters that were removed contained sequences whose responses to all drugs were highly correlated.
The cluster-based classifier still has the highest classification accuracy, but again, the RMSE values were sometimes greater than those for Control 2. Nevertheless, these results show that an understanding of major phenotypic resistance patterns can allow for reasonable prediction of a sequence's resistance toward one drug given resistance information toward other drugs, and the strong performance of the controls under certain circumstances further highlights the underlying structure in the resistance patterns.
The accumulation of HIV protease mutations results in a "path" in phenotypic space

Principal component analysis (PCA) was used to project the nine-dimensional, columnwise-centered drug-resistance phenotypes of all sequences onto the two dimensions along which there is the most variation. Figure 2 is a plot of the sequences in this two-dimensional space, colored by the total number of amino acid differences from consensus-B wild-type protease (considering all 99 amino acid positions). The first two principal components are able to capture approximately 90% of the variation in the data, again suggesting that there are large correlations between drug responses toward the sequences. As indicated in Table 4, the first principal component indicates resistance toward all drugs (i.e., complete cross resistance), with slightly less resistance toward TPV and DRV, relative to their means. The second principal component indicates resistance toward NFV and RTV, less resistance to ATV, SQV, and IDV, and low resistance or even increased susceptibility toward APV, LPV, DRV, and especially TPV, relative to each drug's mean resistance value.

[Figure 2: Projection of the phenotypic data onto its first and second principal components. Points are colored by the total number of amino acid substitutions relative to the consensus-B WT sequence, according to the scale at right; a mixture at a position (including those containing the WT amino acid) is counted as one substitution. The phenotypes and genotypes of selected sequences are indicated. The 9-digit shorthand phenotypic code used to describe the sequences indicates the resistance score (Table 2) to each of the 9 PIs in the order shown in Fig. 1: RTV, NFV, ATV, APV, IDV, LPV, SQV, TPV, DRV. All "outlying" sequences are fully listed in Supplementary Information (Fig. S4, Additional File 1).]
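A minimal, generic sketch of the projection used for Figure 2 (column-wise centering followed by SVD) is given below; it illustrates standard PCA and is not the authors' code.

```python
# Column-centered PCA via SVD, keeping the first two principal components.
import numpy as np

def pca_project(X, n_components=2):
    Xc = X - X.mean(axis=0)                       # column-wise centering
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / (S**2).sum()               # fraction of variance per PC
    return Xc @ Vt[:n_components].T, explained[:n_components]

# For the 398 x 9 phenotype matrix, the text reports that the first two
# components capture roughly 90% of the variance.
```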
Interestingly, the points in Figure 2 form a "path" through phenotypic space. Such "horseshoe"-shaped paths are often indicative of a non-linear ordering or underlying gradient in the data [94]. Here, the path clearly tracks the genotypic mutations accrued by the sequences. Sequences with few mutations appear to have resistance toward NFV, RTV, ATV, SQV, and IDV, but little resistance to APV, LPV, DRV, or TPV (i.e., the phenotypic path "veers upward" in the principal component space), while sequences with many mutations are resistant to all drugs (far right in the principal component space). Three sequences along the path are selected in Figure 2 and their corresponding scaled phenotypes and genotypes are listed to the right of the plot. The point selected on the intermediate portion of the path represents a sequence that includes the mutations M46I and L90M, which have been shown to be highly correlated [59] and to be associated with resistance to NFV, IDV, and RTV, and other drugs to a lesser extent [56]. The point selected at the right end of the path represents a sequence that shows at least moderate resistance to all drugs, and includes the mutations V82T, I84V associated with resistance to TPV [18], and L33F, I47V, and I54M, associated with resistance to both TPV [18] and DRV [20], in addition to containing mutations that harbor resistance toward first-generation drugs.
As a whole, Figure 2 supports the historical "path" of drug development, in that it is relatively easy to become resistant to first-generation drugs with relatively few mutations (RTV, NFV, SQV, etc.), but many accumulated mutations appear to be necessary to confer resistance to the newer drugs, such as darunavir [16,19]. Whether or not this pathway is due to history and treatment regimens or whether it is a fundamental consequence of the structural features of the drugs and the viable evolutionary space of HIV-1 protease requires further study.
A handful of sequences lie "off" the pathway. Three such sequences are indicated in Figure 2, and several more are listed in Fig. S4 (Additional File 1). The top and bottom sequences indicated in Figure 2 are both uniquely susceptible to SQV and have the mutation V82L, which has been associated with increased SQV susceptibility [26]. The middle sequence shows low levels of resistance across all nine drugs. All three of these sequences fall off the pathway because of their non-negligible levels of resistance toward one or more second-generation drugs while maintaining susceptibility to one or more first-generation drugs. Additional outliers are shown in the Supplementary Information (Additional File 1).

Phenotypic Similarity Does Not Imply Genotypic Similarity

Figure 3a is a plot of scaled phenotypic distance vs. genotypic distance for all (398*397)/2 = 79,003 sequence pairs, using all amino acid positions to compute genotypic distances. Not surprisingly, sequences that are genotypically similar are phenotypically similar; there are no points in the upper-left corner of the plot. However, there are many sequences that are very different genotypically and yet have similar scaled resistance phenotypes (there are many points in the lower-right corner), suggesting that HIV-1 may arrive at the same multidrug resistance phenotype via rather varied genotypes. Figure 3b is again a plot of all pairwise phenotypic distances vs. their genotypic distances, except now, only the resistance-associated positions specified in the Methods have been included in calculating genotypic distance. While the upper-left corner of this plot is still sparse, this plot indicates that polymorphic or accessory positions not considered in the genotypic distance may still affect resistance profiles in the absence of mutations commonly associated with drug resistance (i.e., there are pairs of sequences with a genotypic distance of zero in Figure 3b but a moderate phenotypic distance). Again, there are still sequences that are genotypically very different yet show similar resistance phenotypes.
Mutations from two sample pairs of sequences from the lower-right quadrant of each figure are shown. In Figure 3b, only the mutations contributing to the genotypic distance are shown. As can be seen, very different genotypes can generate similar resistance patterns. For example, the sequences shown in the lower box at the right of Figure 3a show high levels of resistance toward all drugs; each sequence has a subset of documented drug resistance mutations, such as V32I, L33F, M46I, I47V, F53L, G73S, V82A, and L90M in the case of the first sequence and M46L, I54V, V82F, and I84V in the case of the second sequence, but the sequences have few mutations in common (K20R, E35D, M36I, L63P, A71V, and I93L), most of which are considered highly polymorphic accessory mutations [95]. The variety of mutations through which the protease is able to achieve similar multidrug clinical phenotypes demonstrates that phenotypic similarity does not imply genotypic similarity. Recall here that two sequences that are both sufficiently above the clinical fold-change cutoff for resistance to a given drug are both considered phenotypically identical toward that drug, due to the capping of scaled resistance values above a threshold. Therefore, while they are phenotypically similar from a clinical perspective, they may possess quite different (but both large enough to be considered resistant) raw fold-change values toward a given drug.

[Table 4: The nine principal components in scaled phenotypic space.]

[Figure 3 caption fragment: resistance scores (Table 2) are given for the PIs in the order used in Fig. 1: RTV, NFV, ATV, APV, IDV, LPV, SQV, TPV, DRV.]

Another way to understand the genotypic variation for a given phenotypic pattern is to analyze the genotypic diversity within each phenotypic cluster. For each individual phenotypic cluster obtained in the above analysis, we used a k-medoids approach to identify representative genotypes for that cluster. Through constraints, a more genotypically diverse phenotypic cluster would require more sequences to represent it. Figure 1 shows the representative sequences chosen for all phenotypic clusters. As can be seen, two clusters (5 and 9), even though they are of similar sizes to others, require multiple representative genotypic sequences. Multiple representative sequences for a cluster suggest multiple genotypic paths to the phenotype.
To quantify phenotypic and genotypic diversity within clusters, resampling was carried out within each cluster as described in the Methods. Table 5 summarizes the results for all clusters with more than 6 members. The p-values for intracluster phenotypic distance ("P Pheno") show significantly low variation, but the hard constraints in the clustering enforced phenotypic similarity, so this low variation is by design. It is also not surprising that the genotypes of non-resistant clusters are statistically similar (bootstrap studentized statistics for clusters 34 and 36 are -11.3 and -13.3), as none of these sequences would be expected to bear a resistance-associated mutation, so they should all effectively be "wild-type". However, among multidrug-resistant phenotypes, genotypic variation between members within a cluster is either no different from that between any two random sequences in the data set (insignificant "P_Geno" values) or, in the cases of clusters 5 and 7, greater than would be expected by random sampling (P_Geno < 0.01; bootstrap studentized statistics of 2.26 and 2.16). Furthermore, on average, pairs of sequences from the same cluster generally share less than 50% of their mutations (using the resistance-associated positions listed in the Methods); the one exception is the cluster containing sequences resistant to all drugs (cluster 1), whose members share 54% of their mutations on average. Indeed, the average intracluster genotypic distance for this cluster is in some cases less than that for clusters containing fewer mutations on average, suggesting that a higher number of mutations may not mean greater genotypic variation, and also indicating that the most highly resistant sequences might need to have some "key" mutations in common. When removing from the data set one sequence from each of 28 pairs of sequences taken from the same patient at two different time points and reclustering, the most highly resistant cluster still had >50% shared mutations on average and a lower intracluster genotypic distance than some other resistant clusters, although it now required two representative sequences, suggesting that some - but not all - of this similarity may be due to including data at different time points from the same patient. This idea is further addressed in the Discussion. Nevertheless, while a larger data set would allow for a more rigorous control for the number of mutations within a cluster when computing p-values and for the exclusion of data from the same patients at multiple time points, thus allowing for fairer comparisons, this simple analysis again suggests that, in general, phenotypic similarity does not imply genotypic similarity, and certain multidrug phenotypes may be achieved by more varied genotypes than others.

[Table 5 footnote: "Phenotype" is the nine-digit shorthand describing the binned level of resistance of the cluster centroid toward each of the nine drugs (see Fig. 1 for drug order). "Intra Pheno" is the average intra-cluster phenotypic distance (in scaled resistance units). "P Pheno" gives p-values for intra-cluster phenotypic distance; a p-value of 0 indicates that a more extreme distance was not sampled in 10,000 trials. Analogous headings are shown for genotypic distance; genotypic distance was defined using the list of non-polymorphic positions in the Methods. "Avg Muts" is the average number of mutations at non-polymorphic positions for sequences within the cluster. "Shared Muts" is the average number of shared mutations between all pairs within a cluster. "Shan. Ent." is the computed Shannon entropy (in bits) for the cluster, summing the entropies at each non-polymorphic position.]
Feature selection uncovers important positions and mutations for cluster assignment
Finally, we sought to rigorously determine the sets of amino acid positions and mutations that were most informative of membership in the phenotypic clusters. Figure 4a shows the results of greedily selecting one position at a time such that at each step (going left to right), the (approximate) mutual information (MI) between the chosen set of features and the cluster assignment is maximized. Only those positions that had significant MI with the output are included. The red bars indicate the MI between an individual position and the cluster assignment, with the yellow star indicating the threshold for statistical significance (p = 0.01). The blue bars indicate the joint MI between the subset selected thus far and the cluster assignment. Note that positions are not strictly selected in decreasing order of individual MI: because mutations at certain positions may be highly coupled with positions already in the feature set, less individually informative positions may contribute to a more informative set of positions. This technique therefore chooses highly non-redundant features that are still informative of the output. Finally, the black bar shows the total information content of the output, the cluster assignments. Figure 4a indicates that several positions have significant MI with the final cluster assignment, especially positions 54, 90, 84, 46, 33, 20, 82, 32, 88, and 71. This is consistent with findings that these positions are known to mutate in the presence of drug resistance, either as primary or accessory mutations [4,47,48]. Collectively, these positions are computed to be nearly as informative of ultimate cluster assignment as the entire set of positions considered. The fact that position 54 is chosen as the most informative feature is not surprising, given the large range of drug-resistant mutations commonly found at this position and their varied effects toward certain drugs as either primary or secondary mutations; I54L, I54M, I54V, etc., can have different consequences toward drugs such as TPV, DRV, and APV [4,95]. Also interesting is the redundancy of position 10 and, to a lesser extent, position 71; although position 10 has a high mutual information with the cluster output, it does not provide additional information once the identities at the ten positions listed above are known.
Position 71 provides some additional information but is also quite redundant. These results are consistent with the amino acids at positions 10 and 71 both being highly correlated with those at other positions such as 54, 90, 82, and 84 [54,55,59], as it is believed that mutations at these positions can be compensatory in nature [54,55,96]. Finally, one should note that the approximate joint MI calculated between all of the positions and the output still falls well short of the true information content of the output, suggesting that the amino acids considered at all positions still may not allow perfect prediction of these output data. This is likely due to the true importance of higher-order information (i.e., patterns of three or more amino acids occurring together) in contributing to ultimate phenotypes, whose importance has been noted previously [61], as well as to noise in the measurement and clustering of the phenotypic data, thus highlighting the inherent difficulty of accurately predicting phenotype from genotype in these complex systems. The limitations of the second-order approximation also cause the approximated total joint mutual information between the features and the output (blue bars) to fail to be monotonically increasing, as it would be were an exact calculation feasible, again highlighting the complex relationship between various protease positions and phenotype. Figure 4b shows the specific amino acid identities calculated to be most informative of ultimate cluster assignment. Here, key resistance mutations are chosen that cause broad resistance to many of the older drugs, such as L90M and I84V. At positions that can bear several identities, such as 54, 46, and 82, the selection of the wild-type amino acid suggests the importance of the lack of any mutation at these positions in determining cluster assignment. Figures 4c and 4d show sample results for mutations that are informative of assignment into specific clusters: cluster 1 (c), the most resistant cluster, and cluster 36 (d), the completely nonresistant cluster. All other results for clusters with 8+ members are shown in Figure S5 (Additional File 1). Figure 4c indicates that the amino acid identities most informative of membership in the "most" resistant cluster include several mutations that have been associated with resistance to DRV [97], including V11I, L33F, V32I, L89V, and G73S, as well as mutations such as I84V and L90M that are associated with broad cross resistance toward other PIs.
Analogously, Figure 4d suggests that the identities at these positions are reasonable markers for "any" resistance in general. It is important to note that while this method highlights which mutations are most informative of cluster assignment, it does not identify whether it is the presence or the absence of the mutation that is associated with cluster membership.
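As a concrete illustration of the greedy selection procedure described above, the sketch below adds one position at a time to maximize the MI between the selected feature tuple and the cluster labels. Unlike the paper's second-order approximation, this illustrative version estimates the joint MI exactly by encoding the selected tuple as a single categorical variable (an estimate that becomes increasingly biased for small samples); all names are assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def greedy_mi_selection(X, y, k=10):
    """Greedily pick k columns of X (e.g., amino acid identities per
    protease position, as strings) that maximize the MI between the
    joint of the selected features and the cluster labels y."""
    selected, joint = [], None
    for _ in range(k):
        best_j, best_mi, best_joint = None, -1.0, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Encode the candidate joint feature as one categorical variable.
            cand = X[:, j] if joint is None else np.array(
                [f"{a}|{b}" for a, b in zip(joint, X[:, j])])
            mi = mutual_info_score(y, cand) / np.log(2)  # nats -> bits
            if mi > best_mi:
                best_j, best_mi, best_joint = j, mi, cand
        selected.append(best_j)
        joint = best_joint
        print(f"added column {best_j}, joint MI ~ {best_mi:.3f} bits")
    return selected
```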
Discussion
This study highlighted major patterns of phenotypic resistance across all nine clinically-approved HIV-1 PIs. Cluster analysis yielded several phenotypic patterns, including clusters showing resistance to all drugs, to all but one specific drug (such as TPV, SQV, or DRV), to a large subset of drugs, to a small subset of drugs, and to only one drug (such as NFV or ATV). By choosing representative sequences for each phenotypic pattern, we have corroborated previously reviewed observations [4,27,29] that mutations such as L33F, V82A, I84V, and L90M are associated with broad cross resistance, while others, such as D30N and I50L, are associated with resistance to only one drug, and still others, such as I47A and I54L, are linked with hypersusceptibility toward a given drug. While we have uncovered a variety of phenotypic patterns, not every possible resistance pattern was sampled, suggesting that cross resistance and other factors cause highly correlated drug responses, assuming our data set is representative. Indeed, the isolates considered here occupy only a small portion (~6%) of the available, clinically-relevant phenotypic space. For example, no cluster shows a moderate or high level of resistance toward DRV without resistance to several other drugs, including APV and LPV. Whether this result is due to patient treatment histories or to the intrinsic properties of the drug-protease interactions requires further study. If the latter is at least partly the case, it corroborates the observation that DRV may have a higher genetic barrier to resistance [16,19]. TPV's response toward sequences often shows little relationship to other drugs' responses. The relative lack of cross resistance to TPV may make it particularly useful [14] in conjunction with other inhibitors to "cover" the mutation space of the virus. TPV's differing response profile may follow from its unique structural characteristics: it is the only clinically-approved inhibitor that does not use a water molecule to mediate hydrogen bonds with the flap regions of the protease, suggesting the importance of developing structurally diverse drug molecules toward a target as a strategy to combat resistance [98].
The representative sequences of four clusters (29, 31, 34, and 36) had no mutations at the 21 positions considered in computing genotypic distance for this purpose, and yet their phenotypes were not identical on average. This suggests a potential role for mutations at other positions that may not be associated with primary drug resistance. A rigorous study that analyzes the differences in mutation frequencies in such clusters and considers their impacts on the susceptibilities of individual cluster members is beyond the scope of the current work, but would be an interesting direction for future work, especially when more data are available.
We demonstrated that phenotypic clustering may allow for prediction of resistance to a particular drug based only on resistance information toward other drugs, with no genotypic information. While our goal was not to develop a prediction method superior to the available genotype-based methods specific to each drug, especially as multidrug phenotypic data may rarely be available, it is interesting to assess how well our "genotype-blind" method performs compared to genotype-based methods. Rigorous comparisons to mean standard error values in other studies are difficult due to the different scaling and capping procedures used here for phenotypic standardization. Nevertheless, some studies used a Pearson correlation coefficient (R) between predicted and actual log-fold-change as a measure of accuracy. R values for PIs available at the time of selected studies ranged from 0.85-0.97 [69], 0.65-0.93 (across multiple methods) [67], and 0.78-0.89 [64]. From the cross-validation procedure used to generate Table 3, our "genotype-blind" method gave R values ranging from 0.84-0.94 using all 398 data set members, with the exception of TPV, although these numbers may be artificially high due to our capping of extreme values. Predictions of resistance to TPV had an R value of only 0.45, consistent with the observed difficulty in predicting TPV resistance based on the phenotypes shown toward other drugs. Finally, our reported classification accuracies are lower than those reported for genotype-based predictions, but this is partly because we use five categories as opposed to the binary or 3-way classifications commonly used. If we adopt a naive binary classification scheme (scaled resistance < 1.0 is not resistant; scaled resistance >= 1.0 is resistant), our cluster-based classification accuracies using the n-fold cross-validation procedure for the entire data set range from 85%-95% excluding TPV (79%), compared with 85%-95% for binary classification schemes reported in the literature [65,72,74] (TPV and DRV were not part of these studies). While not the major goal of our paper, it is interesting to note that, with the exception of TPV, it may be possible to approach comparable drug resistance prediction accuracy without any genotypic information; this level of accuracy demonstrates the restricted phenotypic space occupied by the virus.
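A minimal sketch of the "genotype-blind" nearest-centroid prediction and the naive binarization described above is shown below, assuming Euclidean distance over the remaining eight drug dimensions; the function names and the distance choice are illustrative assumptions.

```python
import numpy as np

def predict_drug(profile, centroids, drug_idx):
    """Predict one drug's scaled resistance from the phenotypes toward
    the other drugs: choose the centroid closest (Euclidean) in the
    remaining dimensions and read off its held-out value."""
    others = [d for d in range(centroids.shape[1]) if d != drug_idx]
    dists = np.linalg.norm(centroids[:, others] - profile[others], axis=1)
    return centroids[np.argmin(dists), drug_idx]

def binarize(scores, cutoff=1.0):
    """Naive binary scheme: scaled resistance >= 1.0 counts as resistant."""
    return (np.asarray(scores) >= cutoff).astype(int)
```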
Our analysis was limited by the number of accessible isolates that have each undergone phenotypic resistance testing against all nine inhibitors. A high priority for future work is acquiring enough data that the number of clusters is robust to the data set size, so that one could be confident that all or nearly all phenotypic patterns have been sampled. One strategy is to pool isolates phenotyped by different assays to bolster the amount of data; indeed, a preliminary clustering was carried out in which the data analyzed here were combined with 196 isolates phenotyped using the Antivirogram (Virco, Mechelen, Belgium) [99] assay, but differences between the assays may have subtle but important effects on the interpretation of scaled resistance values, even when using cutoffs specific to each assay, creating potential artifacts in the clustering (we obtained 67 clusters with the combined data set, a larger number than expected given the pattern shown in Fig. S3). More data would allow for larger cluster sizes in general, and therefore higher confidence in associating certain genotypic features with cluster assignments; one could also look for differences in phenotypes between virus subtypes if such data were concurrently available. Additionally, more data may allow cluster sizes to accurately represent the relative frequencies of phenotypes within the population and would allow us to exclude isolates containing mixtures at key positions; such an exclusion would have been too restrictive with the amount of data currently accessible.
Finally, larger clusters would also allow us to account for and potentially exclude sequences that may be from the same patient at different times, allowing more robust conclusions to be made about the genotypic variability within a cluster. Preliminary analyses were conducted in which one sequence from each of the 28 same-patient pairs in our data set was arbitrarily excluded (even if the pair differed significantly in genotype), yielding a 370-member set. Qualitative results of genotypic variability remained similar, in that several resistant clusters showed as much or more genotypic diversity than randomly chosen data set members, although again, the most resistant cluster showed a higher percentage of shared mutations between cluster members on average, even though it now required two representative sequences. Clustering the "unique-patient" data set required 48 clusters as opposed to 50 for the original data set, suggesting that data from the same patient taken at different time points can provide additional phenotypic diversity. 98% of sequence pairs grouped together in the smaller data set were also grouped together in the original data set, showing that the overall clustering remained very similar.
Since the manuscript was originally drafted, we have obtained approximately 50 more isolates and have carried out very preliminary analyses of a larger (n = 453) data set including these new sequences. Grouping these data under the same constraints as for the original data set required 52 clusters, and the phenotypic patterns of most clusters were identical or highly similar; 86% of sequence pairs that had been grouped together originally remained together in the clustering of the larger data set. We also used our original (n = 398) clusters to predict resistance to each drug for each of the new isolates, using the other drugs' resistance values to select the closest centroid (i.e., the same procedure used in the n-fold cross validation). Scaled resistance scores (0-4) were predicted correctly 66%-82% of the time, depending on the drug; interestingly, predictions for TPV (67%) and DRV (82%) were better than seen in the n-fold cross validation, while those for NFV (66%) and RTV (76%) were worse. Prediction accuracy may be affected by the points in time at which the data were obtained, as resistance patterns may change over time.
Treatment histories were not entirely available for the current data set; acquiring such information and analyzing future data in that context could provide additional insights. For example, one could determine the extent to which treatment histories affect the "path" seen in Figure 2 and the dependence of individual multidrug resistance phenotypes on past treatment; such analyses could highlight the extent to which treatment histories affect the genotypic variation within a phenotypic cluster.
While the methodology and analyses were applied here to the HIV-1 protease system, the framework is generally applicable to any system for which there are phenotypic data across multiple drugs. In addition to continuing to analyze HIV-1 protease as the available data grow, another natural next step is to apply these methods to the HIV-1 nucleoside or non-nucleoside reverse transcriptase inhibitor systems and to compare the patterns of cross resistance within those systems with the ones obtained in the present study. By rigorously studying phenotypic resistance patterns of multiple systems, one may begin to address more general ideas, including whether cross resistance has equally affected all target systems and whether potential genotypic diversity within phenotypic clusters is a general feature of target systems.
Conclusions
To our knowledge, this study provided the first cluster-based analysis of the clinically-explored multidrug phenotypic space of HIV-1 protease, uncovering major multidrug patterns of resistance, cross resistance, and potential hypersusceptibility. We showed that while genotypic similarity implies clinical phenotypic similarity, the converse is not necessarily the case. We also provided genotypic determinants of phenotypic patterns. Rather than consider each drug in turn, as others have done, we have accounted for their relationships and collapsed the vast nine-dimensional space into a smaller one through clustering, allowing us to consider genotypic features that are associated with a simultaneous nine-drug response. We have therefore provided a new perspective on existing drug resistance patterns and their associated genotypic features. Such a framework will be useful as new therapies emerge and require evaluation in the context of existing drug resistance.
Additional material
Additional file 1: Contains the formal integer-programming formulations used within the work, five supplementary figures (Figures S1-S5) and one supplementary table (Table S1). This file also contains a link to a website containing the n = 398 data set used in this work. (http://www.wellesley.edu/Chemistry/Radhakrishnan/projects.html). | 2016-03-14T22:51:50.573Z | 2011-12-01T00:00:00.000 | {
"year": 2011,
"sha1": "e42db5f99b90874600c1d6d38b596a8d027a9d23",
"oa_license": "CCBY",
"oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-12-477",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e3adc41e57fe9af34c7a90da587aae35cbb19a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
]
} |
252370892 | pes2o/s2orc | v3-fos-license | Automatic tongue image quality assessment using a multi-task deep learning model
The quality of tongue images has a significant influence on the performance of tongue diagnosis in Chinese medicine. During acquisition, tongue image quality is easily affected by factors such as illumination, camera parameters, and the subject's tongue extension. To ensure that the quality of collected images meets the diagnostic criteria of traditional Chinese medicine practitioners, we propose a deep learning model to evaluate the quality of tongue images. First, we acquired tongue images of patients under different lighting, exposure, and tongue extension conditions using a tongue inspection instrument, and experienced Chinese physicians manually screened them into high-quality and unqualified tongue datasets. We then designed a multi-task deep learning network to classify and evaluate the quality of tongue images by adding tongue segmentation as an auxiliary task, as the two tasks are related and can promote each other. Finally, we adaptively set the task weight coefficients of the multi-task network to obtain better tongue image quality assessment (IQA) performance, as the two tasks contribute differently in the loss weighting scheme. Experimental results show that the proposed method is superior to the traditional deep learning tongue IQA method and, as an additional task of the network, can output the segmented tongue region, which facilitates follow-up clinical tongue diagnosis. In addition, we used network visualization to qualitatively verify the effectiveness of the proposed method.
Introduction
Tongue diagnosis is one of the most important diagnostic methods in traditional Chinese medicine, providing an effective, non-invasive means to assist in the assessment of a patient's physical condition (Li et al., 2019;Xie et al., 2021). Traditional tongue diagnosis is affected by objective and subjective factors, such as the external lighting environment and the clinical experience of practitioners. With the development of computer information technology, tongue images captured in a stable environment can be studied digitally and quantitatively using image processing technology, thus making the process of tongue diagnosis more objective and standardized. However, the tongue imaging process is inevitably affected by factors such as changes in illumination, camera parameters, and the protruding posture of the tongue, which greatly influence the quality of the tongue image, thereby affecting the performance of subsequent tongue diagnosis. Therefore, evaluating the quality of obtained tongue images has become an important and indispensable part of tongue diagnosis.
Image quality assessment (IQA) is a method to evaluate objective image quality consistent with human subjective judgments (Liu et al., 2019). At present, the clinical evaluation of tongue image quality relies mainly on the practitioner's perception and clinical experience; for example, whether the illumination is uniform, the color is undistorted, there are no artifacts, and the tongue is fully extended. The traditional evaluation of clinical tongue image quality therefore has the following shortcomings: 1) there is no uniform standard for high-quality tongue images; 2) subjective impressions differ among practitioners, leading to inconsistent evaluations; and 3) it is highly labor-intensive. To overcome these problems, objective IQA methods based on computer image analysis have been proposed. Wang et al. (Wang and Bovik, 2006) proposed evaluating the quality of TCM tongue images through geometric, color, and texture features. Zhang et al. (Zhang et al., 2016) proposed extracting texture, color, and spatial and spectral entropy features from segmented tongue images and inputting them into a support vector machine-based classification model, achieving an accuracy of 90%. However, such hand-designed morphological features have limited descriptive power for image quality and generalize poorly across tongue images.
In recent years, deep learning networks have achieved significant results in image recognition by extracting deep-level image features in a data-driven manner, demonstrating superiority over traditional hand-designed features. Deep learning technology has been widely applied to tongue images in various scenarios, such as tongue image segmentation (Lin et al., 2018;Xue et al., 2018), tongue diagnosis (Li et al., 2021), tongue color feature extraction (Yang and Zhang, 2018;Guangyu et al., 2021), and tongue shape recognition (Huang et al., 2010). Recently, Jiang et al. (Jiang et al., 2021) proposed a deep convolutional neural network for tongue IQA, showing that deep features of tongue images evaluate tongue image quality better. However, their study used the whole tongue image as the evaluation object, including the tongue body and the surrounding background area, whereas the information used in tongue diagnosis comes mainly from the tongue body (e.g., body color, body shape, tongue coating) (Giovanni, 1995), and image information around the tongue body affects tongue quality assessment. Therefore, tongue segmentation prior to tongue IQA is a prerequisite.
Xu et al. (Xu et al., 2020) proposed a multi-task learning model to simultaneously perform tongue segmentation and tongue coating classification. An excellent segmentation may contribute to better classification, as it maximizes useful feature information corresponding to tongue regions while minimizing redundant features corresponding to non-tongue regions (Xu et al., 2020). Conversely, specific classification results, especially for unqualified tongue images, can provide information on features such as color and texture to help identify specific regions for better segmentation results. This motivates us to consider a multi-task learning (MTL) network that performs tongue segmentation and tongue IQA simultaneously to improve the performance of tongue IQA.
We propose a multi-task deep learning model to evaluate the quality of tongue images. First, tongue images were manually annotated as high-quality and unqualified datasets by Chinese physicians. Second, by adding the tongue segmentation subtask, we designed an MTL network for tongue IQA. Finally, we adaptively set the task weight coefficients between the two tasks to obtain better tongue IQA performance. Clinical tongue images were used to demonstrate the effectiveness of our method. To our knowledge, this is the first study to use a multi-task learning framework to evaluate tongue image quality.
Tongue image acquisition
This study was approved by the local ethics committee, and the patients provided informed consent. Professional tongue image collection equipment was used to collect tongue image data from healthy volunteers. All collected tongue images were independently classified as high-quality or unqualified by three professional practitioners of traditional Chinese medicine. Tongue images with inconsistent evaluations were marked and re-reviewed, and the three TCM physicians reached a consensus on the quality evaluation results. The image quality labels agreed upon by multiple professional physicians served as the gold standard for subsequent deep network training and performance measurement.
Standard of image quality
According to the diagnostic theory of traditional Chinese medicine, high-quality tongue images that meet the clinical needs of practitioners have the following characteristics (Giovanni, 1995): 1) the tongue image is clear, with no blurred areas; 2) the lighting is naturally soft, with no color distortion caused by excessive brightness or darkness; and 3) the tongue is fully and naturally extended beyond the lower lip, with a flat surface. A representative sample of high-quality tongue images is shown in Figure 1A.
In addition, we used professional tongue image acquisition equipment to obtain tongue images of the participants under different lighting, exposure, and tongue protrusion conditions, serving as control unqualified tongue images. There were four main types of unqualified tongue images: blurred images (Figure 1B), overexposed images (Figure 1C), underexposed images (Figure 1D), and images with an incorrect tongue-extension posture (Figure 1E). Shaking or vibration of the tongue during shooting easily causes loss of focus and a blurred picture, as shown in Figure 1B. Excessive ambient light hitting the tongue surface makes the main area of the tongue too bright and the image color too white, as shown in Figure 1C. As shown in Figure 1D, dark ambient light and insufficient exposure darken the tongue surface, which affects clinical judgment. Tight tongue muscles and insufficient tongue extension, caused by excessive tension or an incorrect tongue extension posture during shooting, are shown in Figure 1E.
Image preprocessing
To construct the auxiliary tongue segmentation task in the multi-task learning network, we preprocessed the collected tongue images. First, we manually outlined the tongue region in the captured images using the Labelme software (http://labelme.csail.mit.edu/Release3.0/), as shown in Figures 2A,B. We then cut out the pixels within these contour regions from the original image, extracting the tongue region image without the background, as shown in Figure 2C. Finally, we normalized the extracted tongue and face images, as shown in Figure 2D, uniformly resized the tongue images to 224 × 224, and applied random translation and rotation for data augmentation. The tongue segmentation masks, together with the high- and low-quality labels previously assigned by experienced Chinese physicians, were used to train the multi-task deep learning network.
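A minimal preprocessing pipeline consistent with this description might look as follows in torchvision; the rotation and translation magnitudes and the ImageNet normalization statistics are assumptions, since the paper states only that 224 × 224 resizing with random translation and rotation was used.

```python
import torchvision.transforms as T

# Augmentation magnitudes and normalization statistics are assumptions.
train_transform = T.Compose([
    T.Resize((224, 224)),
    T.RandomAffine(degrees=10, translate=(0.1, 0.1)),  # rotation + shift
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet stats (pretrained)
                std=[0.229, 0.224, 0.225]),
])
```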
The proposed framework
The proposed multi-task deep learning framework is shown in Figure 3. The network architecture consists of two parts: a shared layer and task-specific layers. Owing to the strong performance of U-Net in tongue image analysis (Ruan et al., 2021), this study adopts it as the convolutional neural network (CNN) backbone. The purpose of the shared layer is to extract the common features of the two related tasks; sharing reduces the number of network parameters while yielding a reliable common representation across tasks. The task-specific layers extract deep features related to their respective tasks and improve each task's feature representation. In addition, to balance the different contributions of the two tasks to network optimization, we designed an adaptive weighting between tasks to obtain the optimal task weight coefficients and further improve multi-task learning performance. Each module is described in detail in the following subsections.
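A schematic sketch of such a two-headed architecture is shown below; the encoder and decoder modules are placeholders (the actual model combines U-Net and ResNet18 components), and all names and dimensions are illustrative assumptions.

```python
import torch.nn as nn

class MultiTaskTongueNet(nn.Module):
    """Schematic two-headed network: a shared encoder feeds a
    segmentation decoder and a quality-classification head."""
    def __init__(self, encoder, seg_decoder, feat_dim=512, n_classes=2):
        super().__init__()
        self.encoder = encoder          # shared layers
        self.seg_decoder = seg_decoder  # task-specific: segmentation
        self.cls_head = nn.Sequential(  # task-specific: quality IQA
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, n_classes))

    def forward(self, x):
        feats = self.encoder(x)         # shared representation
        return self.seg_decoder(feats), self.cls_head(feats)
```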
Tongue image segmentation subnetwork
We adopted a typical U-Net (Ronneberger et al., 2015) as the baseline model, which is an image-to-image classifier based on a fully convolutional network for pixel-level prediction, as shown in Figure 4. To adapt the U-Net structure to tongue image segmentation, we made the following improvements. First, a dropout layer with a parameter of 0.5 was added. In the symmetrical U-Net architecture, the encoder is on the left and the decoder on the right. The encoder block layers used three 3 × 3 filters and rectified linear activation functions, followed by a max-pooling layer, which reduces the dimensionality of the features and avoids overfitting. The decoder consists of upsampling and concatenation, followed by regular convolution operations. The final output is produced by a convolutional layer and a softmax function. In this tongue dataset, the tongue occupies a large part of the image, as shown in Figure 2. Following (Yeung et al., 2022), we used the sum of the binary cross-entropy L_CE and the Dice loss L_Dice as the tongue segmentation loss function L_seg:

L_seg = L_CE + L_Dice,
L_CE = −(1/N) Σ_{i=1..N} [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ],
L_Dice = 1 − (2 Σ_{i=1..N} y_i ŷ_i) / (Σ_{i=1..N} y_i + Σ_{i=1..N} ŷ_i),

where the sums run over the N pixels of the predicted binary segmentation, ŷ_i ∈ Ŷ, and the ground-truth binary pixels, y_i ∈ Y.
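A direct PyTorch sketch of this combined loss, assuming `pred` already holds per-pixel foreground probabilities, is shown below; the smoothing constant is an assumption added for numerical stability.

```python
import torch
import torch.nn.functional as F

def seg_loss(pred, target, eps=1e-6):
    """L_seg = L_CE + L_Dice over N pixels; `pred` holds per-pixel
    foreground probabilities in (0, 1), `target` is a binary mask."""
    pred, target = pred.flatten(), target.flatten().float()
    l_ce = F.binary_cross_entropy(pred, target)
    inter = (pred * target).sum()
    l_dice = 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    return l_ce + l_dice
```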
Tongue IQA main task
In the main task of tongue image quality assessment, the encoder consists of an underlying shared layer and a task-specific layer for tongue image quality classification. The shared layers extract common features across tasks, and the task-specific layers extract deep features for tongue image quality classification, mapping images to high- and low-quality labels. Specifically, the normalized tongue image (batch_size, 3, 224, 224) is input into the network, passes through the shared layer and the task-specific layers based on the ResNet18 (He et al., 2016) backbone, and finally enters the fully connected layer and classifier (batch_size, 2), which outputs the corresponding high- or low-quality classification label. For the tongue image quality assessment classification task, we used cross-entropy as the loss function, as shown in Eq. 4.
L_Cla = −[ y_true log(y_pred) + (1 − y_true) log(1 − y_pred) ]     (4)

Here, y_pred and y_true denote the flattened predicted probability and ground truth for the high-quality tongue image class, respectively, and 1 − y_pred and 1 − y_true denote the flattened predicted probability and ground truth for the low-quality class.
Adaptive loss function of multi-task learning
There are differences in the weights of different tasks during the optimization process in multi-task learning (Cipolla et al., 2018). Therefore, we designed an adaptive task weight coefficient to further improve the performance of multi-task learning.
Inspired by the work of Cipolla et al. (Cipolla et al., 2018) in the field of computer vision, the loss function of two equally weighted tasks is shown in Eq. 5, whereas the multi-task loss function based on adaptive weighting is shown in Eq. 6:

L_total = L_Cla + L_seg     (5)

L_total = (1 / (2σ_1²)) L_Cla + (1 / (2σ_2²)) L_seg + log(1 + σ_1²) + log(1 + σ_2²)     (6)

To avoid negative values of log(σ_i²), we use log(1 + σ_i²) instead, which is non-negative because 1 + σ_i² ≥ 1. Here, σ_i is the trainable hyperparameter of the i-th task.
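A sketch of this adaptive weighting as a PyTorch module, following the formulation above with the log(1 + σ²) regularizer, might look as follows; treating σ directly as the trainable parameter is an implementation assumption.

```python
import torch
import torch.nn as nn

class AdaptiveWeightedLoss(nn.Module):
    """Uncertainty-based task weighting after Cipolla et al. (2018),
    using the log(1 + sigma^2) regularizer stated above; the sigmas
    are trainable and initialized to 1."""
    def __init__(self, n_tasks=2):
        super().__init__()
        self.sigma = nn.Parameter(torch.ones(n_tasks))

    def forward(self, losses):
        total = 0.0
        for loss, sigma in zip(losses, self.sigma):
            s2 = sigma ** 2
            total = total + loss / (2.0 * s2) + torch.log(1.0 + s2)
        return total

# usage: criterion = AdaptiveWeightedLoss(); loss = criterion([l_cla, l_seg])
```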
FIGURE 3: Proposed multi-task deep learning framework.
Implementation and training strategy
Our proposed model was implemented in PyTorch (pytorch.org) and used the Adam algorithm to minimize the objective function. The initial learning rate was set to 1e-4, the weight decay to 5e-4, and the batch size to 4. Experiments were run on a machine with an Intel(R) Xeon(R) Gold 5118 CPU, 64.0 GB of RAM, and an NVIDIA TITAN RTX GPU with 24 GB of memory. The basic implementation code for this study is available on GitHub: https://github.com/yanyan121/MTL_Tongue_IQA.
In addition, we adopted U-Net as the backbone network for multi-task learning because of its excellent performance in image segmentation (Yeung et al., 2022). The model weights were initialized from pretraining on ImageNet (Russakovsky et al., 2015), a large and versatile dataset with rich categories; these pretrained weights served as the initial values for further training on the tongue image quality assessment task. Specifically, at the beginning of training, we froze the encoder weights and trained the decoder and classification weights for 10 epochs, then trained the remaining 40 epochs with all weights unfrozen at the stated learning rate, as sketched below.
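The sketch below illustrates this freeze-then-unfreeze schedule, assuming the model exposes an `encoder` attribute and that the optimizer is built over all parameters (frozen ones simply receive no gradient); `train_one_epoch` is a hypothetical helper.

```python
def set_encoder_trainable(model, trainable):
    """Freeze or unfreeze the shared encoder."""
    for p in model.encoder.parameters():
        p.requires_grad = trainable

def fit(model, loader, optimizer, epochs=50, warmup=10):
    # Encoder frozen for the first `warmup` epochs, unfrozen afterwards.
    for epoch in range(epochs):
        set_encoder_trainable(model, trainable=(epoch >= warmup))
        train_one_epoch(model, loader, optimizer)  # hypothetical helper
```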
Experiment setup and evaluation metric
The dataset for tongue image quality assessment comprised 1,014 images: 546 high-quality and 468 poor-quality. The tongue images were labeled in advance by professionals, and the labeled images were split into training (70%), validation (15%), and testing (15%) sets. For the tongue segmentation subtask, we used the Dice similarity coefficient (DSC), Jaccard index (JI) (Bertels et al., 2019), mean intersection over union (MIoU), and frequency-weighted intersection over union (FWIoU) for quantitative evaluation. In terms of pixel-wise counts, these metrics are calculated as follows:

DSC = 2TP / (2TP + FP + FN)
JI = IoU = TP / (TP + FP + FN)

with MIoU the IoU averaged over classes and FWIoU the IoU weighted by each class's pixel frequency. DSC measures the similarity of two sets, whereas JI compares the members of two sets to see which are shared and which are distinct. Also known as the JI, IoU is a statistic used for comparing the similarity and diversity of sample sets; in semantic segmentation, it is the ratio of the intersection of the pixel-wise classification results with the ground truth to their union. MIoU is the class-averaged IoU, and FWIoU is a frequency-weighted IoU. For tongue quality classification, we employed accuracy, precision, recall, and F1-score for quantitative evaluation:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
where TP, FP, TN, and FN represent true positives, false positives, true negatives, and false negatives, respectively. In the classification task, these counts are computed over image-level predictions and ground truths, whereas in the segmentation task they are computed over pixel-wise labels.
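For segmentation, these counts reduce to simple mask operations; a sketch, assuming binary NumPy masks, is shown below.

```python
import numpy as np

def seg_metrics(pred, gt):
    """DSC and JI computed from pixel-wise TP/FP/FN of binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)
    ji = tp / (tp + fp + fn)
    return dsc, ji
```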
To evaluate the effectiveness of the proposed method, several ablation experiments were conducted. The differences between the ablation models are listed in Table 1. STL_original_images, MTL_equal_weight, and MTL_adaptive_weight used original tongue images, as shown in Figure 2A. To assess the interference of the surrounding background, we compared tongue image quality classification performance on the original tongue images versus the extracted tongue region without background.
Performance comparison of different methods
We compared the performance of our method with state-of-the-art deep learning research on tongue image quality assessment and tongue image segmentation. Jiang et al. (Jiang et al., 2021) recently proposed a deep learning tongue image quality assessment method that performs binary classification with a ResNet architecture. Owing to the discrepancy between datasets, the accuracy of their method tested on our dataset is 0.813, while the accuracy of our proposed multi-task-learning-based tongue image quality assessment is 0.890, an improvement of 0.077. Furthermore, in the auxiliary task of tongue image segmentation, we compared our method with two state-of-the-art segmentation methods, Deeptongue (Lin et al., 2018) and DeepLabV3 (Xue et al., 2018). As shown in Table 2, under the multi-task learning framework, our tongue image segmentation method improves on the current tongue segmentation methods Deeptongue and DeepLabV3. The main reason for this improvement is likely the mutual reinforcement of the associated tasks in multi-task learning, which boosts single-task performance. Table 3 compares single-task learning (STL) and multi-task learning with tongue image segmentation, as well as the two loss-weighting strategies for tongue image quality assessment. We found that multi-task learning with the equal-weight policy yielded better performance than single-task learning with extracted tongue images. Furthermore, within the multi-task learning framework, the adaptive weighting strategy outperformed the equal-weight strategy. Compared to single-task learning on original tongue images, the proposed framework achieved a significant improvement of 0.074 in accuracy. Figure 5 shows the accuracy and loss curves of several typical tongue image quality assessment models. Throughout testing, our proposed multi-task learning framework (MTL_adaptive_weight) consistently outperformed single-task learning on original tongue images, single-task learning on extracted tongue images, and multi-task learning with the equal-weight strategy.
Performance of ablation study in the proposed method
The two hyperparameters σ1 and σ2 were set before training, each with an initial value of 1. Figure 6A shows how σ1 and σ2 change with the training epoch; after the curves converge and stabilize, the final weights of the classification and segmentation tasks are 0.45 and 1.05, respectively. For heterogeneous MTL problems that contain tasks of different types (e.g., the segmentation and classification of tongue images), following (Maninis et al., 2019; Vandenhende et al., 2022), the different loss measures produce scalars of very different magnitudes; as shown in Figure 6B, the loss value of the classification task is much larger than that of the segmentation task. The two task losses are regularized by adaptive weighting, which increases the back-propagated gradient of the segmentation task and thereby strengthens its auxiliary contribution.

FIGURE 5: Accuracy and loss curves for the different methods on tongue images.

Figure 7 shows a heatmap visualization using the gradient-weighted class activation map (Grad-CAM) (Selvaraju et al., 2017), which reflects the main features of the regions that contribute to the prediction results. Darker red areas and brighter pixels indicate the regions on which the different models focus. The first row shows high-quality tongue images and the second row low-quality tongue images; "True" indicates a correct prediction, "False" an incorrect one, and the numbers (e.g., 0.890) indicate the probability value of the predicted outcome of the tongue image quality classification in Figure 7.

FIGURE 7: Visualization of saliency maps. Image 1: high-quality tongue image; Image 2: low-quality tongue image. True indicates that the prediction is correct, and False indicates that the prediction is wrong.

The single-task model pays more attention to the tongue, whereas the multi-task model pays more attention to the main part of the tongue image and its boundaries. By comparison, the feature extractor in the multi-task model better captures information on the main part of the tongue body and the tongue boundary area. The proposed multi-task model can thus focus on more comprehensive feature regions, which improves its quality assessment performance.
In the segmentation task, quantifying the performance of the different models with DSC and JI showed that segmentation performance was also slightly improved. Bayesian neural networks with Monte Carlo dropout (MC dropout) can provide useful uncertainty estimates (Gal and Ghahramani, 2015). Using dropout = 0.5 at test time, we can visualize the uncertainty of the segmentation boundaries. Figure 8 shows the visualization of the uncertainty in the segmentation results: the redder the color, the higher the uncertainty of the output in that region. The overall certainty is higher in the adaptive multi-task learning model; thus, its results are more reliable. More importantly, we found that the evaluation of image quality is closely related to the tongue boundary, and the higher the certainty of the segmented boundary, the higher the accuracy of quality classification.
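A sketch of MC-dropout inference of this kind is shown below, assuming a segmentation network whose forward pass returns per-pixel logits; the number of stochastic passes is an assumption.

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Average n stochastic forward passes with dropout kept active at
    test time; the per-pixel variance serves as an uncertainty map
    (Gal and Ghahramani, 2015)."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()  # re-enable only the dropout layers
    with torch.no_grad():
        probs = torch.stack(
            [torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)  # prediction, uncertainty
```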
Discussion
In this study, our proposed multi-task learning model addresses the clinical problem of tongue image quality assessment. The performance of tongue image quality assessment was further improved by adding tongue body segmentation as an auxiliary task. To the best of our knowledge, this is the first study to use multi-task learning for tongue image quality assessment. Compared with existing deep learning research on tongue image quality assessment (Xu et al., 2020), our multi-task learning method achieved considerably better results, and its auxiliary task outputs the segmented tongue, which further facilitates subsequent tongue diagnosis. Therefore, this method provides a good reference for the application of artificial intelligence in tongue diagnosis.
Multi-task learning has been widely used in the field of artificial intelligence, especially in the segmentation and classification of medical images. We applied multi-task learning to tongue image quality assessment mainly because tongue image quality assessment and tongue body segmentation are two related tasks; according to multi-task learning theory, for two related tasks, multi-task learning can further boost the performance of both (Ranjan et al., 2018;Xu et al., 2020). In addition, our multi-task learning further considers the different weights of the tasks during optimization: by designing adaptive weight coefficients, the performance of both tongue image quality evaluation and tongue segmentation was further improved. It should be noted that tongue quality assessment in this study differs somewhat from general image quality assessment (Zhu et al., 2020;Ma and Fang, 2021). General image quality assessment considers factors such as image color distortion and blurring, requirements our tongue image quality assessment shares. However, for tongue diagnosis specifically, incomplete tongue extension and an overly bright or dark environment also produce a low-quality tongue image, so our evaluation cannot simply copy general quality evaluation methods. For this reason, we used tongue images jointly labeled by three TCM physicians to construct the high-quality and low-quality samples used to train the multi-task deep network.

FIGURE 8: Visualization of tongue segmentation. Prediction: prediction results; Total uncertainty: data uncertainty and model uncertainty. Uncertainty map values range from 0 to 1, with larger values representing higher uncertainty.
This study had certain limitations. First, the low-quality images in this study were obtained by deliberately varying the shooting conditions rather than from clinical practice, and may not completely simulate all low-quality image situations. In follow-up work, tongue image data can be obtained from clinical practice and the performance of this method verified independently. Second, our multi-task learning requires the construction of an auxiliary segmentation subtask, which demands manual delineation by practitioners and thus imposes a substantial clinical workload. In the future, we will consider integrating unsupervised or self-supervised segmentation tasks into the multi-task deep network to reduce the clinical workload in the preprocessing stage. In addition, the module we designed performs only post-acquisition image evaluation. Obtaining image quality evaluation results in real time during image acquisition would greatly improve the success rate of high-quality tongue image capture. Therefore, a focus of future studies is to integrate this method into tongue image acquisition equipment for real-time identification of tongue image quality.
Conclusion
In this study, we proposed a multi-task deep learning model for tongue image quality assessment. Experimental results showed that adding the tongue segmentation subtask significantly improved the performance of the multi-task learning network for tongue image quality assessment. In addition, the multi-task deep network outputs tongue segmentation regions, which can facilitate subsequent clinical tongue diagnosis. We believe that the method in this study has great value as a reference for the clinical application of tongue diagnosis.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by Ethics Committee of Guangdong Provincial Hospital of Chinese Medicine. The patients/participants provided their written informed consent to participate in this study. | 2022-09-20T14:03:01.174Z | 2022-09-20T00:00:00.000 | {
"year": 2022,
"sha1": "7af4b7f42fb06d3b890acdc6d7c596347cb77bd4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "7af4b7f42fb06d3b890acdc6d7c596347cb77bd4",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8065011 | pes2o/s2orc | v3-fos-license | Clinical aspects of feline retroviruses: a review.
Feline leukemia virus (FeLV) and feline immunodeficiency virus (FIV) are retroviruses with global impact on the health of domestic cats. The two viruses differ in their potential to cause disease. FeLV is more pathogenic, and was long considered to be responsible for more clinical syndromes than any other agent in cats. FeLV can cause tumors (mainly lymphoma) and bone marrow suppression syndromes (mainly anemia), and can lead to secondary infectious diseases caused by the suppressive effects of the virus on the bone marrow and the immune system. Today, FeLV is diagnosed less commonly than it was 20 years ago, and its prevalence has been decreasing in most countries. However, the importance of FeLV may be underestimated, as it has been shown that regressively infected cats (which test negative in routinely used FeLV tests) can also develop clinical signs. FIV can cause an acquired immunodeficiency syndrome that increases the risk of opportunistic infections, neurological diseases, and tumors. In most naturally infected cats, however, FIV itself does not cause severe clinical signs, and FIV-infected cats may live many years without any health problems. This article provides a review of clinical syndromes in progressively and regressively FeLV-infected cats as well as in FIV-infected cats.
Introduction
Feline leukemia virus (FeLV) and feline immunodeficiency virus (FIV) cause two of the most common infectious diseases in cats. Both are retroviruses, but FeLV is a γ-retrovirus, while FIV is classified as a lentivirus. Although FeLV and FIV are closely related, they differ in their potential to cause disease. In the United States, the prevalence of both infections is about 2% in healthy cats and up to about 30% in high-risk or sick cats [1,2]. Risk factors for infection include male gender, adulthood, and outdoor access [3,4]. Retroviral tests can diagnose only infection, not clinical disease.
FeLV is more pathogenic than FIV. For a long time, FeLV was considered to account for most disease-related deaths, and to be responsible for more clinical syndromes than any other single agent in cats. It was proposed that approximately one third of all tumor-related deaths in cats were caused by FeLV, and an even greater number of cats died of FeLV-related anemia and secondary infections caused by suppressive effects of the virus on bone marrow and the immune system. Today, these statements have to be revised, as in recent years the prevalence and consequently the importance of FeLV as a pathogen in cats have been decreasing. Still, if present in closed households with other viruses, such as feline coronavirus (FCoV), or FIV, FeLV infection has the greatest impact on survival [5]. The death rate of progressively FeLV-infected cats in multi-cat households has been estimated at approximately 50% in two years and 80% in three years [6,7], but is much lower today, at least for cats that are well taken care of and that are kept strictly indoors in single-cat households. A survey in the United States compared the survival of more than 1000 FeLV-infected cats to more than 8000 age-and sex-matched uninfected control cats and found that in FeLV-infected cats median survival was 2.4 years compared to 6.0 years for control cats [2]. Despite the fact that progressive FeLV infection is associated with a decrease in life expectancy, many owners elect to provide treatment for their cats, and with proper care, FeLV-infected cats might live for many years with good quality of life.
Although FIV can cause an acquired immunodeficiency syndrome in cats ("feline AIDS") comparable to human immunodeficiency virus (HIV) infection in humans, with increased risk for opportunistic infections, neurologic diseases, and tumors, in most naturally infected cats FIV does not cause a severe clinical syndrome. With proper care, FIV-infected cats can live many years and, in fact, can die at an older age from causes unrelated to their FIV infection. In a follow-up study of naturally FIV-infected cats, the rate of progression was variable, with death occurring in about 18% of infected cats within the first two years of observation (about five years after the estimated time of infection). An additional 18% developed increasingly severe disease, but more than 50% remained clinically asymptomatic during the two years [8]. FIV infection has little impact on a cat population and does not reduce the number of cats in a household [2]. Thus, overall survival time is not shorter than in uninfected cats, and quality of life is usually fairly high over an extended period of time.
Stages of Infection
Both infections are chronic in nature and develop through different disease stages. Characteristically for both infections, there is a long asymptomatic phase, in which cats do not show clinical signs.
FeLV infection has different stages. Recently, novel diagnostic tools, including very sensitive PCR methods providing new data on the course of FeLV infection, have questioned the traditional understanding of FeLV pathogenesis. Cats believed to be immune to FeLV after infection were found to remain provirus-positive. Antigen-negative, provirus-positive cats are frequently detected, and their clinical relevance and role in FeLV epidemiology are still not fully understood. Antigen-negative, provirus-positive cats are considered FeLV carriers; following reactivation, they can act as a source of infection. As FeLV provirus is integrated into the cat's genome, it is unlikely to be fully cleared over time. Antigen-negative, provirus-positive cats do not shed the virus, but reactivation with recurring virus shedding is possible [9,10]. Based on this information, a new classification has been proposed, in which the stages of FeLV infection are defined as abortive infection (comparable to the former "regressor cats"), regressive infection (comparable to the former "transient viremia" followed by "latent infection"), progressive infection (comparable to the former "persistent viremia"), and focal or atypical infection (Table 1) [11][12][13][14].
Abortive infection. After infection, the virus initially replicates in the local lymphoid tissue of the oropharyngeal area. In some immunocompetent cats (formerly called "regressor cats"), viral replication may be terminated by an effective humoral and cell-mediated immune response; these cats never become viremic. They have high levels of neutralizing antibodies, and neither FeLV antigen nor viral RNA or proviral DNA can be detected in the blood at any time. Abortive infection is likely caused when a cat is exposed to low doses of FeLV [15]. It is still unknown how often this situation really occurs in nature, because studies using very sensitive PCR methods have found that in many cats formerly considered "regressors", the virus can still be detected later in tissue samples. Thus, it appears likely that few, if any, cats completely clear FeLV infection from all cells.
Regressive infection develops following an effective immune response that contains virus replication and viremia prior to or shortly after bone marrow infection. After initial infection, replicating FeLV spreads systemically through infected mononuclear cells (lymphocytes and monocytes). During this stage, cats have positive results on tests that detect free antigen in plasma (e.g., ELISA), and they shed virus, mainly in saliva. In cats with regressive infection, however, this viremia is terminated within weeks or months (hence formerly called "transient viremia"). In some cats, viremia may persist longer than three weeks. After about three weeks of viremia, bone marrow cells become infected, and infected hematopoietic precursor cells develop into infected granulocytes and platelets that circulate in the body. Even after bone marrow cells become infected, a certain percentage of cats is still able to clear viremia. However, even after terminating viremia, they cannot completely eliminate the virus from the body, because the information for virus replication (proviral DNA) persists in bone marrow stem cells. This condition has been called "latent infection" (and is now considered part of regressive infection). The molecular basis of latency is the integration of a copy of the viral genome (provirus) into cellular chromosomal DNA. Although proviral DNA remains present within the cellular genome, no virus is actively produced; thus, cats with regressive infection have negative results on all tests that detect FeLV antigen. During cell division, proviral DNA is replicated and passed on to the daughter cells, so complete cell lineages may contain FeLV proviral DNA. However, the proviral DNA is not translated into proteins, and no infectious virus particles are produced; therefore, regressively infected cats do not shed FeLV and are not infectious to others. Sensitive PCR methods can detect provirus in the blood of antigen-negative cats with regressive infection. A study in Switzerland showed that, in addition to the antigen-positive, provirus-positive cats, about 10% of antigen-negative cats in the population were positive for proviral DNA in the blood [16]. Regressive infection can be reactivated, because the information for producing complete viral particles is present and can potentially be reused when antibody production decreases (e.g., after immunosuppression).
In cats with progressive infection, FeLV infection is not contained early. Extensive virus replication occurs, first in the lymphoid tissues, followed by the bone marrow and mucosal and glandular epithelial tissues. Progressively infected cats remain persistently viremic and are infectious to other cats for the remainder of their lives. This condition was formerly called "persistent viremia" and is now classified as progressive infection. Cats with progressive infection develop FeLV-associated diseases, and most will die within a few years. Regressive and progressive infections can be distinguished by repeated testing for viral antigen in peripheral blood: regressively infected cats will turn negative at the latest 16 weeks after infection, while progressively infected cats remain positive. Initially, both regressive and progressive infections are accompanied by persistence of FeLV proviral DNA in the blood detectable by PCR, but they are later associated with different FeLV loads when measured by quantitative PCR: regressive infection with low, and progressive infection with high, virus loads [11,17].
Focal or atypical infections have been reported in up to 10% of experimentally infected cats. They may also occur in natural infection but are probably rare in the field. Focal infections are characterized by persistent atypical local viral replication (e.g., in the mammary glands, bladder, or eyes). This replication can lead to intermittent or low-grade production of antigen; therefore, these cats can have weakly positive or discordant results in antigen tests, or positive and negative results may alternate [14]. Experimental FIV infection also progresses through several stages, similar to HIV infection in people, including an acute phase, a clinically asymptomatic phase of variable duration, and a terminal phase sometimes called "feline acquired immunodeficiency syndrome" ("AIDS") [18,19]. However, there is no clear distinction between these stages in naturally FIV-infected cats, and not all stages are apparent; therefore, the usefulness of this staging in natural FIV infection has been questioned. Moreover, even cats in moribund condition with severe immunosuppression and secondary infections may fully recover with appropriate care and return to an asymptomatic stage. Thus, unlike HIV-infected people, cats classified as being in the "AIDS phase" (high virus load, severe clinical signs due to secondary infection) can recover and become asymptomatic again, and their virus loads can even decrease dramatically.
Clinical Signs
Clinical signs in both retrovirus infections are variable. After a long asymptomatic phase, cats can develop tumors, hematopoietic disorders, neurologic disorders, immunodeficiency, immune-mediated diseases, and stomatitis. The pathomechanisms of these disorders differ between the two retrovirus infections (Table 2).
Although FeLV was named after the tumor that first brought it to attention, most infected cats are presented to the veterinarian not for tumors but for anemia or immunosuppression. Clinical signs associated with FeLV infection can be classified as tumors, immunosuppression, hematologic disorders, immune-mediated diseases, and other syndromes (including neuropathy, reproductive disorders, and fading kitten syndrome). Of 8642 FeLV-infected cats presented to North American Veterinary Teaching Hospitals, various co-infections (including FIV infection, feline infectious peritonitis (FIP), upper respiratory infection, hemotropic mycoplasmosis, and stomatitis) were the most frequent findings (15%), followed by anemia (11%), lymphoma (6%), leukopenia or thrombocytopenia (5%), and leukemia or myeloproliferative diseases (4%) [20]. The outcome of FeLV infection and the clinical course are determined by a combination of viral and host factors. Some of the differences in outcome can be traced to properties of the virus itself, such as the subgroup, which determines differences in the clinical picture (e.g., FeLV-B is primarily associated with tumors, FeLV-C with non-regenerative anemia). A study aiming to define the dominant host immune effector mechanisms responsible for the outcome of infection, using longitudinal changes in FeLV-specific cytotoxic T-lymphocytes (CTL), found that high levels of circulating FeLV-specific effector CTLs appear before virus-neutralizing antibodies in cats that have recovered from exposure to FeLV. In contrast, progressive infection with persistent viremia has been associated with a silencing of virus-specific humoral and cell-mediated host effector mechanisms [21]. Probably the most important host factor that determines the clinical outcome of cats infected with FeLV is the age of the cat at the time of infection [22]. Neonatal kittens develop marked thymic atrophy after infection ("fading kitten syndrome"), resulting in severe immunosuppression, wasting, and early death. As cats mature, they acquire progressive resistance. When older cats become infected, they tend to have abortive or regressive infections or, if they develop progressive infection, have at least milder signs and a more protracted period of apparent good health [7].
Clinical signs in naturally FIV-infected cats usually reflect secondary diseases, such as infections and neoplasia, to which FIV-infected cats are considered more susceptible. FIV itself may cause some clinical features (e.g., neurologic signs) resulting from abnormal function or inflammation of affected organs. In experimental infection, an initial stage is sometimes noticed usually with transient and mild clinical signs, including fever, lethargy, signs of enteritis, stomatitis, dermatitis, conjunctivitis, respiratory tract disease, and generalized lymph node enlargement [23]. The acute phase may last several days to a few weeks, after which cats will enter a period in which they appear clinically healthy. This phase is usually not noticed by the owners in naturally infected cats. The duration of the following asymptomatic phase varies, but usually lasts many years. Factors that influence the duration of the asymptomatic phase include the pathogenicity of the infecting isolate (also depending on the FIV subtype), exposure to secondary pathogens, and the age of the cat at the time of infection [24,25]. In the last, symptomatic phase ("AIDS phase") of infection, the clinical signs are a reflection of opportunistic infections, neoplasia, myelosuppression, and neurologic disease.
Tumors
While FeLV-infected cats are 62 times more likely to develop lymphoma or leukemia than noninfected cats and FeLV plays a direct role in tumorigenesis, FIV-infected cats have about a five-fold increased risk of tumor development, and the role of FIV is usually indirect. Lymphomas are the most common tumors in FeLV- and FIV-infected cats. While lymphomas in FeLV-infected cats are most commonly of T-cell origin, those in FIV-infected cats are mostly of B-cell origin [26,27].
FeLV is a major oncogenic virus that causes different tumors in cats, most commonly lymphoma and leukemia, less often other hematopoietic tumors, and rarely other malignancies (including neuroblastoma, osteochondroma, and others). The association between FeLV and lymphomas has been clearly established in several ways. First, these malignancies can be induced in kittens by experimental FeLV infection [28][29][30]. Second, cats naturally infected with FeLV have a higher risk of developing lymphoma than uninfected cats [29,31]. Third, most cats with lymphoma were FeLV-positive in tests that detected infectious virus or FeLV antigens, at least in earlier times when the prevalence of FeLV was still higher. Previously, up to 80% of feline lymphomas and leukemias were reported to be FeLV-related [32][33][34][35][36][37][38]. However, since the 1980s, a reduction in the prevalence of viremia has been noted in cats with lymphoma [39][40][41]. The decrease in prevalence of FeLV infection in cats with lymphoma or leukemia also indicates a shift in tumor causation in recent years. Whereas 59% of all cats with lymphoma or leukemia were FeLV antigen-positive in one German study from 1980 to 1995, only 20% of the cats were FeLV antigen-positive in the years 1996 to 1999 in the same University Teaching Hospital [41]. In a recent study in the Netherlands, only four of 71 cats with lymphoma were FeLV-positive, although 22 of these cats had mediastinal lymphoma, which previously was strongly associated with FeLV infection [42]. A greater prevalence of lymphoma in older cats is now observed. The major reason for the decreasing association of FeLV with lymphoma is the decreasing prevalence of FeLV infection in the overall cat population as a result of FeLV vaccination as well as testing and elimination programs. However, the prevalence of lymphomas caused by FeLV may be higher than indicated by conventional antigen testing of blood [43]. Cats from FeLV cluster households had a 40-fold higher rate of development of FeLV-negative lymphoma than did those from the general population. FeLV-negative lymphomas have also occurred in laboratory cats known to have been infected previously with FeLV [44]. FeLV proviral DNA was detected in lymphomas of cats that tested negative for FeLV antigen [43], also suggesting that the virus may be associated with a larger proportion of lymphomas than previously thought. FeLV has been shown to incorporate cellular genes; several such transduced genes, also present in regressively infected cells, have been implicated in viral oncogenesis [44][45][46]. It is still unclear how often regressive FeLV infection is responsible for FeLV-associated tumors in the field, as study results have been controversial. Proviral DNA was detected in formalin-fixed, paraffin-embedded tumor tissue in 7/11 FeLV-negative cats with lymphoma [43]. However, other groups found evidence of provirus in only 1/22 [45] and in 0/50 FeLV antigen-negative lymphomas [47].
The most important mechanism by which FeLV causes malignancy is insertion of the FeLV genome into the cellular genome near a cellular oncogene (most commonly myc), resulting in activation and over-expression of that gene. These effects lead to uncontrolled proliferation of the affected cell clone; in the absence of an appropriate immune response, a malignancy results. FeLV may also incorporate the oncogene to form a recombinant virus (e.g., FeLV-B, FeSV) containing cellular oncogene sequences that are then rearranged and activated. When they enter a new cell, these recombinant viruses are oncogenic. In a study of 119 cats with lymphomas, transduction or insertion of the myc locus had occurred in 38 cats (32%) [48]. Thus, FeLV-induced neoplasms are caused, at least in part, by somatically acquired insertional mutagenesis in which the integrated provirus may activate a proto-oncogene or disrupt a tumor suppressor gene. A recent study suggested that the U3-LTR region of FeLV transactivates cancer-related signaling pathways through production of a non-coding 104-base RNA transcript that activates NF kappaB [49]. Twelve common integration sites for FeLV associated with lymphoma development have been identified in six loci: c-myc, flvi-1, flvi-2 (contains bmi-1), fit-1, pim-1, and flit-1. The oncogenic association of these loci is based on the fact that c-myc is a known proto-oncogene, bmi-1 and pim-1 have been recognized as myc collaborators, fit-1 appears to be closely linked to myb, and flit-1 insertion was shown to be associated with over-expression of cellular genes, e.g., activin-A receptor type II-like 1 (ACVRL1) [50]. Flit-1 seems to have an important role in the development of lymphomas and appears to represent a common novel FeLV proviral integration domain that may influence lymphomagenesis by insertional mutagenesis. Among 35 FeLV-related tumors, 5/25 thymic lymphomas demonstrated proviral insertion within the flit-1 locus, whereas 0/4 alimentary lymphomas, 5/5 multicentric lymphomas, and 1/1 T-lymphoid leukemia examined had rearrangements in this region. Expression of ACVRL1 mRNA was detected in the two thymic lymphomas with flit-1 rearrangement, whereas normal thymuses and seven lymphoid tumors without flit-1 rearrangement had no detectable ACVRL1 mRNA expression [51].
Fibrosarcomas associated with FeLV are caused by FeSV, a recombinant virus that develops de novo in FeLV-A-infected cats by recombination of the FeLV-A genome with cellular oncogenes. Through a process of genetic recombination, FeSV acquires one of several oncogenes, such as fes, fms, or fgr. As a result, FeSV is an acutely transforming (tumor-causing) virus, leading to a polyclonal malignancy with multifocal tumors arising simultaneously after a short incubation period. With the decrease in FeLV prevalence, FeSV also has become less common. FeSV-induced fibrosarcomas are multicentric and usually occur in young cats. Strains of FeSV identified from naturally occurring tumors are defective and unable to replicate without the presence of FeLV-A as a helper virus that supplies proteins (such as those coded by the env gene) to FeSV. Fibrosarcomas caused by FeSV tend to grow rapidly, often with multiple cutaneous or subcutaneous nodules that are locally invasive and metastasize to the lung and other sites. Solitary fibrosarcomas in older cats are not caused by FeSV. These tumors are slower growing, locally invasive, slower to metastasize, and only occasionally curable by excision combined with radiation and/or gene therapy. They usually are classified as feline injection site sarcomas (FISS), caused by the granulomatous inflammatory reaction at the injection site, commonly occurring after inoculation of adjuvant-containing vaccines. It has been demonstrated that neither FeSV nor FeLV plays any role in the development of FISS [52].
A few other tumors have been found in FeLV-infected cats; some of them might have an association with FeLV, while others have likely just been observed by chance in an infected cat. Iris melanomas, for example, are not associated with FeLV infection, although in one study three of 18 eyes tested positive for FeLV/FeSV proviral DNA [53]. In a more recent study, however, immunohistochemical staining and PCR did not find FeLV or FeSV in the ocular tissues of any cat with this disorder [54]. Multiple osteochondromas (cartilaginous exostoses on flat bones of unknown pathogenesis) have been described in FeLV-infected cats. Although histologically benign, they may cause significant morbidity if they occur in an area such as a vertebra and put pressure on the spinal cord or nerve roots [55,56]. In spontaneous feline olfactory neuroblastomas (aggressive, histologically inhomogeneous tumors of the olfactory epithelium of the nose and pharynx with high metastasis rates), budding FeLV particles were found in the tumors and lymph node metastases, and FeLV DNA was detected in tumor tissue [57]. The exact role of FeLV in the genesis of these tumors is uncertain. Cutaneous horns, a benign hyperplasia of keratinocytes, have been described in FeLV-infected cats [58], but the role of FeLV here is also unclear.
FIV-infected cats are about five times more likely to develop lymphoma or leukemia than noninfected cats [26,27]. Lymphomas (mostly B-cell lymphomas) [26,27,59,60] and leukemias, but also several other tumors, have been described in association with FIV infection [26,[61][62][63][64][65][66], including squamous cell carcinoma, fibrosarcoma, and mast cell tumor. FIV provirus, however, is only occasionally detected in tumor cells [67][68][69][70], suggesting a more indirect role in lymphoma formation, such as decreased cell-mediated immune surveillance or chronic B-cell hyperplasia [68,71]. However, clonally integrated FIV DNA was found in lymphoma cells from one cat that had been experimentally infected six years earlier, indicating the possibility of an occasional direct oncogenic role of FIV [67,70,72]. The prevalence of FIV infection in one cohort of cats with lymphoma was 50% [60], much higher than the FIV prevalence in the population of cats without lymphoma, which also supports a cause-and-effect relationship. FIV could alternatively increase tumor incidence by decreasing tumor immunosurveillance mechanisms, or it could promote tumor development through the immunostimulatory effects of replicating in lymphocytes.
Myelosuppression
Myelosuppression and other hematopoietic disorders can occur in both FeLV and FIV infection; they are, however, much more common and more severe in FeLV-infected cats.
Hematologic changes described in association with FeLV include anemia (non-regenerative or regenerative); persistent, transient, or cyclic neutropenia; platelet abnormalities (thrombocytopenia and platelet function abnormalities); aplastic anemia (pancytopenia); and panleukopenia-like syndrome. For the majority of pathogenic mechanisms by which FeLV causes bone marrow suppression, active virus replication is required. However, it has been demonstrated that in some FeLV antigen-negative cats, regressive FeLV infection without viremia may be responsible for bone marrow suppression. In a recent study including 37 cats with myelosuppression that tested FeLV antigen-negative in peripheral blood, 2/37 cats (5%) were found to be regressively infected with FeLV by bone marrow PCR (both had non-regenerative anemia) [73]. In these regressively infected cats, FeLV provirus may interrupt or inactivate cellular genes in the infected cells, or regulatory features of viral DNA may alter the expression of neighboring genes. Additionally, the function of provirus-containing myelomonocytic progenitor cells and stromal fibroblasts that provide the bone marrow microenvironment may be altered. Alternatively, FeLV provirus may cause bone marrow disorders by inducing the expression of antigens on the cell surface, resulting in immune-mediated destruction of the cell. Anemia is a major nonneoplastic complication that occurs in a majority of FeLV-infected cats [4]. Anemia in FeLV-infected cats may have various causes. Approximately 10% of FeLV-associated anemias are regenerative [74]; most, however, are non-regenerative and are caused by the bone marrow suppressive effect of the virus resulting from primary infection of hematopoietic stem cells and infection of stromal cells that constitute the supporting environment for hematopoietic cells. In vitro exposure of normal feline bone marrow to some strains of FeLV caused suppression of erythrogenesis [6]. In addition to the direct effect of the virus on erythropoiesis, other factors can cause nonregenerative anemia in FeLV-infected cats (e.g., anemia of chronic inflammation promoted by high concentrations of cytokines). FeLV infection can cause decreased platelet counts. It also can be responsible for platelet function deficits, and the lifespan of platelets is shortened in some FeLV-infected cats. Thrombocytopenia (resulting in bleeding disorders) can occur secondary to decreased platelet production from FeLV-induced bone marrow suppression or leukemic infiltration. Platelets harbor FeLV, and megakaryocytes are frequent targets of progressive FeLV infection. Immune-mediated thrombocytopenia, which rarely occurs as a single disease entity in cats, often accompanies immune-mediated hemolytic anemia (IMHA) in cats with underlying FeLV infection. FeLV infection also can cause decreased neutrophil or lymphocyte counts. Neutropenia is common in FeLV-infected cats [75] and generally occurs alone or in conjunction with other cytopenias. In some cases, myeloid hypoplasia of all granulocytic stages is observed, suggesting infection of neutrophil precursors. In some neutropenic FeLV-infected cats, an arrest in bone marrow maturation can occur at the myelocyte and metamyelocyte stages. It has been hypothesized that an immune-mediated mechanism is responsible in cases in which neutrophil counts recover with glucocorticoid treatment ("glucocorticoid-responsive neutropenia").
Hematopoietic neoplasia ("myeloproliferative disorders"), including leukemia, can also cause bone marrow suppression syndromes by crowding out normal hematopoietic cells. Myelodysplastic syndrome (MDS), characterized by peripheral blood cytopenias and dysplastic changes in the bone marrow, is a precursor stage of acute myeloid leukemia. It was found that changes in the LTR region of the FeLV genome (presence of three tandem direct 47-bp repeats in the upstream region of the enhancer (URE)) are strongly associated with the induction of MDS [76]. Myelofibrosis, another cause of bone marrow suppression, is a condition characterized by abnormal proliferation of fibroblasts resulting from chronic stimulation of the bone marrow, such as chronic bone marrow activity from hyperplastic or neoplastic regeneration caused by FeLV. In severe cases, the entire endosteum within the medullary cavity can be obliterated.
Feline panleukopenia-like syndrome (FPLS), also known as FeLV-associated enteritis (FAE) or myeloblastopenia, consists of severe leukopenia (<3000 cells/µL) with enteritis and destruction of intestinal crypt epithelium that mimics feline panleukopenia caused by feline panleukopenia virus (FPV) infection. However, FPV antigen has been demonstrated by IFA in intestinal sections of cats that died from this syndrome after being experimentally infected with FeLV [77]. FPV was also demonstrated by electron microscopy despite negative FPV antigen tests. It appears that this syndrome might actually not be caused by FeLV itself, as previously thought, but by co-infection with FPV. The syndrome also has been referred to as FAE in cats with progressive FeLV infection because the clinical signs observed are usually gastrointestinal, including hemorrhagic diarrhea, vomiting, oral ulceration or gingivitis, anorexia, and weight loss [78,79]. It is still unclear whether all these syndromes have the same origin and are simply caused by co-infection with FPV (even modified live FPV vaccines have been discussed) or whether they are caused by FeLV itself [77].
Although cytopenias caused by bone marrow suppression are a common finding in FeLV infection, they are rather uncommon in FIV-infected cats. During the acute phase of infection, FIV-infected cats can exhibit mild neutropenia, which resolves as the cat progresses to the asymptomatic phase of infection. Clinically ill FIV-infected cats in a later phase of infection may have a variety of cytopenias, with lymphopenia being the most common. Lymphopenia is caused by direct replication of the virus in CD4+ lymphocytes. Anemia and neutropenia (usually mild) may also be seen [4,51], although these abnormalities may be as much a reflection of concurrent disease as direct effects of FIV itself. A recent study of a large number (3784) of client-owned field cats compared hematologic parameters in FIV-infected, FeLV-infected, and uninfected cats [4]. Anemia and thrombocytopenia were not significantly more common in FIV-infected versus uninfected cats. Only neutropenia was significantly more often present, in about 25% of FIV-infected cats. Soluble factors have been shown to inhibit bone marrow function in FIV-infected cats, and bone marrow infection, which has been associated with a decreased ability to support hematopoiesis in vitro, has been proposed as a mechanism underlying the development of cytopenias [51].
Neurologic Dysfunction
Neurologic dysfunction may be present in FeLV- and in FIV-infected cats and is one of the few syndromes directly caused by the retrovirus. However, the mechanisms of neurologic dysfunction differ between the two viruses.
In FeLV-infected cats, most neurologic signs are caused by lymphoma and lymphocytic infiltrations in the brain or spinal cord leading to compression, but in some cases, no tumor is detectable with diagnostic imaging methods or at necropsy. In these cats, FeLV-induced neurotoxicity is suspected. Anisocoria, mydriasis, central blindness, and Horner's syndrome have been described in FeLV-infected cats without morphologic changes. In some regions (such as the southeastern United States), urinary incontinence caused by neuropathies in FeLV-infected cats has been described [80]. Direct neurotoxic effects of FeLV have been discussed as pathogenetic mechanisms. FeLV envelope glycoproteins may be able to produce increased free intracellular calcium leading to neuronal death (this has also been described in HIV-infected humans). A polypeptide of the FeLV envelope was found to cause dose-dependent neurotoxicity associated with alterations in intracellular calcium ion concentration, neuronal survival, and neurite outgrowth. The polypeptide from a FeLV-C strain was significantly more neurotoxic than the same peptide derived from a FeLV-A strain [81,82]. Neurologic signs in 16 cats with progressive FeLV infection consisted of abnormal vocalization, hyperesthesia, and paresis progressing to paralysis. Some cats developed anisocoria or urinary incontinence during the course of their illness. Others had concurrent FeLV-related problems such as myelodysplastic disease. The clinical course of affected cats involved gradually progressive neurologic dysfunction. Microscopically, white-matter degeneration with dilation of myelin sheaths and swollen axons was identified in the spinal cord and brain stem of affected animals [80]. Immunohistochemical staining of affected tissues revealed consistent expression of FeLV p27 antigens in neurons, endothelial cells, and glial cells, and proviral DNA was amplified from multiple sections of the spinal cord [80]. These findings suggest that in some FeLV-infected cats, the virus may directly affect CNS cells cytopathically.
Neurologic signs also have been described in both natural and experimental FIV infections [83][84][85][86][87][88]. About 5% of symptomatic FIV-infected cats have neurological disease as a predominant clinical feature. Neurologic disorders in FIV infection seem to be highly strain-dependent [89]. Both central and peripheral neurologic manifestations have been described, comparable to the changes in HIV-infected human beings. Dementia in human patients with AIDS is often characterized by a slight decline in cognitive ability or behavior, changes that may be too subtle to be recognized in cats. Neurological abnormalities seen in naturally infected cats tend to be more behavioral than motor. Psychotic behavior, twitching movements of the face and tongue, compulsive roaming, dementia, loss of bladder and rectal control, and disturbed sleep patterns have been observed. Other signs described include nystagmus, ataxia, seizures, and intention tremors [90][91][92]. Abnormal forebrain electrical activity and abnormal visual and auditory-evoked potentials have also been documented in cats that appeared otherwise normal [24,66,93,94]. Although the majority of FIV-infected cats do not show clinically overt neurologic signs, a much higher proportion of infected cats have microscopic CNS lesions. Brain lesions may occur in the absence of massive infection, and abnormal neurologic function has been documented in FIV-infected cats with only mild to moderate histologic evidence of inflammation [8]. Pathologic findings include the presence of perivascular infiltrates of mononuclear cells, diffuse gliosis, glial nodules, and white matter pallor. These lesions are usually located in the caudate nucleus, midbrain, and rostral brain stem [8]. Mostly, abnormal neurologic function is the result of a direct effect of the virus on CNS cells. The virus infects the brain early, with virus-induced CNS lesions sometimes developing within two months of experimental infection [8]. Microglia and astrocytes are infected by FIV, but the virus does not infect neurons. However, neuronal death has been associated with FIV infection; in particular, forebrain signs are often a result of direct neuronal injury from the virus. The exact mechanism of neuronal damage by FIV is unclear but may include neuronal apoptosis, effects on the neuron-supportive functions of astrocytes, toxic products released from infected microglia, or cytokines produced in response to viral infection. In vitro studies support the hypothesis that FIV infection may impair normal metabolism in CNS cells, particularly astrocytes [8]. Documented abnormalities of astrocyte function include altered intercellular communication, abnormal glutathione reductase activity that could render cells more susceptible to oxidative injury, and alterations in mitochondrial membrane potential that disrupt the energy-producing capacities of the cell [95]. Astrocytes are by far the most common cell type of the brain and are important in maintaining the CNS neuronal vascular microenvironment. One of the most important functions of astrocytes is to regulate the level of extracellular glutamate, a major excitatory neurotransmitter that accumulates as a consequence of neuronal activity. Excessive extracellular glutamate often results in neuronal toxicity and death. FIV infection of feline astrocytes can significantly inhibit their glutamate-scavenging ability, potentially resulting in neuronal damage [95,96].
Sometimes, neurologic signs may also be caused by opportunistic infections such as toxoplasmosis, cryptococcosis, or FIP.
Immunodeficiency and Secondary Infections
The most clinically important consequence of both retrovirus infections is immunosuppression. Immunosuppression can lead to secondary infectious diseases, which account for most clinical signs, but can also lead to decreased tumor surveillance mechanisms, causing an increased risk of tumor development. It is important to realize that many of these secondary diseases in FeLV- and FIV-infected cats are treatable. The mechanisms that cause the immunosuppression are different for the two infections.
Many FeLV-infected cats have concurrent bacterial, viral, protozoal, and fungal infections, but few controlled studies exist proving that these cats have a higher rate of infection than FeLV-negative cats. Thus, although FeLV certainly can suppress immune function, it should not be assumed that all concurrent infections are a direct consequence of FeLV infection. Progressively FeLV-infected cats develop immunosuppression similar to that in HIV-infected people. The exact mechanisms of how the virus destroys the immune system are poorly understood, as is why different animals have such varying degrees of immunosuppression. Immunosuppression has been associated with non-integrated viral DNA from replication-defective viral variants [97]. These pathogenic immunosuppressive variants, such as FeLV-T, require a membrane-spanning receptor molecule (Pit1) and a second co-receptor protein (FeLIX) to infect T lymphocytes [98]. The latter protein is an endogenously expressed protein encoded by an endogenous provirus arising from FeLV-A, which is similar to the FeLV receptor-binding protein of FeLV-B [99].
FeLV-infected cats may develop thymic atrophy and depletion of lymph node paracortical zones following infection. Lymphopenia and neutropenia are common. In addition, neutrophils of viremic cats have decreased chemotactic and phagocytic function compared with those of normal cats. In some cats, lymphopenia may be characterized by preferential loss of CD4+ helper T cells, resulting in an inverted CD4/CD8 ratio (as typically seen in FIV infection) [100,101], but more commonly, substantial losses of both helper cells and cytotoxic suppressor cells (CD8+ cells) occur [101]. Many immune function tests of naturally FeLV-infected cats are abnormal, including decreased response to T-cell mitogens, prolonged allograft reaction, reduced immunoglobulin production, depressed neutrophil function, and complement depletion. IL-2 and IL-4 are decreased in some cats [7,102], but FeLV does not appear to suppress IL-1 production from infected macrophages. IFN-γ may be deficient or increased. Increased TNF-α has been observed in serum of infected cats and in infected cells in culture. Each cytokine plays a vital role in the generation of a normal immune response, and the excess production of certain cytokines, such as TNF-α, can also cause illness. T-cells of FeLV-infected cats produce significantly lower levels of B-cell stimulatory factors than do those of normal cats (this defect becomes progressively more severe over time) [72], but when B-cells of FeLV-infected cats are stimulated in vitro by uninfected T-cells, their function remains normal. Primary and secondary humoral antibody responses to specific antigens are decreased and may be delayed in FeLV-infected cats. In vaccination studies, FeLV-infected cats were not able to mount an adequate immune response to vaccines, such as rabies vaccine. Therefore, protection in a FeLV-infected cat after vaccination is not complete and not comparable to that in a healthy cat; thus, more frequent vaccinations (e.g., every six months) should be considered.
In FIV-infected cats, immunosuppression usually occurs in later stages of the infection and leads to a predisposition for secondary infections. In a survey of 826 naturally FIV-infected cats examined at North American Veterinary Teaching Hospitals, the most common disease syndromes were stomatitis, neoplasia (especially lymphoma and cutaneous squamous cell carcinoma), ocular disease (uveitis and chorioretinitis), anemia and leukopenia, opportunistic infections, renal insufficiency, lower urinary tract disease, and endocrinopathies, such as hyperthyroidism and diabetes mellitus [78]. Some of these problems, however, are most likely associated with the older age at which these cats presented (e.g., endocrinopathies, renal insufficiency) rather than with their FIV infection. Infections with many different "opportunistic" pathogens of viral, bacterial, protozoal, and fungal origin have been reported in FIV-infected cats. Few studies, however, have compared the prevalence of most of these infections in FIV-infected and non-infected cats, and thus, their relevance as true secondary invaders is unclear.
The most important immunologic abnormality shown in experimental [104][105][106] as well as in natural [107,108] infection is a decrease in the number and relative proportion of CD4+ cells in the peripheral blood as well as in most primary lymphoid tissues [109]. Loss of CD4+ cells leads to inversion of the CD4/CD8 ratio. In addition, an increase in the proportion of CD8+ cells also contributes to the inversion [104,108,110], in particular a population referred to as "CD8+ alpha-hi, beta-low cells" [111][112][113], a subset of CD8+ cells that may contribute to suppression of viremia in FIV-infected cats. Causes of CD4+ cell loss include decreased production secondary to bone marrow or thymic infection, lysis of infected cells induced by FIV itself (cytopathic effects), destruction of virus-infected cells by the immune system, or death by apoptosis (cell death that follows receipt of a membrane signal initiating a series of programmed intracellular events) [114][115][116][117][118][119][120][121][122][123][124][125][126]. The degree of apoptosis correlates inversely with the CD4+ cell numbers and the CD4/CD8 ratio [127]. FIV env proteins are capable of inducing apoptosis in mononuclear cells by a mechanism that requires CXCR4 binding [128]. Ultimately, loss of CD4+ cells impairs immune responses, because CD4+ cells have critical roles in promoting and maintaining both humoral and cell-mediated immunity. A certain subset of CD4+ cells, the "Treg" (T-regulatory) cells, also seems to play an important role, and Treg cells with suppressive activity have been documented during early [129] and chronic FIV infection [130]. In FIV-infected cats, increased activity of Treg cells could thus play a role in suppressing immune responses to foreign antigens or pathogens. In addition, Treg cells are themselves targets for FIV infection [129,131], and may serve as a FIV reservoir during the latent stage of infection and be capable of stimulating virus production [132]. Other immunologic abnormalities can also be found. Lymphocytes may lose the ability to proliferate in response to stimulation with mitogens or antigens, and priming of lymphocytes by immunogens may be impaired [105,[133][134][135][136][137][138][139]. Lymphocyte function may be reduced by altered expression of cell surface molecules, such as CD4, major histocompatibility complex II antigens, or cytokines and cytokine receptors [140][141][142][143][144], or through over-expression of abnormal molecules, such as receptors [145], leading to disrupted production of cytokines or receptor function. Impaired neutrophil adhesion and emigration in response to bacterial products have been described in FIV-infected cats [146][147][148]. Natural killer cell activity may be diminished [149] or increased [150] in acutely or asymptomatically infected cats, respectively. Changes in cytokine pattern include increased production of IFN-γ, TNF-α, IL-4, IL-6, IL-10, and IL-12 [151][152][153][154], but also differences in cytokine ratios (e.g., the IL-10/IL-12 ratio) [155,156].
Immune-mediated Diseases
In addition to a dysregulation of the immune system leading to immunosuppression, retrovirus-infected cats can also develop immune-mediated diseases caused by an overactive immune response. The most commonly seen immune-mediated response is hypergammaglobulinemia, caused by an excessive antibody response to the chronic persistent infection. The antibodies produced are not neutralizing and may thus lead to antigen-antibody complex formation. These immune complexes can deposit, usually in narrow capillary beds, leading to glomerulonephritis, polyarthritis, uveitis, and vasculitis. Secondary immune-mediated diseases are more commonly seen in FIV- than in FeLV-infected cats. When comparing plasma electrophoretograms, FeLV-infected cats do not show hypergammaglobulinemia and hyperproteinemia significantly more often than non-infected cats, whereas in FIV infection, hypergammaglobulinemia and hyperproteinemia occur significantly more commonly [4,157].
Nevertheless, immune-mediated diseases have been described in FeLV-infected cats as well. While humoral immunity to specific stimulation decreases during the course of FeLV infection, nonspecific increases of IgG and IgM have been noted. The loss of T-cell activity in combination with the formation of antigen-antibody complexes promotes immune dysregulation [158]. Immune-mediated diseases described in FeLV-infected cats include IMHA [159], glomerulonephritis [160], uveitis with immune complex deposition in the iris and ciliary body [161], as well as polyarthritis [58]. Chronic progressive polyarthritis can be triggered by FeLV; in about 20% of cats with polyarthritis, FeLV seems to be an associated agent [58]. Measurement of FeLV antigen has shown that cats with glomerulonephritis have more circulating viral proteins than do other FeLV-infected cats. Antigens that can lead to antigen-antibody complex formation include not only whole virus particles, but also free gp70, p27, or p15E proteins [162,163].
Immune-mediated diseases observed in FIV-infected cats are caused by an excessive immune response leading to hypergammaglobulinemia [4,104,164]. Hypergammaglobulinemia reflects polyclonal B-cell stimulation and is a direct consequence of FIV infection, because experimentally FIV-infected specific pathogen-free (SPF) healthy cats also develop hypergammaglobulinemia [164]. Increased IgG as well as circulating immune complexes have been detected in FIV-infected cats [165].
Stomatitis
Chronic ulcero-proliferative gingivostomatitis is very common in retrovirus-infected cats, especially in those with FIV infection. In cats naturally infected with FIV, it is the most common syndrome (affecting up to 50%). It characteristically originates in the fauces and spreads rostrally, especially along the maxillary teeth. Histologically, the mucosa is invaded by plasma cells and lymphocytes, accompanied by variable degrees of neutrophilic and eosinophilic inflammation. Lesions are often painful, and tooth loss is common. Severe stomatitis can lead to anorexia and emaciation. The cause of this syndrome is unclear, but the histologic findings suggest an immune response to chronic antigenic stimulation or immune dysregulation. Circulating lymphocytes of cats with stomatitis have increased expression of inflammatory cytokines [103], further implicating immune activation in the pathogenesis of this condition. This type of stomatitis is not always correlated with FeLV or FIV infection [166], and is usually not seen in SPF cats experimentally infected with FeLV or FIV, suggesting that exposure to other infectious agents also plays a role [167]. Concurrent feline calicivirus (FCV) infection is often identified in the oral cavity of these cats, and experimental and naturally occurring co-infection of FIV and FCV infection results in more severe disease [168,169].
Conclusions
FeLV can cause severe clinical syndromes, and progressive FeLV infection is associated with a decrease in life expectancy. Still, many owners elect to provide therapy for their FeLV-infected cats, and with proper treatment, FeLV-infected cats, especially in indoor-only households, may live for many years with good quality of life. Diseases secondary to immunosuppression account for a large portion of the syndromes seen in FeLV-infected cats, and it is important to realize that many of these secondary diseases are treatable. In most naturally infected cats, FIV does not cause a severe clinical syndrome. Most clinical signs in FIV-infected cats reflect secondary diseases, such as infections and neoplasia, to which FIV-infected cats are more susceptible. With proper care, FIV-infected cats can live many years and, in fact, commonly die at an old age from causes unrelated to their FIV infection. While long-term studies describing clinical outcomes of naturally occurring FeLV and FIV infection are lacking, modalities for treatment of secondary infections or other co-incident diseases are available, and by treating these symptomatically, the life expectancy and quality of life of FeLV- and FIV-infected animals can be significantly improved.
Table 2 (recovered fragment; syndrome, followed by the FeLV and FIV columns):
Neurologic disease (FIV column): rare; direct influence of the virus (specific FIV strains), impairment of astrocyte function.
Immunodeficiency: FeLV common, several mechanisms, e.g., replication of virus in all bone marrow cells (including neutrophils), changes in cytokine pattern; FIV common, several mechanisms, e.g., decrease in CD4+ cells, changes in cytokine pattern.
Immune-mediated diseases: FeLV rare, e.g., immune-mediated hemolytic anemia; FIV sometimes, hyperglobulinemia common with immune complex deposition leading to, e.g., glomerulonephritis and uveitis.
Stomatitis: FeLV common, multi-factorial disease; FIV very common, multi-factorial disease. | 2014-10-01T00:00:00.000Z | 2012-10-31T00:00:00.000 | {
"year": 2012,
"sha1": "7911bba69f1eb2805376a70acad59c3dc42b0559",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/4/11/2684/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59ac4eae51d6084c2346ebd27125a3b56e25d72e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
269890507 | pes2o/s2orc | v3-fos-license | Assessment of Cognitive Function in Romanian Patients with Chronic Alcohol Consumption
Alcoholism presents a significant health concern with notable socioeconomic implications. Alcohol withdrawal syndrome (AWS) can manifest when individuals cease or drastically reduce their alcohol consumption after prolonged use. Non-alcoholic fatty liver disease (NAFLD) is characterized by substantial lipid accumulation in the liver cells of individuals with no history of alcohol consumption. There is evidence suggesting an association between cognitive impairment and both conditions. This study aimed to evaluate cognitive impairment in patients with NAFLD and AWS using the Mini-Mental State Examination (MMSE). This study involved 120 patients admitted to two hospitals in Craiova, Romania. Results indicated that patients with NAFLD did not exhibit cognitive impairment as measured by MMSE (mean = 29.27, SD = 0.785). Conversely, patients with AWS showed more pronounced cognitive dysfunction, with a mean MMSE score of 16.60 ± 4.097 at admission and 24.60 ± 2.832 after 2 weeks under treatment with Vitamins B1 and B6 and Cerebrolysin. Additionally, our findings suggested that cognitive dysfunction among alcohol consumers was correlated with the severity of clinical symptoms, as demonstrated by the severity of tremors in our study. The two-week period of treatment and alcohol withdrawal was insufficient for cognitive function to return to normal levels. Observational studies over longer periods are advised.
Introduction
Nowadays, alcohol is commonly associated with entertainment, with around one in three people worldwide being a consumer [1]. According to recent reports from Eurostat, in Europe, 1 in 12 persons is a regular consumer, and almost one out of five of these people has an episode of heavy drinking at least once a month [2]. Even though most statistics report men as the more avid consumers, the gender gap has recently begun to narrow [3].
Alcoholism poses a significant health concern with notable socioeconomic ramifications [4]. This condition accounts for 15-20% of psychiatric admissions and numerous medical, surgical and traumatic emergencies [5].
Among the most frequently encountered irreversible effects of chronic alcohol consumption is cognitive dysfunction [6], which is increasingly observed in patients with no other neurological impairment and can be traced to chronic alcohol intake [7]. Executive functions [8], orientation skills [9] and memory are also affected [6]. Usually, by the time these patients seek medical attention, the cognitive impairments are already established [10,11], with proven effects upon all cognitive domains [12].
The development of multimodal neuroimaging systems has helped the medical community better observe the effects of alcohol consumption on the brain, such as cortical shrinkage and white matter degradation, differences in brain activity during task processing, visual perception impairment, damaged cognitive control mechanisms and many others [13].
Some researchers have investigated the effect of alcohol abstinence on the improvement of various components of cognitive function, such as memory recovery, orientation and inhibition [14][15][16]. Regarding the duration of abstinence necessary for the normalization of cognition, few authors agree on a time frame [6], as some studies reported a significant improvement after two to four weeks [17], whilst others did not find any modification as late as the seventh week [18].
Alcohol withdrawal syndrome (AWS) can occur when patients stop drinking or drastically decrease their alcohol intake after long-term consumption [19]. The symptoms of withdrawal can range widely, from slight tremors to delirium tremens, a condition that causes seizures and, if left untreated, can be fatal [20].
Non-alcoholic fatty liver disease (NAFLD) is characterized by significant lipid storage in the hepatocytes of patients with a negative history of alcohol consumption [21,22]. About 30% of individuals can suffer from NAFLD, which is the most common liver disorder in the world [23]. Certain authors have observed that individuals diagnosed with NAFLD may exhibit subpar cognitive function across various domains, such as general cognition and attention [24,25]. However, the literature presents inconclusive findings, as some studies fail to establish a clear association between NAFLD and cognitive impairments [26]. The introduction of the term metabolic-dysfunction-associated fatty liver disease (MAFLD) represents a recent development aimed at replacing the term NAFLD [27]. Unlike NAFLD, MAFLD does not necessitate excluding other causes of liver disease, such as excessive alcohol consumption or viral hepatitis [27,28]. NAFLD literally refers only to non-alcohol-related hepatopathy and does not adequately capture the links with metabolic impairment and the related cardiovascular risks [29].
The Mini-Mental State Examination (MMSE) is a simple-to-use screening tool broadly used by medical professionals in their daily practice to assess a patient's cognition [30], consisting of 11 questions that can be administered by healthcare practitioners [31].
Cerebrolysin is a medication that has for some time been used in hospital settings to treat neurological problems such as strokes [32] and traumatic brain injuries [33], and it has captured the interest of the medical world for its beneficial effects [34]. It is composed of low-molecular-weight peptides and amino acids [35] extracted from porcine brain cells [36]. Some studies on animal models and observational studies on humans suggest a beneficial effect of Cerebrolysin on cognition enhancement in patients with moderate to severe brain injuries [37]. A few studies have also determined that this compound possesses neurotrophic properties, explained in Figure 1, promoting neuronal sprouting and enhancing neuronal survival and neurogenesis [38]. In 2019, a study on rats concluded that it has the potential to improve memory function in individuals with chronic alcoholism by reducing oxidative damage and inhibiting apoptosis in the hippocampus, indicating its potential as a treatment for cognitive impairment associated with alcohol abuse [39].
The aim of our study was to repeatedly assess the cognitive function of chronic alcohol users, at hospital admission and 2 weeks after they became abstinent and were under supportive treatment, in order to determine whether there were significant improvements. We also wanted to assess whether there was cognitive impairment in patients diagnosed with NAFLD and to examine the differences between the two groups.
Selection of Patients
For our analysis, we included patients hospitalized in Craiova's Clinical Neuropsychiatry Hospital for acute AWS who had a hospitalization period of more than 14 consecutive days, and patients diagnosed with NAFLD hospitalized in the Gastroenterology Department of the County Clinical Emergency Hospital of Craiova, Romania, for a routine check-up. As there is no definitive evidence of cognitive alteration among patients with NAFLD [26,40], we considered these patients our control group.
We conducted a prospective study. The inclusion period lasted for 12 consecutive months, from January 2022 to December 2022.
Patients with autoimmune liver disorders, chronic hepatitis B or C, liver cirrhosis, hepatocellular carcinoma or other malignancies were excluded due to altered values of hepatic enzymes. We also excluded all patients with signs of infection, due to possible misinterpretation of liver enzymes, as well as patients with an uncertain history of alcohol consumption.
All patients, or where needed a legal representative, signed the formal consent for taking part in this study.
The ethics committee of the University of Medicine and Pharmacy of Craiova, Romania, approved the current study protocol (no. 237/20 December 2021), which complied with all Declaration of Helsinki criteria.
Clinical Evaluation
For the screening of cognitive function, we used the MMSE, the test most commonly used to detect and evaluate the progress of a cognitive disorder associated with neurodegenerative diseases [41]. It includes five sections: orientation, attention, concentration and calculation, memory and language. The administration of this simple, structured scale requires no more than 5-10 min. The maximum total score is 30 points. The MMSE ranges were 24-30 for no cognitive impairment, 19-23 for mild, 10-18 for moderate and 0-9 for severe cognitive impairment [41][42][43][44]. The MMSE was administered at the time of admission to the clinics, and for the patients admitted with symptoms of acute alcohol withdrawal, a second evaluation was performed after two weeks. The evaluation was performed by S.M.
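The cutoffs above map directly onto a small helper function. The Python sketch below is ours for illustration only; the function name and error handling are not part of the MMSE specification:

```python
# Minimal helper reflecting the MMSE cutoffs cited above:
# 24-30 none, 19-23 mild, 10-18 moderate, 0-9 severe impairment.

def mmse_category(score: int) -> str:
    if not 0 <= score <= 30:
        raise ValueError("MMSE total score must be between 0 and 30")
    if score >= 24:
        return "no cognitive impairment"
    if score >= 19:
        return "mild cognitive impairment"
    if score >= 10:
        return "moderate cognitive impairment"
    return "severe cognitive impairment"

print(mmse_category(17))  # -> "moderate cognitive impairment"
```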
Definition of NAFLD and Ultrasound Assessment
NAFLD was defined by the presence of hepatic steatosis on ultrasound, excluding heavy-drinking individuals. All patients were evaluated by ultrasound conducted by a gastroenterologist with approximately 35 years of experience in intra-abdominal ultrasound (IUS), utilizing a Hitachi Arietta V70 ultrasonography system (Hitachi Ltd., Tokyo, Japan) along with the convex transducer. All the patients with NAFLD also had an evaluation of liver fibrosis using a FibroScan Mini+ 430 (Echosens, Paris). Patients were instructed to follow a fasting period of at least 6 to 8 h prior to the examination. The ultrasound assessment was focused on the liver of all patients hospitalized in the Gastroenterology Department of the County Clinical Emergency Hospital of Craiova, Romania. We used only the definition of NAFLD and not MAFLD, because the latter diagnosis applies to patients displaying both hepatic steatosis and any of the following metabolic conditions: overweight or obesity, diabetes mellitus or evidence of metabolic dysregulation in lean individuals, and does not exclude the use of alcohol [45].
Biological Analyses
From all the patients included in the study, a "fasting" blood sample was drawn by an experienced nurse from a peripheral vein by routine phlebotomy. These samples were used to determine the complete blood count (CBC), erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), aspartate aminotransferase (AST), alanine aminotransferase (ALT) and gamma-glutamyl transferase (GGT), all assessed by routine (automated) laboratory procedures. CRP, ESR and CBC were analyzed for all the patients considered for the study. Based on abnormal values that were not in line with acute withdrawal syndrome or the NAFLD diagnosis, we excluded 30 patients (9 from the AWS group and 21 from the NAFLD group) and referred them for further clinical and paraclinical testing.
Treatments
During their admission to the hospital, the AWS group patients were closely monitored by the medical and auxiliary staff. These patients had no access to alcoholic beverages or similar substances. The treatment they received included Cerebrolysin, which we determined in a previous experimental study to have protective effects on brain cells [46], as well as Vitamin B6 and B1 infusions and symptomatic treatment.
The NAFLD group did not require prolonged hospitalization, as the patients were attending solely for a routine check-up and did not receive any medication.
Statistical Analysis
Statistical analysis was performed using IBM SPSS Statistics Version 26 and Microsoft Excel 2021 (Microsoft Corp., Redmond, WA, USA). Unless noted otherwise, we show in figures and tables the mean value and standard deviation (SD). Statistical significance was considered at p < 0.05, with p < 0.01 and p < 0.001 reported where applicable.
Results
Our study included a total of 120 patients, with a preponderance of males, representing 74.16% of the participants. Among AWS patients (study group), there was a predominance of males, representing 83.3% of the total, while among patients with NAFLD (control group), the gender distribution was more balanced, with 53.3% of the participants being female. More than half of the AWS patients were from a rural area, while in the NAFLD group, the balance was in favor of the urban area, with 63.3% of patients living in cities. The hospitalization period was longer for AWS patients than for the control group (20.44 ± 14.59 days vs. 1.8 ± 0.95 days, with a minimum of 14 days and a maximum of 34 days vs. a minimum of 1 day and a maximum of 4 days). All the patients from the NAFLD group had their liver fibrosis level assessed by FibroScan; none of them had fibrosis, with a mean of 4.6 kPa and a standard deviation of 1.2 kPa.
Table 1 shows the descriptive data about the patients included in the study. In order to see if there was a correlation between the hospitalization period and the MMSE score at admission for the AWS group, we used a Pearson correlation coefficient. There was a significant, strong negative correlation between the two abovementioned variables, r(87) = −0.638, p < 0.0001. This shows that the lower the MMSE score was at admission, the longer the patients needed to be kept in the hospital in order to be monitored and receive treatment. The ones with the highest scores were discharged faster. In Figure 2, we present these results in a scatterplot for a better and more accurate representation. Tables 2 and 3 present the descriptive statistics of both groups. We performed an independent samples t-test comparing the mean GGT between the NAFLD group and the AWS group. There was a significant difference in mean GGT between alcohol consumers and the NAFLD patients (t(19.733) = 6.717, p < 0.001). The average GGT for alcohol consumers was 200.8 U/L higher than the average GGT for non-consumers. A one-way ANOVA was performed to evaluate the relationship between patients' MMSE scores, both at admission and after 2 weeks, and the age categories defined above. The means and standard deviations are presented in Table 4 below.
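For readers who want to reproduce this kind of analysis outside SPSS, the following Python sketch runs a Pearson correlation and a Welch (unequal-variances) t-test with SciPy. The arrays are randomly generated placeholders sized to mimic the reported degrees of freedom, not the study's raw data:

```python
# Illustrative re-implementation of the two tests above using SciPy.
# All data here are simulated placeholders, not the study's raw values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pearson correlation: admission MMSE vs. length of stay (n = 89 -> df = 87)
mmse_admission = rng.integers(9, 24, size=89)
hospital_days = 34 - mmse_admission + rng.normal(0, 3, size=89)
r, p_r = stats.pearsonr(mmse_admission, hospital_days)

# Welch's t-test for GGT: allowing unequal variances yields the fractional
# degrees of freedom seen in the reported t(19.733)
ggt_aws = rng.normal(260, 120, size=90)    # simulated AWS group
ggt_nafld = rng.normal(60, 20, size=30)    # simulated NAFLD group
t, p_t = stats.ttest_ind(ggt_aws, ggt_nafld, equal_var=False)

print(f"Pearson r = {r:.3f} (p = {p_r:.4g}); Welch t = {t:.3f} (p = {p_t:.4g})")
```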
The control group, consisting of patients with NAFLD, did not have any subject with cognitive impairment as measured by MMSE, with a minimum score of 28 and a maximum of 30 (see also Table 3).
The mean MMSE score at admission in patients with AWS was 16.60 ± 4.097, compared to 29.27 ± 0.785 in patients with NAFLD. A t-test was also run to determine whether there was a difference in cognition, measured by MMSE score, between the control group and the alcohol consumers' group after 2 weeks of withdrawal and treatment. The mean MMSE score at 2 weeks (M = 24.6, SD = 2.832) was lower than the MMSE score of the control group at admission (which had no participants with abnormal results), a statistically significant mean difference of 4.667, 95% CI [4.011 to 5.322], t(115.87) = 14.095, p < 0.001. The distribution of MMSE scores can be observed in Figure 3.
Figure 4 presents the scatterplot of MMSE at admission versus MMSE after 2 weeks of treatment, which shows a strong, positive, linear association between the two. There are not many outliers in the data. We analyzed the frequency of tremor type in correlation with the severity of cognitive impairment as indicated by the MMSE score; the results obtained are presented in Figure 5 and Table 4. A chi-square test of independence was performed to evaluate the relationship between the neurological exam, for which the results are described in Table 5, and the MMSE score at admission, as presented in Figure 6. The relationship between these variables was statistically significant. No correlation was found between MMSE at admission and environment, gender or age group for the AWS group.
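As an illustration of how such a chi-square test of independence can be computed, the sketch below uses SciPy on an invented contingency table of tremor type versus MMSE severity category; the counts are placeholders, not the study's data:

```python
# Illustrative chi-square test of independence (tremor type vs. MMSE
# severity). The contingency counts below are invented placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# rows: no tremor / fine tremor / coarse tremor
# columns: mild / moderate / severe cognitive impairment
observed = np.array([[12,  6,  2],
                     [10, 18,  7],
                     [ 3, 14, 18]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```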
Discussion
Peer pressure, defined as the attempt to compel an individual to make a decision based on a norm, still plays an active role in the gender gap regarding alcohol consumption [47]. Traditional masculine norms often include beliefs such as the importance of being tough, aggressive, dominant and self-reliant [48], which are associated with increased consumption of alcoholic beverages. Even if there is clear recent evidence that this difference between genders is becoming smaller [49], it remains a fact that, worldwide, men are more likely to be associated not only with alcohol consumption but also with heavy drinking [50]. On top of that, the latest statistics show high alcohol consumption in Eastern European countries, of which Romania is a part [51]. The findings in our study endorse these reports, as the group of alcohol consumers had a ratio of five men to one woman. However, we must admit that the results of our analysis show no correlation between the MMSE score of patients with acute symptoms of alcohol withdrawal and gender, even though some studies report a relationship between those variables [52].
High levels of GGT are traditionally linked with alcohol intake [53] and liver dysfunction [54]. According to Sueyoshi et al., GGT serum profile determination could be used for the differential diagnosis of alcohol-induced liver disease and NAFLD [55].
The updated concept and criteria of MAFLD could enable physicians to identify a larger cohort of patients at risk of adverse outcomes in clinical practice [45]. However, given the novelty of this concept, further investigation is necessary to evaluate its utility in clinical settings, especially in patients with cognitive disorders. As there are not many papers in the literature which tackle MAFLD and related cognitive impairment, we refer only to NAFLD [56,57]. Even if there is an elevation of the GGT, AST and ALT levels associated with excessive weight that, in time, can lead to NAFLD [58][59][60] and metabolic syndrome [61], the GGT elevation in chronic alcohol consumption is far greater [62]. Our target group of alcohol consumers had markedly higher GGT levels than their counterparts. This supports the hypothesis that GGT can be used as an accurate marker for alcohol consumption.
A meta-analysis in this area yielded diverse outcomes regarding the relationship between NAFLD and cognitive function; only individuals with biopsy-proven liver fibrosis showed a confirmed association with cognitive dysfunction [24,25]. In our study, we found no cognitive dysfunction in any of our patients diagnosed with NAFLD, although none of them had liver fibrosis as assessed by FibroScan; this may also reflect the small size of our group, which included only 30 individuals.
In our study, we determined that even after 2 weeks of alcohol withdrawal and adequate treatment, under the careful supervision of medical professionals, the patients from the AWS group performed poorly in the cognitive assessment measured by the MMSE score compared with the control group. A meta-analysis on the effect of alcohol on cognitive function showed that global cognitive dysfunction can be detected after weeks or even months of abstinence [12]. More research should be conducted in order to determine a more accurate recovery period for patients with chronic alcohol consumption.
Even though after 2 weeks the MMSE score of the AWS group was not comparable with that of the NAFLD group, a significant improvement in cognition was observed.
We can attribute part of this improvement to the fact that patients with AWS received a treatment which consisted of a cocktail of Cerebrolysin and Vitamins B1 and B6. The results from our study are consistent with those found by other researchers, which supports the hypothesis that cognitive impairment among chronic alcohol consumers is a problem that can improve with alcohol withdrawal and adequate therapy. The neurocognitive changes in neurological patients who received the drugs/supplements also chosen for our patients have captivated the interest of many researchers. Some studies proved the beneficial effect of Vitamin B1 on cognitive recovery in elders [63,64] and the impact of Cerebrolysin in the recovery of patients after neurological damage [35][36][37]46]. According to a couple of studies from the current literature, nutritional techniques or supplements may improve cognitive function, despite many unknown biochemical mechanisms [65][66][67]. An important nutrient is creatine, which is used by the brain and muscle mass when consumption increases. The mechanisms of creatine action include fast energy provision by transferring the N-phosphoryl group from phosphocreatine to adenosine diphosphate, restoring adenosine triphosphate and energy, which involves shifting the energy from the mitochondria to the cytosol [68]. Another micronutrient recently used in patients with cognitive disorders is beta-carotene, which acts as an antioxidant and has anti-inflammatory effects [69]. Further studies should be conducted in order to determine the mechanism of action and the adequate duration of treatment.
Tremors are one of the most frequently encountered clinical findings in neurological disorders [70]. In patients with acute alcohol withdrawal, they are often present at different levels of intensity [71], as we also pointed out in our target group. Notably, there was a strong correlation between the severity of the tremor and the MMSE score at admission, which shows, once again, that the more severe the cognitive impairment is, the more pronounced the clinical symptoms are.
The fact that we only surveyed our patients for 2 weeks and did not continue to monitor them afterwards is a flaw in our study design, as we did not manage to observe the final outcome of the patients.
On the other hand, we also managed to confirm that there is no cognitive impairment associated with NAFLD, as some studies have reported [24,25]. We must still mention that the NAFLD patients included in our study were not severe cases, and we did not perform a liver biopsy in order to determine the extent of the illness.
Another limitation to be mentioned is the fact that our studied group was not big enough to draw a generally accepted conclusion, so we would like to encourage researchers to continue the research in this field. As we mentioned before, the men-to-women ratio is quite disproportionate, which can be partially attributed to the alcohol-consuming pattern of people in Romania; moreover, in Romania, the man-to-woman ratio of alcohol consumers is even more disproportionate in rural areas. However, we consider it necessary for more investigations to be carried out in this direction.
Conclusions
Through our study, we concluded that GGT is a good marker for evaluating and differentiating alcohol consumers from non-consumers.
We also identified that, among alcohol consumers, there is cognitive dysfunction that is associated with the severity of the clinical symptomatology, represented in our study by tremors.
The two-week period of treatment and alcohol withdrawal was not enough for cognitive function to return to normal limits, even though there were significant improvements in most of the patients. The mechanism of action of Cerebrolysin and Vitamins B1 and B6 should be better described. More studies are needed in this area, and they should be conducted over longer periods of time.
Even though no cognitive impairment among NAFLD patients was observed in our study, we would like to encourage more researchers to study this matter, as these results are controversial.
Figure 1.
Figure 1. The mechanism proposed for Cerebrolysin effects on seizure-induced neuronal death. Increased brain-derived neurotrophic factor (BDNF) levels induced by Cerebrolysin can suppress glial cell activation, which is believed to take place after seizures induced by pilocarpine. A red circle and a red arrow indicate inhibition, and the green arrow indicates enhancement. Created with BioRender.com.
Figure 2.
Figure 2. Scatter plot of the correlation between MMSE score at hospital admission (MMSE t0) and hospitalization period for the AWS group.
Figure 3.
Figure 3. Distribution of MMSE scores of the control group at admission and MMSE scores of the alcohol consumers' group after 2 weeks of withdrawal and treatment. A Pearson correlation coefficient was calculated to evaluate the relationship between MMSE at admission and MMSE after 2 weeks of hospitalization. There was a significant, very strong positive relationship between the MMSE scores, r(88) = 0.881, p < 0.001. A paired samples t-test was performed to evaluate whether there was a difference between the MMSE score at admission and the MMSE score after 2 weeks of alcohol withdrawal and treatment (as mentioned before). The results indicated that the MMSE after 2 weeks of alcohol withdrawal and treatment (M = 24.60, SD = 2.832) was significantly higher than the MMSE score at hospital admission (M = 16.60, SD = 4.097), t(89) = 36.349, p < 0.001, showing a considerable improvement. We analyzed the frequency of the tremor type in correlation with the severity of cognitive impairment as indicated by the MMSE score; the obtained results are presented in Figure 5 and Table 4.
Figure 4.
Figure 4. Scatter plot of the association between MMSE score at admission in the hospital (MMSE t0) and MMSE score after 2 weeks of treatment (MMSE t2 weeks).
Figure 5.
Figure 5. Neurological exam dispersion according to the MMSE severity, represented in a bar chart.
Figure 6.
Figure 6. Scatter plot of the association between MMSE at admission (MMSE t0) and the neurological exam for the AWS group. Nota bene: the neurological exam was coded as follows: 0-Normal, 1-Extremities tremor, 2-Generalized tremor.
Author Contributions: Conceptualization, S.M. and C.-M.I.; methodology, S.M.; software, M.-A.P.; validation; formal analysis, I.R.; investigation, S.M.; resources, S.M.; data curation, C.-M.I.; writing-original draft preparation, M.-A.P.; writing-review and editing, C.-M.I.; visualization, M.-A.P.; supervision, D.-N.F. and I.R.; project administration, I.R. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Institutional Review Board Statement: The University of Medicine and Pharmacy of Craiova, Romania's ethical committee approved the current study protocol (no. 237/20 December 2021), which complied with all Declaration of Helsinki criteria. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Table 1.
General characteristics of patients.
* Expressed as mean ± standard deviation.
Table 2.
Descriptive statistics of the AWS group.
Table 3.
Descriptive statistics of the NAFLD group.
Table 4.
Descriptive statistics of the MMSE at admission (MMSE t0) and MMSE after 2 weeks (MMSE t2) of the AWS group, split according to age groups.
Table 5.
Neurological exam and MMSE category at admission in the AWS group. | 2024-05-19T15:02:16.334Z | 2024-05-17T00:00:00.000 | {
"year": 2024,
"sha1": "57b9e905fbbf11c8fda79156dcc52a499c1ec907",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2036-7422/15/2/31/pdf?version=1715958253",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e207a7720fcfe932512b3be32b2f1fe5738e8165",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
116908116 | pes2o/s2orc | v3-fos-license | CKM matrix elements from tree-level decays and b-hadron lifetimes
We give an updated summary of the topics covered in Working Group I of the 2nd Workshop on the CKM Unitarity Triangle, with emphasis on the results obtained since the 1st CKM Workshop. The topics covered include the measurement of |V_{ub}|, of |V_{cb}| and of non-perturbative Heavy Quark Expansion parameters, and the determination of b-hadron lifetimes and lifetime differences.
Introduction
Uncovering the origin of flavor mixing and CP violation is one of the main goals in elementary particle physics today. As part of this program, the constraining of the CKM unitarity triangle through the redundant measurement of its angles and sides plays a central rôle. The aim of the present summary is to review the state of the art regarding the measurement of the sides |V ub | and |V cb |, as it was discussed in Durham at the 2nd Workshop on the CKM Unitarity Triangle, taking into account developments which have occurred since then.
The determination of |V ub | and |V cb | from inclusive decays relies heavily on the Heavy Quark Expansion (HQE). Many of the assumptions underlying these calculations can be tested by comparing HQE predictions for b-hadron lifetimes to their experimental values. Such comparisons are also useful for testing lattice calculations of non-perturbative hadronic matrix elements which also enter these predictions. The study of these lifetimes has thus been included in the subjects covered by our working group. Of course, lifetimes are important in themselves. They are required to convert branching fractions into the rates necessary for the determination of |V ub | and |V cb |. Moreover, lifetime differences of neutral B s and B d mesons may be helpful for uncovering new physics.
Spectral moments of inclusive B-decay spectra are also covered, as they too play an important rôle in testing the HQE. They further provide experimental determinations of many of the non-perturbative parameters which appear in these expansions. As the latest spectral moment studies described below show, we are entering a new era, where the currently achieved experimental precisions require a much tighter interplay between experiment and theory. A similar interaction is taking place in the exclusive determination of |V ub |, as made clear by the latest CLEO results [2], where non-perturbative methods such as light-cone sum rules and lattice QCD are being put to serious test.
The remainder of this summary is organized as follows. In Section 2 we review inclusive determinations of |V ub |. Moments of inclusive B-decay spectra are discussed in Section 3, while the measurement of |V ub | from exclusive semileptonic B decays is the subject of Section 4. In Section 5 we review exclusive determinations of |V cb | and b-hadron lifetimes and lifetime differences in Section 6. We end with a brief conclusion in Section 7.
2 Inclusive determinations of |V ub |

2.1 Inclusive |V ub |: theory

The status of theoretical calculations relevant for the determination of |V ub | from inclusive B → X u ℓν decays is nicely summarized in the contribution by Luke [3]. The main problem is the need for severe phase-space experimental cuts to eliminate the approximately 100 times larger B → X c ℓν background: these cuts tend to destroy the convergence of the HQE used to describe these decays. Theoretical expressions exist for a variety of cuts:
• E ℓ > (m B ² − m D ²)/(2m B ) ≈ 2.31 GeV, where E ℓ is the lepton energy;
• m X < m D , where m X is the invariant mass of the final hadronic system;
• q² > (m B − m D )² ≈ 11.6 GeV², where q is the four-momentum of the leptons;
• combined (q², m X ) cuts.
As detailed in [3], all of these cuts have advantages and disadvantages, some experimental, others theoretical. Because the different methods for measuring |V ub | have different sources of uncertainty, agreement between the measurements (including those obtained from exclusive decays) will give us confidence that both theoretical and experimental errors are under control.
The main theoretical development since the last CKM workshop [1] is the study of 1/m b corrections in ratios of rates such as

R(E cut ) = Γ(B → X u ℓν; E ℓ > E cut ) / Γ(B → X s γ; E γ > E cut ), (1)

where the cut can be on the lepton energy, as in Eq. (1), or on the hadronic invariant mass. These ratios are important because the dependence of the individual rates on the universal, non-perturbative shape function f (k + ), which describes the distribution of the light-cone component of the residual momentum of the b quark, cancels up to perturbative and subleading-twist corrections. This allows for a model-independent determination of |V ub | (or rather of |V ub |/|V tb V * ts |) at that order.
Obviously, the accuracy of these methods depends on the size and uncertainty of subleading corrections. Perturbative corrections are dominated by large Sudakov logarithms which were summed to subleading order some time ago [4,5]. 1/m b corrections, on the other hand, have only been studied recently. There are, in fact, two large effects.
The first is a higher-twist correction which arises at O (1/m b ) [6][7][8][9]. For a cut E ℓ > 2.3 GeV, models indicate that it leads to an order 15% upward shift in the value of |V ub |. Though substantial, this correction may be insensitive to the model chosen for the subleading distribution function [9]. Moreover, these effects can be circumvented for a large part in hadronic-mass-cut measurements [10].
The second effect is due to a weak annihilation (WA) contribution and arises at relative order 1/m 2 b [11,12]. This contribution is the first of an infinite series which re-sums into a subleading distribution function [8]. For a cut E ℓ > 2.3 GeV, it is estimated to be roughly 10% with unknown sign [8]. Both this and the higher-twist correction are significantly reduced when the cut on E ℓ is lowered below 2.3 GeV.
As emphasized by Luke [3], experiment itself can be used to further reduce theoretical errors in the inclusive measurement of |V ub |. Studying the dependence of |V ub | on the lepton cut, for instance, can help test the size of subleading twist contributions. Moreover, comparing the value of |V ub | extracted from inclusive semileptonic decays of charged and neutral B mesons will give a handle on WA contributions. Of course, more precise measurements of the photon spectrum in B → X s γ will improve the determination of |V ub | from ratios such as the one in Eq. (1). And for (q 2 , m X ) cuts, better determinations of m b from moments of inclusive B decay rates will reduce the largest source of uncertainty in the corresponding theoretical expressions.
Inclusive |V ub |: experiment
There have been many new results and analyses since the last CKM workshop, notably from BABAR and BELLE [13], as presented by Sarti [14] and Kakuno [15].

BABAR 2003: lepton endpoint [17,14]

The measurement is based on 20.6 fb −1 of on-peak and 2.6 fb −1 of off-peak data. The cut on the charged lepton energy is 2.3 GeV ≤ E ℓ ≤ 2.6 GeV. They measure a partial branching fraction ∆B(B → X u ℓν) = (0.152 ± 0.014 ± 0.014) · 10 −3 , where the systematics come from, in order of importance: the estimate of the efficiency; the continuum background subtraction; the variations in the beam energy; the BB background modeling. They use CLEO's determination [18] of the fraction, f u , of the spectrum that falls into their momentum interval to obtain the full rate, and the PDG 2002 average B lifetime [19] to measure |V ub |. From these inputs they extract |V ub |.
BABAR 2003: m X cut with fully reconstructed B's [14]

This analysis is based on about 88 million BB events (82 fb −1 ) in which one of the B's is fully reconstructed through decays of the form B → D ( * ) hadrons, while the semileptonic decay is measured on the opposing B with a cut p ℓ > 1 GeV and m X < 1.55 GeV, the latter being optimized to reduce the total error. This allows one to reconstruct both the neutrino and the hadronic system X and to separate charged and neutral B mesons. It has the advantage of giving a large phase-space acceptance and a high purity of the sample. To reduce systematics due to uncertainties in the efficiency, they normalize the signal by the total semileptonic branching ratio. They use CLEO's determination of Λ̄ and λ 1 [18] to determine their signal selection efficiency and to extrapolate the partial rate to the full phase space. Using BABAR's semileptonic branching fraction [20] and the PDG 2002 average B lifetime [19], they obtain a value of |V ub | which is the most precise determination to date.
BELLE 2003: m X cut with B → D ( * ) ℓν tagging [13,15]

This analysis is based on a sample of approximately 84 million BB pairs (78.1 fb −1 ). Though it leads to a two-fold degeneracy in the decaying B meson direction, which results from the presence of a second neutrino, B → D ( * ) ℓν tagging improves on the efficiency of full reconstruction without degrading the m X resolution. With a cut on the signal lepton momentum p ℓ > 1 GeV and on the hadronic recoil mass m X < 1.5 GeV, BELLE obtains the preliminary branching ratio B(B → X u ℓν) = (2.62 ± 0.63 stat ± 0.24 syst ± 0.39 extrap. ) · 10 −3 . This implies |V ub | = (5.00 ± 0.60 stat ± 0.24 syst ± 0.39 extrap. ± 0.36 HQE ) · 10 −3 .
BELLE 2003: (q 2 , m X ) cut with advanced neutrino reconstruction [13,15]

This method is introduced to increase efficiency while avoiding the degradation in (q 2 , m X ) resolution brought about by hermiticity-based neutrino reconstruction. Events with only one charged lepton (e or µ) are retained. The neutrino momentum is then calculated by subtracting the four-momenta of all reconstructed particles from that of the Υ(4S). This calculation is improved by reconstructing the other B decay through a simulated annealing technique. The signal region is then defined by m X < 1.5 GeV and q 2 > 7 GeV 2 . The resulting branching fraction is B(B → X u ℓν) = (1.64 ± 0.14 stat ± 0.46 syst ± 0.22 extrap. ) · 10 −3 , yielding |V ub | = (3.96 ± 0.17 stat ± 0.56 syst ± 0.26 extrap. ± 0.29 HQE ) · 10 −3 . These results are combined with CLEO's (q 2 , m X ) determination to obtain a B-factory value for |V ub | from inclusive B decays which improves on the LEP average, |V ub | = (4.09 ± 0.70) · 10 −3 . All of the relevant measurements are summarized in Figure 1.
Moments of inclusive B decay spectra and |V cb |
Since different moments depend differently on the various parameters or non-perturbative quantities which appear in the HQE describing inclusive B decays, a measurement of moments allows a determination of these quantities and numerous consistency checks of the HQE. One should not forget, however, that there is an assumption behind these determinations: in order to be sensitive to non-perturbative 1/m b corrections which are formally smaller than any term in perturbation theory, one must assume that the former are larger than the neglected perturbative terms.
The moments which have been considered most frequently up to now are moments of the photon energy spectrum in B → X s γ decays, moments of the charged lepton energy spectrum and of the hadronic recoil mass in B → X c ℓν decays. Moments analyses were pioneered by CLEO [22]. Today, a sufficient variety of moments have been measured to allow a global fit to the corresponding HQE expressions up to and including 1/m 3 b terms [23,24]. Experimental aspects of the subject were reviewed very nicely by Calvi [25] at this workshop and Luke [3] and Uraltsev [26] provided very interesting discussions of some of the theoretical issues involved. Results from BABAR were presented by Luth [27] and from CLEO by Cassel [28]. The subject was also covered quite extensively in the proceedings of the 1st CKM workshop [1].
The main novelty since the last workshop are the global fits mentioned above, whose results were actually summarized in the proceedings [1]. In [24], the authors use preliminary DELPHI measurements of the first three moments of the hadronic mass and charged lepton energy spectra [29] and obtain values for |V cb | and the quark masses in a fit where no expansion in 1/m c is performed; matrix elements up to O(1/m 3 b ) are also obtained. The masses given here are the running kinetic masses. The corresponding 1S mass for the b is m 1S b = 4.69 ± 0.08 GeV. The quality of their fit indicates the consistency of the HQE description of these moments at the order considered. In [23], the authors use a total of 14 moment measurements from CLEO [30], BABAR [31] and DELPHI [29]. Imposing the constraint on m b − m c given by the B ( * ) and D ( * ) masses, thereby introducing a 1/m c expansion, they obtain a value of |V cb | along with matrix elements up to O(1/m 3 b ). The 2-3% accuracy of these |V cb | measurements is impressive. They are currently limited by the accuracy of the moments measurements and should therefore be improved in the near future with additional data from the B factories.
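As a schematic illustration of how such global fits work, the sketch below minimizes a weighted least-squares chi-square for a linearized toy model; the real HQE expressions for the moments are nonlinear functions of the quark masses and matrix elements, and all numbers here are made up.

```python
# Toy linearized global fit: moments ~ A @ params, with params standing in for
# shifts of (m_b, mu_pi^2, overall normalization). All numbers are made up.
import numpy as np

A = np.array([[0.8, 0.3, 1.0],
              [0.5, 0.9, 1.0],
              [0.2, 0.6, 1.0]])
y = np.array([1.02, 0.97, 1.01])              # "measured" moments
cov = np.diag([0.02, 0.03, 0.02]) ** 2        # experimental covariance

W = np.linalg.inv(cov)                         # chi2 = (y - A p)^T W (y - A p)
p = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # best-fit parameters
p_err = np.sqrt(np.diag(np.linalg.inv(A.T @ W @ A)))
print("best fit:", p, "errors:", p_err)
```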
It is important to note that the results of [23] include the m 2 X moment measurement of BABAR presented at this workshop by Luth [27], whose dependence on the charged lepton energy cut appears to be in contradiction with the HQE. This measurement is based on a sample of 55 million BB pairs. In these events, one of the B's is fully reconstructed while the semileptonic decay of the second is identified by a high momentum charged lepton. The discrepancy arises when the HQE parameters are fixed at E cut ℓ = 1.5 GeV and the predicted moment is compared with data for lower values of E cut ℓ , as shown in Figure 2.

Figure 2. BABAR's m 2 X moments [27] vs. lepton energy cut (squares), compared with the HQE expansion obtained [23] by fixing its parameters with CLEO's E cut ℓ = 1.5 GeV, m 2 X [22] and B → X s γ, E γ measurements [32].
A possible resolution was proposed by Luke and collaborators [23]. The measurement depends on the assumed spectrum of excited D resonances, in which there are no contributions from excited states with masses below ∼ 2.4 GeV. The addition of a nonnegligible fraction of excited D states with masses less than 2.45 GeV could help reconcile the discrepancy.
Another solution to this problem was put forward by Uraltsev [26]. He emphasized that the convergence of the HQE is governed by the maximum energy release or hardness of the moment considered. For E ℓ = 1.5 GeV, the hardness is only 1.25 GeV, and it is smaller than 1 GeV for E ℓ > 1.7 GeV, implying rather poor convergence in this region of phase space. And the hardness only decreases for higher moments. Thus, the problem with the BABAR moment measurement is not at low E ℓ , but rather at high E ℓ , where the matching to the HQE is actually performed. His recommendation is therefore to perform comparisons to the HQE at the lowest practical value of E ℓ .
The main message of this discussion is that experimental groups should strive as much as possible to obtain model-independent measurements of these spectral moments and the applicability of the HQE in dangerous regions of phase space should be considered with care.
Since this discussion took place, new measurements of hadronic moments as a function of lepton-energy cut have been presented by BABAR [33] and CLEO [34]. Both these measurements are preliminary. CLEO derives the m 2 X moment for a number of E cut ℓ in the range of 1 to 1.5 GeV from the branching fractions and average hadron mass distributions of a number of charm meson resonant and non-resonant states, as advocated in [22]; the resulting moments are shown in Figure 3. BABAR has inaugurated a new method in which the m X and m 2 X moments are extracted directly from the measured m X and m 2 X distributions. This analysis reduces dependence on the mass distributions and branching fractions of individual charm states, which are poorly known for higher mass states. Combining their results for m 2 X with their earlier measurements of semileptonic branching ratios and B lifetimes, they obtain a value of |V cb | in good agreement with the results of Eqs. (7) and (8).
As a result of several changes to their analysis and data selection, BABAR find that their new results for m 2 X vs. E cut ℓ fall substantially below those reported in [27] and depicted in Figure 2 at low E cut ℓ . This has the effect of reconciling experiment with theory, as shown in Figure 3, where BABAR and CLEO's results are plotted together with the theoretical prediction constrained by CLEO's measurement at E cut ℓ = 1.5 GeV and the first B → X s γ photon energy moment [32]. Agreement between the two experiments is excellent.
|V ub | from exclusive semileptonic B decays
Many new measurements of exclusive b → uℓν decays using new techniques have been presented recently, with more to come. These were very nicely reviewed by Gibbons/Cassel [35], with presentations from BABAR, BELLE and CLEO by Schubert [36], Schwanda [37] and Gibbons/Cassel [35]. The improving statistics of experiments are beginning to permit the measurement of partial rates as a function of the squared invariant mass of the lepton pair, q 2 , allowing a reduction of the dependence of the measured rates and |V ub | on the still rather poorly known theoretical form factor shapes. Such measurements also help eliminate incorrect form factor models. The overall normalizations of the form factors, however, cannot be tested experimentally and dominate the extraction of |V ub | from measured rates. One therefore needs model-independent determinations of these form factors, such as those which upcoming, unquenched lattice QCD calculations should provide.

Figure 3. Comparison of BABAR '03 [33], CLEO '03 [34] and CLEO '01 [22] measurements of the moment of the hadron invariant mass spectrum vs. lepton energy cut. The theory bands shown in the figure reflect the variation of the experimental errors on the two constraints, the variation of the third-order HQET parameters by the scale (0.5 GeV) 3 , and the variation of the size of the higher order QCD radiative corrections [23]. Figure taken from [34].
Exclusive |V ub |: theory
The theory of exclusive, semileptonic b → uℓν has evolved little since the publication of the proceedings from the last CKM workshop [1]. The status of lattice QCD (LQCD) calculations of B 0 → π − (ρ − )ℓ + ν form factors was very nicely reviewed by Onogi [38] and that of light-cone sum-rule (LCSR) calculations by Ball [39]. One important feature of lattice calculations is that they are currently limited to smaller recoils (q 2 ≳ 10 GeV 2 ) while LCSR calculations are more reliable at larger recoils (q 2 ≲ 15 GeV 2 ).
The current situation regarding quenched lattice calculations of B 0 → π − form factors is summarized in Figure 4. There is good agreement amongst the different methods used to obtain f + (q 2 ), which determines the rate in the limit m ℓ = 0, with errors at the level of 15-20%. Agreement is also good with the recent LCSR results of [40] (see also [41]). Agreement is less clear for the lattice results for f 0 (q 2 ), due to the sensitivity of this form factor to light and heavy quark masses.
The situation is more difficult for B 0 → ρ − ℓ + ν form factors, both with LQCD and LCSR. Moreover, quenching effects may be more important here than in B → π decays because the ρ cannot decay into two π in the quenched theory. Nevertheless, quenched lattice calculations do provide a first estimate of the relevant matrix elements which is worth considering. While there are a number of older lattice calculations [47][48][49], for clarity we only show in Figure 5 the recent, preliminary results of the SPQcdR collaboration [50], obtained at two values of the lattice spacing. The small dependence on lattice spacing of A 1 , which dominates the rate at large q 2 , indicates that discretization errors on this form factor are small. Similar results have been obtained recently by the UKQCD collaboration [51]. Also shown in Figure 5 are the LCSR results of [52]. These results look like a rather natural extension of the lattice results to smaller q 2 , suggesting rather good agreement between the two methods.
Another interesting feature of the lattice B → ρ calculations is their agreement with SCET constraints such as those of [53][54][55][56], as shown in Figure 6.
Figure 5. Example of quenched lattice results for B 0 → ρ − ℓ + ν form factors plotted as a function of q 2 [50]. These results were obtained at two values of the inverse lattice spacing, 1/a = 3.7 GeV and 2.7 GeV, corresponding to bare coupling values β = 6.45 and 6.2, respectively. Also shown at low q 2 are the light-cone sum rule results of [52].

To extend lattice results to smaller values of q 2 in a model-independent way, one can make use of dispersive bounds [57,58]. While there are ways of improving these bounds, for the moment they do not provide sufficient accuracy. One may therefore wish to consider a combination of LCSR results at low q 2 and quenched LQCD results at high q 2 . This approach offers a reasonably reliable determination of the form factors over the full kinematic range and has already been used for |V ub | determinations [2]. To avoid the problem of "extrapolating" lattice results to lower values of q 2 altogether, one can also consider extractions of |V ub | from the partial rates measured for q 2 ≳ 12 GeV 2 , as was suggested in [49] and already implemented in [2].
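To make the high-q 2 strategy concrete, the sketch below integrates the standard massless-lepton rate dΓ/dq 2 = G F 2 |V ub | 2 p π 3 |f + (q 2 )| 2 /(24π 3 ) above q 2 = 12 GeV 2 and inverts a hypothetical measured partial branching fraction for |V ub |; the form factor parameterization and all input numbers are made up for illustration.

```python
# Toy extraction of |V_ub| from the q^2 > 12 GeV^2 partial rate of B -> pi l nu.
# dGamma/dq^2 = G_F^2 |V_ub|^2 p_pi^3 |f_+(q^2)|^2 / (24 pi^3), massless leptons.
# The f_+ shape parameters and the "measured" partial rate below are made up.
import numpy as np
from scipy.integrate import quad

GF, mB, mpi, mBstar = 1.16637e-5, 5.279, 0.1396, 5.325   # GeV units
hbar_GeV_s = 6.582e-25                                    # hbar in GeV*s

def p_pi(q2):   # pion momentum in the B rest frame
    return np.sqrt(max((mB**2 + mpi**2 - q2)**2 / (4 * mB**2) - mpi**2, 0.0))

def f_plus(q2, f0=0.26, alpha=0.5):   # pole-type shape, made-up parameters
    return f0 / ((1 - q2 / mBstar**2) * (1 - alpha * q2 / mBstar**2))

def dGamma_dq2(q2):   # in GeV per GeV^2, for |V_ub| = 1
    return GF**2 * p_pi(q2)**3 * f_plus(q2)**2 / (24 * np.pi**3)

rate_coeff, _ = quad(dGamma_dq2, 12.0, (mB - mpi)**2)     # partial rate / |V_ub|^2
meas_partial_BR, tau_B = 0.4e-4, 1.54e-12                 # hypothetical inputs
Vub = np.sqrt(meas_partial_BR / (rate_coeff * tau_B / hbar_GeV_s))
print(f"toy |V_ub| = {Vub:.2e}")
```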
As they stand, quenched LQCD and LCSR results have errors of order 20%, which is not sufficient given the quality of the experimental measurements to come. While significant improvement of LCSR results cannot be expected, lattice predictions can and will be improved. Other than the issue of extending lattice results to smaller values of q 2 , one of the main issues in these calculations is that of quenching. Indeed, only fully unquenched lattice calculations will provide completely model-independent determinations of the relevant form factors. Partially unquenched results with two flavors of Wilson sea quarks are expected soon from JLQCD and UKQCD, and three Kogut-Susskind (KS) flavor calculations based on MILC configurations should also be forthcoming. While JLQCD and UKQCD will be limited to light quark masses ≳ m s /2, the MILC configurations extend down to ∼ m s /8. This means that the uncertainties associated with the necessary extrapolations to the physical u and d quark masses should be much smaller in the calculations performed on these configurations. On the other hand, the methods used to produce the MILC configurations may introduce non-localities, and KS fermions suffer from flavor violations which can be accounted for but which significantly complicate chiral extrapolations.
Another important avenue to explore to reduce errors in LQCD calculations are ratios of semileptonic B and D meson rates, as many systematic and statistical errors are expected to cancel in such ratios. Results for B mesons can then be recovered by combining these lattice ratios with the high-precision measurements of D decays promised by CLEO-c. For a more complete discussion of both LQCD and LCSR calculations, please see the CKM workshop yellow book [1] and the reviews by Onogi [38] and by Ball [39].
Exclusive |V ub |: experiment
As already mentioned, the last year has seen many new measurements of exclusive b → uℓν decays, many of which are still preliminary. All of these measurements make use of detector hermiticity to reconstruct the four-momentum of the neutrino. They are:
• BABAR 2003 [36], reported at this workshop by Schubert: measurement of the B → ρℓν rate with an on-resonance integrated luminosity of L on = 50.5 fb −1 , an off-resonance luminosity of L off = 7.8 fb −1 and the following cut on the lepton momentum: 2.0 GeV < p ℓ < 2.7 GeV.
• CLEO 2003, reported at this workshop by Gibbons/Cassel [35]: measurements of B → πℓν and B → ρℓν rates based on a sample of 9.7 million BB pairs with p ℓ > 1.0 GeV for pseudoscalar final states and p ℓ > 1.5 GeV for vector final states. This study pioneers a new method in which rates are measured independently in three q 2 bins, yielding reduced model-dependence and allowing for model discrimination.
CLEO's results for the q 2 dependence of the partial rates for B 0 → π − ℓ + ν and B 0 → ρ − ℓ + ν are shown in Figure 7, as obtained using different form factor calculations to estimate efficiencies. The results for B 0 → π − ℓ + ν show negligible dependence on the calculation used, indicating that their binning method has essentially eliminated form factor dependence. The situation is less good for B 0 → ρ − ℓ + ν decays, likely as a result of the cut on the angle between the lepton and the W directions [35]. The poor χ 2 of the ISGW2 model [60] fit to the B 0 → π − ℓ + ν rate, and of the Melikhov et al. model [61] and Ball et al. LCSR [52] fits to the B 0 → ρ − ℓ + ν rate, indicates that these theoretical descriptions of the form factors are disfavored by the data.
BELLE also has a determination of dΓ/dq 2 (B 0 → π − ℓ + ν) as a function of q 2 as shown in [59]. Unlike CLEO, BELLE determines its efficiency without binning in q 2 . The model-dependence of their result is therefore expected to be more important, though it has not yet been determined.
Figure 7. The dΓ/dq 2 distributions obtained in the CLEO '03 analysis for B 0 → π − ℓ + ν (left) and B 0 → ρ − ℓ + ν (right). Shown are the variations in the extracted rates (points) for form factor calculations that have significant q 2 variations, and the best fit of those shapes to the extracted rates (histograms). Plot taken from [35].

A compilation of results for |V ub | obtained from exclusive B → X u ℓν decays is shown in Figure 8. Gibbons refrains from giving an average number because the current information provided by the different experiments is insufficient to determine the size of correlations in their results. Indeed, there are a number of common systematics which could lead to large correlations [35]. These include:
• the common use of the ISGW2 model [60] and the Neubert-Fazio model [62] to determine the b → uℓν background coming from feed-down modes not considered;
• the common GEANT base for detector simulation;
• common signal models: LCSR, LQCD, quark models.
The first item should probably be treated as a correlated systematic. To deal with the issue of common signal models, it seems appropriate to first average the rates and |V ub | obtained by the different experiments for a given model and then combine the measurements.
Nevertheless, the good agreement between the different experiments in their measurements of |V ub | and in the branching ratios from which these measurements were obtained is encouraging. It should be noted, however, that all of these |V ub | determinations are systematically below those obtained from inclusive decays.
With the growing data sets from the B factories, fully reconstructed B-tag analyses, such as those used in the study of inclusive B → X u ℓν decays, will become possible. This will reduce background significantly, allowing for selection criteria which yield a more uniform efficiency. Consequently, systematic uncertainties associated with form factor uncertainties and detector and background modeling will be reduced. At that point, measurement of exclusive B → X u ℓν decays may yield the most accurate determinations of |V ub |.
Exclusive |V ub | from hadronic B decays
At the workshop, Mikami from BELLE [63] suggested measuring |V ub | from wrong-charm exclusive B̄ 0 → π + D − s decays [64] and semi-inclusive B̄ 0 → X + u D − s decays [65], and presented measurements for the relevant rates and yield. These decays occur through the tree-level b → ucs diagram. The corresponding measurements of |V ub | are meant as consistency checks for the usual semileptonic determinations and for the methods used in the calculation of non-leptonic B decays.
In the semi-inclusive case, the idea would be to obtain |V ub /V cb | from the endpoint of the D s spectrum in B̄ 0 → X + D − s . The advantage with respect to the semileptonic endpoint measurement is that the signal fraction is larger, with more than 50% of the spectrum for B̄ 0 → X + u D − s beyond the kinematic limit for B̄ 0 → X + c D − s [65]. These semi-inclusive decays also have higher statistics than the exclusive mode.
The problem with both the exclusive and semi-inclusive proposals for determining |V ub | is that the theoretical formalism to describe the corresponding decays does not yet exist. Indeed, BBNS factorization [66] does not apply to B 0 → π − D + s , because the π − contains the spectator d quark. The situation is even more complicated for the semi-inclusive case.
More promising, at least theoretically, is the fully inclusive b → ucs′, as first proposed in [67]. In [68], it is shown that when the rate is normalized by the inclusive semileptonic b → c rate, the corresponding theoretical expressions have a well-behaved HQE. This is, of course, a very challenging measurement to make. However, because it is so theoretically clean, it would be interesting to investigate its experimental feasibility.
|V cb | from exclusive decays
Experimental aspects of these determinations were reviewed very nicely at this workshop by Oyanguren [69], and lattice results for the relevant decay form factors were very nicely summarized by Onogi [38]. The status of these measurements, as well as the theory behind them, has not evolved significantly since the publishing of the CKM workshop yellow book [1]. There are no new measurements and no new calculations of F (1) and G(1), the values of the B → D * ℓν and B → Dℓν form factors at zero recoil. Thus, F (1) = 0.91(4) and G(1) = 1.04(6) [1]. And the extrapolation of the measured rate to the zero-recoil point, which is required to obtain |V cb |, is still best done with the model-independent, dispersive parameterizations of [70,71], which are given in terms of a single parameter each: the slope, ρ D * or ρ D , of the relevant form factor.
The Heavy Flavor Averaging Group has averaged the results for |V cb | and the slopes ρ D * and ρ D obtained by the different experiments, after rescaling them to common inputs [21]. They find

|V cb | = (42.6 ± 0.6 stat ± 1.0 syst ± 2.1 thy ) · 10 −3 , ρ D * = 1.49 ± 0.05 stat ± 0.14 syst

from B → D * ℓν decays, and

|V cb | = (40.8 ± 3.6 expt ± 2.3 thy ) · 10 −3 , ρ D = 1.14 ± 0.16 expt

for B → Dℓν decays. These measurements are in good agreement with each other as well as with those obtained using inclusive B → X c ℓν decays (e.g. Eqs. (7) and (8)), though errors on the exclusive measurements are currently larger. These errors are dominated by the uncertainties in the theoretical determination of the form factors at zero recoil. It is thus important that the quenched lattice calculations [72], which enter the determination of these form factors as explained in [1], be repeated by other groups and be unquenched. On the experimental side, the limiting systematics are inputs such as the b → B̄ 0 and Υ(4S) → B 0 B̄ 0 rates, the contributions of the D * * and the D decay branching ratios.
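Schematically, such averages are inverse-variance weighted combinations performed after rescaling to common inputs; the following minimal sketch ignores the correlations that matter in the real average, and the per-experiment inputs are made up.

```python
# Minimal inverse-variance weighted average, as used (schematically) for the
# |V_cb| combinations quoted above; inputs are made up and correlations,
# which matter in the real average, are ignored here.
import numpy as np

values = np.array([41.5e-3, 43.0e-3, 42.8e-3])   # hypothetical |V_cb| results
errors = np.array([1.8e-3, 1.5e-3, 2.0e-3])      # total uncorrelated errors

w = 1.0 / errors**2
mean = np.sum(w * values) / np.sum(w)
err = 1.0 / np.sqrt(np.sum(w))
chi2 = np.sum(w * (values - mean)**2)            # consistency check
print(f"average = ({mean*1e3:.1f} +/- {err*1e3:.1f}) x 10^-3, chi2 = {chi2:.2f}")
```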
It should be noted that a new set of very interesting lower bounds has been derived for the moduli of the derivatives of the Isgur-Wise function, ξ(w), as explained by Oliver at this workshop [73]. They are based on derivatives of non-zero recoil sum rules à la Uraltsev [74]. In particular, it is shown that the n-th derivative at zero recoil, ξ (n) (1), can be bounded by the (n − 1)-st one, and that one obtains an absolute lower bound on the n-th derivative, (−1) n ξ (n) (1) ≥ (2n + 1)!!/2 2n . Moreover, these bounds are compatible with the dispersive parameterizations of [70,71] and reduce the allowed range of parameters, though it should be noted that the latter include finite mass corrections which are absent in the new bounds. It would be interesting to investigate how radiative corrections and subleading corrections in powers of 1/m c affect these bounds.
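The absolute bound is straightforward to evaluate; the n = 1 case reproduces the well-known slope bound of 3/4.

```python
# Evaluate the lower bound (-1)^n xi^(n)(1) >= (2n+1)!!/2^(2n) quoted above.
from math import prod

def odd_double_factorial(n: int) -> int:
    """(2n+1)!! = 1*3*5*...*(2n+1)."""
    return prod(range(1, 2 * n + 2, 2))

for n in range(1, 5):
    print(f"n = {n}: bound = {odd_double_factorial(n) / 4**n:.4f}")
# n = 1 -> 0.75 (the familiar slope bound), n = 2 -> 0.9375, n = 3 -> 1.6406, ...
```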
b-hadron lifetimes and lifetime differences
The current experimental situation for b-hadron lifetimes and lifetime differences was very nicely reviewed by Rademacker at this workshop [75]. The status of both experiment and theory has not changed significantly since the publication of the CKM Workshop yellow book [1].
Lifetimes
On the theory side, there are no new results, and regarding experiment, the halving of errors on τ B + and τ B 0 brought about by the measurements of the B factories based on 1999-2001 data was already taken into account in [1]. There are, nevertheless, new measurements from BABAR [76][77][78], CDF [79] and DELPHI [80] for both τ B 0 and τ B + , and from D0 for τ B + [81]. CDF has also reported new determinations of τ Bs and τ Λ b [79]. These results are summarized in Table 1. The new BABAR measurements are based on partial reconstruction of the B mesons, either through B 0 → D * − (π + , ρ + ), where only the D 0 in D * − → D 0 π − is reconstructed [76]; through B 0 → D * − ℓ + ν [77]; or in the di-lepton channel [78]. Partial reconstruction works thanks to the decay kinematics at the Υ(4S).

Table 2. World averages of b-hadron lifetime measurements, together with the theoretical predictions reviewed in [1]. All averages are from [82], except for τ Bs /τ B 0 and τ Λ b /τ B 0 which are taken to be the ratio of the corresponding world average lifetimes.

The world averages are collected in Table 2 [82]. The corresponding lifetime ratios are also given in Table 2, together with the theoretical predictions reported in [1]. The experimental accuracy on τ B + /τ B 0 , which has reached a stunning 1.4% thanks to the B factories, is now better than that of the HQE prediction. And further improvement can be expected from the B factories and the Tevatron soon. The agreement between theory and experiment is excellent, a clear vindication of the HQE approach.
Agreement for τ Bs /τ B 0 and τ Λ b /τ B 0 is less good, though the discrepancy is less than two standard deviations. The Tevatron is currently producing large numbers of B s mesons and Λ b 's, such that the errors on these lifetime ratios are expected to be below 1% by the end of Run IIa [75]. This will provide a stringent test of the HQE and may force theorists to consider penguin contributions, which are absent in τ B + /τ B 0 and which are currently neglected.
Lifetime differences
On the theory side, the preliminary unquenched, two-flavor results of the JLQCD collaboration for one of the two matrix elements relevant for (∆Γ/Γ) B d,s at leading order in 1/m b have been finalized [83]. These results will not modify the predictions for (∆Γ/Γ) B d,s reviewed in [1]. Experimentally, the only change comes from the new measurements of the average B d lifetime, which is used to obtain (∆Γ/Γ) B d,s from ∆Γ B d,s . The current experimental and theoretical situation for (∆Γ/Γ) Bs is summarized in Table 3.
The status of experimental measurements for (∆Γ/Γ) Bs should change dramatically in the near future: a statistical uncertainty of ∼ 2% is expected by the end of Run IIa. It is not clear, however, that theory will be able to follow. Indeed, the main source of uncertainty comes from 1/m b corrections which are enhanced by a rather large cancellation between the leading-order contributions. A calculation of these corrections requires the non-perturbative estimate of many dimension-7, ∆B = 2 matrix elements which is very challenging. Unfortunately, until these corrections are calculated with reasonable precision, it is unlikely that a measurement of (∆Γ/Γ) Bs will allow detection of physics beyond the Standard Model.
Conclusion
We are witnessing very exciting times, with the B factories and the Tevatron reducing errors tremendously on all of the quantities studied in our working group. This presents theorists with a great challenge and will allow for very stringent tests, sending many models to the grave. The improved experimental accuracy also permits the exploration of new methods, in which the reliance on non-perturbative calculations is greatly reduced, such as in the spectral-moments determination of |V cb |. As this example shows, the close interplay between theory and experiment is crucial to take advantage of the improved accuracies. Further gains should be sought by optimizing the comparison between experiment and theory in regions of phase space where the combined errors are minimized, such as in the inclusive and exclusive determinations of |V ub |. It is also important to emphasize the rôle of CLEO-c, which will not only provide accurate branching ratios necessary for B physics measurements, but will also be very useful for testing non-perturbative approaches such as lattice QCD and for calibrating the predictions of these approaches in B physics.
"year": 2003,
"sha1": "41259463bcab9b70be9e869bfb6f09ba735f6de8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "40bb19755b2f6a968e43543e752c1a9a0c052b67",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119227390 | pes2o/s2orc | v3-fos-license | Backaction of a driven nonlinear resonator on a superconducting qubit
We study the backaction of a driven nonlinear resonator on a multi-level superconducting qubit. Using unitary transformations on the multi-level Jaynes-Cummings Hamiltonian and quantum optics master equation, we derive an analytical model that goes beyond linear response theory. Within the limits of validity of the model, we obtain quantitative agreement with experimental and numerical data, both in the bifurcation and in the parametric amplification regimes of the nonlinear resonator. We show in particular that the measurement-induced dephasing rate of the qubit can be rather small at high drive power. This is in contrast to measurement with a linear resonator where this rate increases with the drive power. Finally, we show that, for typical parameters of circuit quantum electrodynamics, correctly describing measurement-induced dephasing requires a model going beyond linear response theory, such as the one presented here.
I. INTRODUCTION
Two-level systems (TLS) and harmonic oscillators are the two simplest systems that can be described exactly with quantum mechanics. Consequently, many physical systems are described at least approximately by either of these two building blocks. As an example, in cavity quantum electrodynamics (CQED) [1], an atom, modeled as a TLS, interacts with a photon field inside a high quality optical or microwave resonator, modeled as a harmonic oscillator. Another example is circuit quantum electrodynamics (cQED) [2], cavity QED's little brother and a promising candidate for the realization of a future quantum computer [3]. In circuit QED, a superconducting artificial atom (or qubit) [4] is coupled to a coplanar waveguide resonator. In the context of quantum information processing, the resonator both acts as a filter, partly protecting the qubit from decoherence and relaxation, and as a measurement device for the qubit state.
However, contrary to cavity QED where the atomic properties are fixed, the engineered devices studied in circuit QED can be tuned and are custom built. Therefore, while devices dating from the early stages of circuit QED [2,5] were well described by two-level systems coupled to harmonic oscillators, more recent qubits, such as the transmon [6][7][8], the low impedance flux qubit [9], and the tunable coupling qubit [10] are better described by multi-level systems (MLS). This is also the case for the phase qubit [11]. Moreover, while the standard architecture for qubit readout has long been linear resonators [2], many recent results [12][13][14][15][16] now use resonators made nonlinear with embedded Josephson junctions. Not only do these nonlinear resonators provide a bifurcation amplifier regime which considerably improves the readout -a key requirement for quantum information processing -but they also exhibit remarkably enriched physics. As examples, they have been used to parametrically amplify small signals [17,18] and generate squeezed light [19].
The performance of nonlinear resonators as parametric amplifiers for small signals [20] as well as their backaction on a qubit have also been studied theoretically [21,22]. However, in Refs. [21,22], the qubit was assumed to be a two-level system, something which is often insufficient to understand many types of superconducting qubits. Moreover, a linear response of the output signal to the input (qubit) signal was assumed. While linear response holds away from the nonlinear resonator's critical point, where bifurcation becomes possible, and away from the switching thresholds in the bifurcation amplifier regime, we show that it breaks down close to these points. We show that linear response is unlikely to be sufficient to describe a qubit readout with a nonlinear resonator when considering typical cQED parameters. Finally, the usual dispersive theory with linear resonators assumes driving of the resonator close to its resonance frequency for measurement [2,23]. As a result, the theory obtains a dependence of the ac-Stark shift on the frequency detuning between the qubit and the resonator, rather than between the qubit and the measurement drive. This is especially important when measuring with a nonlinear resonator since there is always a significant frequency detuning between the drive and the resonator in such cases.
In this paper, we derive a reduced qubit model going beyond these assumptions. We do so using unitary transformations, especially the dispersive [23,24] and the polaron transformations [25][26][27]. We are especially interested in describing the ac-Stark and Lamb shifts of the qubit as well as its measurement-induced dephasing [28]. We note that this theory was developed in parallel to, and has already been tested against, the experimental results of Ref. [16].

Figure 1. (Color online) Representation of one possible implementation of the system considered in this paper. This represents a stripline resonator (blue) made nonlinear with an embedded Josephson junction (dark green), capacitively coupled to a transmon qubit between the central conductor and the ground planes. The model described in this paper however applies to various other nonlinear resonators and qubits (see text).
In section II, we write the general master equation that is used to describe the multi-level qubit coupled to the nonlinear resonator. In section III, we recall the minimal multi-level system model of linear circuit QED in the dispersive regime. In section IV, we describe the basic characteristics of nonlinear resonators and explain why we need to go beyond the assumptions given in the previous paragraphs. In section V, we derive a reduced model for the qubit through a series of unitary transformations. In section VI, we compare the predictions of the analytical model to experimental [16] and numerical data and find quantitative agreement within the limits of the model. We also explain how the ac-Stark and Lamb shifts as well as the measurement-induced dephasing are changed by the nonlinearity of the resonator. We finally test the regime of validity of the linear response theory and show that it is unlikely to be sufficient to describe any high-fidelity qubit readout with a nonlinear resonator.
II. PRESENTATION OF THE SYSTEM
We consider a system made of a multi-level qubit coupled to a nonlinear resonator. We describe the nonlinear resonator with the Hamiltonian (ℏ = 1) [20]

H r = ω r a † a + (K/2) a † a † aa + (K′/3) a † a † a † aaa,

where a (a † ) are the annihilation (creation) operators, ω r is the resonator low-power resonance frequency, and K and K′ are the quadratic and cubic Kerr constants. Such a Kerr nonlinear resonator could be an LC-circuit with an added Josephson junction [12] or a stripline resonator with one [15] (see Fig. 1) or many [17,29] embedded Josephson junctions. In all these cases, the Josephson junctions act as nonlinear dissipationless inductances, rendering the resonator nonlinear. We describe the qubit by the generic many-level Hamiltonian

H q = Σ i=0..M−1 ω i Π i,i ,

where M is the number of qubit levels, ω i is the frequency of the qubit eigenstate |i⟩, and Π i,j ≡ |i⟩⟨j| is a short-handed notation which we will use on multiple occasions throughout this paper. The eigenstates {|i⟩} could be, for example, charges tunneling on and off a superconducting island such as for a Cooper-pair box [30], superpositions of such charges for a transmon qubit [6] or currents flowing clockwise or counterclockwise in a superconducting loop for a flux qubit [31]. We assume a dipolar coupling between the qubit and the resonator and describe it by the interaction Hamiltonian

H I = Σ i=0..M−2 g i (a † Π i,i+1 + aΠ i+1,i ),

where the g i are the coupling constants. The only constraint on the qubit that we impose for our model is that the selection rules only allow transitions between the qubit states |i⟩ and |i ± 1⟩ through the resonator. This restriction is fulfilled for good two-level qubits such as the Cooper-pair box [30], the phase [32] and flux [31] qubits, but is also realized for some more recent multi-level qubits such as the transmon [6,7,33] and the low impedance flux qubit [9].
To understand the experiment of Ref. [16], we also consider driving of the resonator. We allow for multiple qubit-detuned drives d ∈ {d 1 , d 2 , ..., d n } as well as one spectroscopy drive s, quasi-resonant with the qubit frequency, that we model by the Hamiltonians

H d = Σ d ε d (a † e −iω d t + a e iω d t ), H s = ε s (a † e −iω s t + a e iω s t ),

where ε d,s and ω d,s are the drives' amplitudes and frequencies. By quasi-resonant, we mean that ω s is always much closer to the |0⟩ ↔ |1⟩ qubit frequency than to any other qubit transition frequency. In experiments, these drives take the form of microwave signals sent to one port of the resonator and either transmitted to the other port or reflected back, depending on the circuit design. As in the experiment of Ref. [16], we will later on take the amplitude of the spectroscopy drive s to be small, such that its contribution to the intra-resonator field is small. The case of high amplitude spectroscopy will be treated in a following publication [53]. Finally, to model dissipation, we use the Lindblad-type master equation

ρ̇ = −i[H, ρ] + κD[a]ρ + κ NL D[a 2 ]ρ + γD[Σ i (g i /g 0 )Π i,i+1 ]ρ + 2γ ϕ D[Σ i ε i Π i,i ]ρ,

where D[c]ρ ≡ cρc † − {c † c, ρ}/2 and H = H r + H q + H I + H d + H s . In this master equation, κ and κ NL are the resonator's rates of one- and two-photon loss [20], γ is the qubit |1⟩ → |0⟩ decay rate and γ ϕ is the qubit pure dephasing rate, with the dephasing of level |i⟩ scaling as its X-dispersion ε i , where X is some control parameter (it could be flux or charge, for example), with ε 0 = 0 and ε 1 = 1 by definition. This master equation can be obtained by modeling the coupling of the qubit and the resonator to baths of harmonic oscillators and then tracing over the baths [34]. When obtaining this master equation, we made three assumptions. First, we assumed that the noise spectra are white around the relevant frequencies for relaxation (∼ GHz) and dephasing (< 1 MHz). For this approximation to hold, the baths must be white on a frequency range comparable to the resonator or qubit linewidths. While this approximation should hold for relaxation (∼ GHz frequencies) if the resonator and the qubit have high quality factors, it may fail for dephasing (< 1 MHz frequencies) if, for example, the noise has a 1/f spectrum and hence varies by many orders of magnitude over a single resonator or qubit linewidth. In this latter case, one needs to be more careful and take the noise spectrum into account when deriving the master equation [6,35]. Second, we assumed that the noise causing qubit relaxation couples to the qubit through dipolar interaction, yielding the scaling in g i /g 0 for the γ dissipator. Finally, we considered that dephasing is caused by (white) noise at low frequencies in the control parameter X.
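For numerical experimentation, this master equation can be integrated with the QuTiP library; the minimal sketch below truncates the qubit to two levels, drops the cubic Kerr term and the two-photon loss for brevity, treats a single drive in its rotating frame, and uses illustrative parameter values rather than those of Ref. [16].

```python
# Minimal QuTiP sketch of the master equation above (two-level qubit, single
# drive in its rotating frame; cubic Kerr and two-photon loss dropped).
# All parameter values are illustrative.
import numpy as np
import qutip as qt

N = 30                                            # resonator Fock truncation
a = qt.tensor(qt.destroy(N), qt.qeye(2))          # resonator lowering operator
sm = qt.tensor(qt.qeye(N), qt.destroy(2))         # qubit lowering operator Pi_{0,1}

wr, wq, wd = 2*np.pi*7.0, 2*np.pi*6.0, 2*np.pi*6.995   # angular frequencies (GHz)
K, g, eps = -2*np.pi*5e-4, 2*np.pi*0.1, 2*np.pi*0.01
H = ((wr - wd)*a.dag()*a + (K/2)*a.dag()*a.dag()*a*a
     + (wq - wd)*sm.dag()*sm + g*(a.dag()*sm + a*sm.dag())
     + eps*(a + a.dag()))

kappa, gamma, gphi = 2*np.pi*5e-3, 2*np.pi*1e-4, 2*np.pi*1e-4
c_ops = [np.sqrt(kappa)*a,                        # one-photon loss
         np.sqrt(gamma)*sm,                       # qubit relaxation
         np.sqrt(2*gphi)*sm.dag()*sm]             # pure dephasing (eps_0=0, eps_1=1)

rho0 = qt.tensor(qt.basis(N, 0), qt.basis(2, 0))
tlist = np.linspace(0.0, 2000.0, 400)             # ns
out = qt.mesolve(H, rho0, tlist, c_ops, e_ops=[a.dag()*a, sm.dag()*sm])
print("steady photon number ~", out.expect[0][-1])
```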
III. LINEAR CIRCUIT QED IN A NUTSHELL
Before going to the nonlinear case, it is useful to review some aspects of the more standard linear case. In linear circuit QED, one is interested in the system described in Section II, but with K = K′ = κ_NL = 0, and with a qubit which can have two or more states. Many aspects of this system have been studied extensively, both theoretically and experimentally, ranging from qubit measurement [2,36] and single- and two-qubit gates [5,37-41] to dissipation and dephasing [8,23,27,28,42-44]. In this section, we present the minimal theory of the dispersive regime, where the couplings g_i are much smaller than the qubit-resonator detunings Δ_{i,j} ≡ ω_{i,j} − ω_r ≡ ω_i − ω_j − ω_r.
In this regime, there is no direct exchange of energy between the qubit and the resonator, and most of the physics can be understood from an approximate diagonalization of the undriven Hamiltonian H_r + H_q + H_I [2]. To second order in perturbation theory, and assuming that the qubit is a TLS, this diagonalization yields

H ≈ (ω_{1,0} + χ) σ_z/2 + (ω_r + χ σ_z) a†a, (3.1)

where the effective qubit frequency is Lamb-shifted by the quantity χ = g²/Δ_{1,0}. The last term of this Hamiltonian can either be seen as a qubit-state-dependent pull of the resonator frequency, which allows for qubit measurement [2], or as an ac-Stark shift of the qubit frequency that depends on the number of photons in the resonator [42].
In addition to the Lamb and ac-Stark shifts of the qubit frequency, the qubit's coupling to the driven resonator leads to additional sources of relaxation and dephasing. Among these are Purcell relaxation [8], in which the qubit relaxes through the resonator's photon-loss channel; dressed dephasing [23,43,44], in which pure dephasing of the dressed qubit-resonator states leads to effective relaxation and heating of the qubit; and measurement-induced dephasing [27,28,42], which is the unavoidable dephasing caused by the acquisition of information about a quantum system. For a linear resonator in the dispersive measurement regime, it is shown in Refs. [27,28] that the measurement-induced dephasing rate is given by

Γ_φm = κ D²/2, (3.2)

where D = |α_1 − α_0| is the distinguishability of the two pointer states of the resonator and n̄ is the average number of photons inside the resonator. Under resonator driving, the pointer state α_i is the coherent state |α_i⟩ that represents the resonator's field if the qubit is in the state |i⟩. For a linear resonator and a two-level system described by the dispersive Hamiltonian Eq. (3.1), with a single added drive of amplitude ε_p and frequency ω_p, these coherent states are given by

α_{0,1} = −iε_p / [i(ω_r ∓ χ − ω_p) + κ/2], (3.3)

and are represented in phase space in Fig. 2 for a resonant drive. The distance D between these pointer states in phase space depends on the cavity pull χ and, for a dispersive measurement with a linear resonator, increases with the number of photons, or equivalently with the strength of the measurement drive. It is further shown in Ref. [27] that, in the linear case, the measurement-induced dephasing rate reaches the smallest value permitted by quantum mechanics. In other words, it saturates the inequality

Γ_φm ≥ Γ_meas/2, (3.4)

where Γ_meas is the measurement rate [45], corresponding to the rate at which information is gained on the system being measured. One of the questions that we will try to answer in this paper is whether or not this inequality can be saturated when using a nonlinear resonator for homodyne dispersive measurement of the qubit.
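These expressions are easy to evaluate. The sketch below computes the two pointer states from the form of Eq. (3.3) reconstructed above, then the distinguishability D, the dephasing rate of Eq. (3.2), and the measurement rate; the printed ratio equals 1/2 by construction, i.e. the saturation of Eq. (3.4). All parameter values are illustrative.

import numpy as np

# Illustrative parameters in angular-frequency units (2*pi*MHz).
kappa = 2*np.pi*10.0      # resonator linewidth
chi   = 2*np.pi*2.0       # dispersive cavity pull
eps_p = 2*np.pi*5.0       # measurement drive amplitude
delta = 0.0               # omega_r - omega_p, resonant drive

def pointer_state(sign):
    # Coherent amplitude when the qubit pulls the cavity by sign*chi, Eq. (3.3).
    return -1j*eps_p/(1j*(delta + sign*chi) + kappa/2)

alpha0, alpha1 = pointer_state(-1), pointer_state(+1)
D = abs(alpha1 - alpha0)          # pointer-state distinguishability
Gamma_phi_m = kappa*D**2/2        # measurement-induced dephasing, Eq. (3.2)
Gamma_meas = kappa*D**2           # measurement rate
print(D, Gamma_phi_m/Gamma_meas)  # the ratio 1/2 saturates Eq. (3.4)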
IV. FEATURES SPECIFIC TO NONLINEAR CIRCUIT QED
Depending on the amplitude ε_d and frequency ω_d of the drive, the response of a Kerr nonlinear resonator can be either mono- or bi-valued. The stability diagram describing this behavior can be parametrized by the reduced detuning Ω ≡ 2(ω_r − ω_d)/κ and by the drive amplitude ε_d. If the reduced detuning is smaller than, but close to, the critical value Ω_C = √3, the nonlinear resonator can be used as a low-noise parametric amplifier [17]. This has been used recently to amplify microwave signals at the single-photon level [18]. For Ω/Ω_C > 1, the stability diagram, illustrated in Fig. 3(b), shows two bistability thresholds [46]. Below the first one (dashed green line), a low-amplitude (L) response of the resonator is observed [see Fig. 3(a)]. Above the second one (full red line), one rather observes a high-amplitude (H) response. Between the two thresholds, both L and H are stable. Because of the coupling to the qubit, this stability diagram depends on the qubit state. This dependence allows nonlinear resonators to be used as sample-and-hold detectors, as has been demonstrated in Refs. [13,15,47]. A numerical sketch of this mono- to bi-valued behavior is given below.
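The mono- to bi-valued response is captured by the classical steady state of the driven Kerr resonator. The sketch below assumes the standard mean-field relation n[(Δ + Kn)² + (κ/2)²] = ε_d², with n = |α|² and Δ = ω_r − ω_d, neglecting the cubic Kerr term K′; the parameter values are illustrative.

import numpy as np

kappa, K = 2*np.pi*10.0, -2*np.pi*0.625   # linewidth, quadratic Kerr (2*pi*MHz)
Omega = 3.1*np.sqrt(3)                    # reduced detuning, Omega/Omega_C = 3.1
Delta = Omega*kappa/2                     # Delta = omega_r - omega_d

def photon_branches(eps):
    # n[(Delta + K n)^2 + (kappa/2)^2] = eps^2, a cubic in n = |alpha|^2.
    roots = np.roots([K**2, 2*Delta*K, Delta**2 + (kappa/2)**2, -eps**2])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-6 and r.real >= 0)

# One root: monovalued response. Three roots: stable L and H branches
# separated by an unstable middle branch (the bistable region).
for eps in 2*np.pi*np.array([10.0, 50.0, 120.0]):
    print(round(eps/(2*np.pi)), photon_branches(eps))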
Before going forward with the theory, we want to highlight two peculiarities of circuit QED with a nonlinear resonator that are often overlooked. These two aspects, namely the detuning of the readout drive from the resonator frequency and the limits of linear response theory, as well as their impact on the theory, are discussed further in the following subsections.
A. Detuned measurement drive
Both in the usual low-power dispersive measurement of a TLS [2] and in the more recent high-power avalanche readout [48-50], measurement with a linear resonator is done with a drive at or very close to the resonator frequency ω_r. On the contrary, measurement with a nonlinear resonator is always done with a drive significantly detuned from ω_r [13,15,47,51]. As can be seen in Fig. 3, this detuning is required to bias the system either in the region of highest parametric gain or in the bistability region.
Because of the Jaynes-Cummings interaction, the drive on the resonator also acts on the qubit. Since the cavity acts as a filter, the effective drive amplitude as seen by the qubit is expected to scale as 1/(ω_r − ω_d). Photons entering the cavity because of this drive will cause an ac-Stark shift χ a†a of the qubit. The shift per photon χ should depend on the drive frequency. This is however not the case for the usual expression for a TLS, where χ = g²/Δ_{1,0}. Indeed, this expression scales with the inverse of the qubit-resonator detuning. One would rather expect to find χ_i ≡ g_i²/(ω_{i+1,i} − ω_d), since the drive photons are at frequency ω_d. While a relative change of a few percent on Δ_{1,0} yields the same relative change on χ for a two-level system, the effect can be twice as big for a MLS because of the reduced value of χ. To obtain quantitative agreement with the results of Ref. [16], we obtain below an expression for the ac-Stark shift that contains the expected frequencies.
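The size of this frequency-bookkeeping effect is a one-line estimate. The sketch below evaluates χ_i = g_i²/(ω_{i+1,i} − ω) at ω = ω_r and at ω = ω_d for transmon-like numbers; the values are illustrative placeholders rather than the fitted parameters used later.

import numpy as np

# Illustrative transmon-like parameters (MHz); placeholders, not fitted values.
w10, w21 = 5720.0, 5421.6      # |0>-|1> and |1>-|2> transition frequencies
g0, g1   = 42.4, 58.4          # corresponding coupling strengths
w_r, w_d = 6453.5, 6430.0      # resonator and (detuned) pump frequencies

def chi(g, w_transition, w_photon):
    # Per-photon shift coefficient chi_i = g_i^2 / (omega_{i+1,i} - omega).
    return g**2 / (w_transition - w_photon)

for g, wt, label in [(g0, w10, "chi_0"), (g1, w21, "chi_1")]:
    at_wr, at_wd = chi(g, wt, w_r), chi(g, wt, w_d)
    print(label, at_wr, at_wd, f"relative change {abs(at_wd/at_wr - 1):.1%}")
# For a MLS, the effective pull involves differences of chi_i values, so a
# few-percent change in each term can be a larger relative change overall.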
B. Limits to the validity of linear response in circuit QED

As stated before, Kerr oscillators have been used experimentally as parametric amplifiers for small signals. They have also been studied extensively theoretically. As examples, Yurke and Buks have studied their performance and calculated their gain [20], while Laflamme and Clerk have shown that these amplifiers are not quantum-limited in the sense of Eq. (3.4) for a qubit measurement [22]. Moreover, these last authors show that the quantum limit can be reached if one makes use of correlations between the resonator and the system coupled to it.
These two results were however obtained in the limit of linear response theory. In this limit, one finds the driven resonator's stationary state ᾱ without the coupling to the qubit and then expands the solution including the qubit around the stationary solution, α ≈ ᾱ + δα. For a qubit measurement, the signal that is amplified by the resonator takes the form of a pull ±χ of the resonator frequency, which in turn depends on the qubit state as expressed in Eq. (3.1). For a linear resonator in the dispersive regime, the α_i's given by Eq. (3.3) can be rewritten as

α_{0,1} = −iε_p / [i(ω_r − ω_p) + κ/2] × 1/{1 ∓ iχ/[i(ω_r − ω_p) + κ/2]}
        ≈ −iε_p / [i(ω_r − ω_p) + κ/2] × {1 ± iχ/[i(ω_r − ω_p) + κ/2]},

where the linear response expressed by the second line holds if |i(ω_r − ω_p) + κ/2| ≫ |χ|. Therefore, the validity of linear response in this linear dispersive case is not affected by the driving strength, but is rather determined by the ratio 2χ/κ ≪ 1. This analysis however does not hold for a nonlinear resonator. Indeed, in order for linear response theory to stay valid with a nonlinear resonator, α must change linearly with the pulled frequency, or equivalently with the drive-resonator detuning, over a frequency range 2χ. While for a linear resonator it has been shown [28] that the optimal SNR is obtained for 2χ = κ, the improved measurement efficiency with a nonlinear resonator allows for smaller cavity pulls. Taking χ = 0.2κ as a typical value of the cavity pull translates into a range of Ω/Ω_C ∼ 0.5 over which the signal must be linear in frequency for the linear response to stay valid. This range is illustrated in Fig. 3(a) with the horizontal lines. The full green lines represent regimes for which linear response would be a good approximation, while dashed red lines represent regimes for which the response is not linear over the appropriate range. We argue that the linear response approximation will break down as soon as the slope of the response, and hence the gain of the amplifier, becomes significant.
In the following section, we derive a theory that goes beyond linear response theory, using the polaron transformation approach of Ref. [27].
V. REDUCED QUBIT MODEL
In this section, we derive a reduced qubit model that captures the ac-Stark and Lamb shifts of the qubit transition frequencies as well as measurement-induced dephasing. This is done by performing unitary transformations on the master equation (2.6). These transformations have two objectives. First, transforming the system into its eigenbasis, in which the physics is easier to understand. Second, eliminating the resonator to obtain a master equation for the qubit alone.
In order to reach these objectives, many transformations have been used in the literature. The dispersive transformation [24,43] (here generalized for a MLS),

D = exp[Σ_{i=0}^{M−2} λ_i (a Π_{i+1,i} − a† Π_{i,i+1})], with λ_i ≡ g_i/Δ_{i+1,i}, (5.1)

diagonalizes the Jaynes-Cummings Hamiltonian and reveals the Lamb and ac-Stark shifts. This transformation however only knows about photons that are at the resonator frequency ω_r, and fails to correctly model the measurement drive-resonator frequency detuning, as discussed in Sec. IV A. Another useful transformation is the displacement operator [52], D(α) = exp(αa† − α*a), which displaces a coherent state |−α⟩ of a resonator to the ground state |0⟩. In operator representation, it corresponds to the change a → a + α, where α represents the classical average field and a its quantum fluctuations. Doing this transformation before the dispersive transformation, as was done for example in Ref. [38], yields the correct qubit-drive detuning in the ac-Stark shift. The ac-Stark shift then depends on the mean field amplitude α, and the ac-Stark shift per photon depends on the drive-qubit frequency. However, doing this transformation in the context of a nonlinear resonator is akin to doing a linear response theory. Indeed, it is the same as assuming that the intra-resonator field is |α⟩ and then looking at all further perturbations, such as the cavity pull, with respect to this mean-field value. This will be discussed further in Sec. VI C. A third transformation, used in Ref. [27] to calculate the measurement-induced dephasing rate, as well as in Ref. [26] to study a qubit coupled to a mechanical resonator beyond the rotating wave approximation (RWA), is the polaron transformation [25] (here generalized for a MLS)

P = exp(Π_α a† − Π_α† a),

with Π_α defined according to the short-hand of Eq. (2.3). This corresponds to a displacement transformation that is conditional on the qubit state. It allows for a different cavity state |α_i⟩ for each qubit state |i⟩, which makes it possible to go beyond the linear response approximation. It is important to note that the field amplitudes α_i are free parameters in this transformation. In practice, these amplitudes will be chosen so as to cancel specific terms in the transformed Hamiltonian. Moreover, and as will become clear below, these different α_i's will be independent solutions of qubit-state-dependent nonlinear equations, and not expansions around a mean solution ᾱ of a single mean nonlinear equation.
In the following subsections, we perform three transformations in order to approximately diagonalize the Hamiltonian and transform the full master equation Eq. (2.6) into a reduced qubit master equation containing all the relevant physics needed to account for the low-power spectroscopy of a qubit coupled to a nonlinear resonator driven by an external field.
A. Polaron frame
While the polaron transformation can be performed exactly on terms that are diagonal in the qubit subspace, applying it to non-diagonal terms unfortunately yields complicated expressions. For example, applying it to a qubit ladder operator σ− yields, up to a phase factor, σ− D(α_1 − α_0), which, through the displacement operator D, contains all powers of a and a†. For this reason, the polaron transformation was used in Refs. [23,27,43] after doing the dispersive transformation, which eliminates the off-diagonal qubit operators. In this paper, we instead apply it before the dispersive transformation, assume that |α_1 − α_0| ≪ 1, and take as a simplification P† σ− P ≈ σ−. The small-distinguishability approximation |α_1 − α_0| < 1 will be made throughout this calculation and will limit the range of validity of the theory in a way which will be discussed later.
The application of the polaron transformation to the master equation (2.6) is presented in Appendix A. Following this Appendix, we use the notation H_i to represent a part of the Hamiltonian in this first transformed frame that contains i resonator ladder operators a^(†). First, for i = 0, corresponding to the qubit-only Hamiltonian H_0, part of the resulting expression, through the time dependence of Π_α (defined according to Eq. (2.3)), acts as drives on the qubit at the frequencies contained in the time dependence of α. The remaining terms will be partly cancelled below by the choice of α given in Eq. (5.19), and we will neglect the small remaining parts.
We also obtain the qubit-resonator Hamiltonian H_1, limited to terms with one resonator ladder operator. We will see below that its first two terms can be cancelled by a proper choice of Π_α, while its last term will yield the Lamb shift of the qubit frequencies once the dispersive transformation is done. Finally, we find the Hamiltonian H_2 containing the terms with two resonator ladder operators. Introducing n_i = |α_i|², the number of photons associated with the different qubit states, we see from the resulting expression for ω_r(α) that the resonator frequency is changed by the nonlinearity, as expected. Moreover, the last term of Eq. (5.8) will squeeze the resonator field. This will be studied elsewhere [53] and, for the scope of this paper, we will consider squeezing to be negligible.
Having transformed the Hamiltonian, we now apply the polaron transformation to the dissipative parts of the master equation Eq. (2.6). We note that one could alternatively apply the transformations to the system-bath Hamiltonians before deriving the master equation. In this way, it would be possible to relax the white noise approximation [23], something we will not focus on here.
Applying the transformation, we arrive at the master equation of the system in the polaron frame, Eq. (5.10). When obtaining the dissipative terms, we have neglected non-Lindbladian terms of the form a[ρ, Π_α*] under the assumption that, in the polaron frame, the resonator is in, or close to, its ground state (see Appendix A and Ref. [27]). In this equation, the first two lines are the Hamiltonian part as well as the unchanged parts of the dissipative terms. The last line contains measurement-induced dephasing through the single-photon (first term) and two-photon (second term) loss decay channels, as well as some additional resonator decay (last term).
In this polaron frame, we end up with a resonator whose frequency is shifted by the nonlinearity and by the amplitude of the classical fields α_i. This resonator is driven with an adjustable strength G, which could be set to zero by a proper choice of the α_i. It is important to note that we have not yet made that choice because, if we did, we would have α_i = α_j and would therefore lose all dependence of the field amplitudes α_i on the qubit state. The choice of the value of the qubit-state-dependent fields α_i will be made only after moving, in the next subsection, to what we call the classical dispersive frame. Finally, in the polaron frame, the qubit is driven off-resonantly at the frequencies ω_d and quasi-resonantly at the frequency ω_s, with amplitudes α_{i,d} and α_{i,s}. As we will now show, the off-resonant drives will yield the correct ac-Stark shifts of the qubit frequencies.
B. Classical dispersive frame
We now focus on the qubit Hamiltonian Eq. (5.5). Since Π_α has a time dependence involving the drive frequencies, this Hamiltonian is that of a qubit driven by multiple direct drives. We have not yet computed the amplitudes of the fields, and we do so now by taking the ansatz

α_i = Σ_d α_{i,d} e^{−iω_d t} + α_{i,s} e^{−iω_s t}.

This choice assumes that the multiple drives are spread out enough in frequency that one drive does not contribute significantly to the field oscillating at another drive's frequency. We therefore treat each frequency component of the field independently. Transitions |i⟩ ↔ |i+1⟩ are then driven by off-resonant drives with amplitudes g_i α_{i,d} and frequencies ω_d, as well as by a quasi-resonant drive with amplitude g_i α_{i,s} and frequency ω_s. Focusing for now on the drives ω_d ≠ ω_s, the first line of this Hamiltonian can be approximately diagonalized with an analog of the dispersive transformation Eq. (5.1), in which each ξ_i is a classical analog of the operator λ_i a†, built from the drive amplitudes α_{i,d} and the detunings ω_{i+1,i} − ω_d. Because of this analogy, we will refer to this as the classical dispersive transformation D_C. This transformation is performed on the master equation (5.10) in Appendix B. When doing the transformation, we drop time-dependent terms involving two different drive frequencies ω_{d1} ± ω_{d2} under the rotating wave approximation. We also assume that, for the purpose of getting the qubit transition frequencies, α_{i,d} ≈ α_{0,d}. This is the same as taking |α_i − α_{i+1}| to be small. Essentially, we assume that the difference in the pointer states is not important to describe the value of the qubit transition frequencies, but is important to describe their widths. In other words, we say that the mean transition frequency depends on the mean cavity field, which is approximately α_{0,d} at low spectroscopy power (since the qubit is mostly in its ground state), while the widths of the transition frequencies depend on the deviation of the cavity field from α_{0,d}.
Performing the above transformation on the qubit Hamiltonian H_0 to fourth order in perturbation theory, together with the simplifications just outlined, we find the diagonal qubit Hamiltonian of Eq. (5.14), in which the ac-Stark shifted qubit frequencies involve quadratic and quartic ac-Stark shift coefficients S_i^d and K_i^d [Eqs. (5.15)-(5.17)]. We note that g_i = 0 for all i ∉ [0, M−2] in the initial model, such that terms with a negative index, or with an index above M−2, on the right-hand side of these equations vanish. Comparing these expressions with equations (3a) and (3b) of Ref. [49], we highlight a few differences. First, both S_i^d and K_i^d now depend on the drive frequency ω_d instead of the resonator frequency ω_r. As explained in Section IV A, this follows from considering that the driving photons can be at a frequency significantly detuned from ω_r. Actually, Eqs. (5.16) and (5.17) also hold for linear cQED, where the measurement drive is in practice chosen to be quasi-resonant with ω_r. Next, the equation for S_i^d does not involve terms of higher order than g². In Ref. [49], these higher-order terms came from choosing a specific ordering of ladder operators when computing K_i (i.e., a†a†aa = a†aa†a − a†a). Here, the field is classical and there is no such ordering choice to be made. Finally, in Ref. [49], a second-order coupling caused by two-photon transitions was diagonalized, yielding fourth-order corrections. This second-order coupling is however only significant in the straddling regime, where the resonator frequency lies between two qubit transition frequencies [6]. Since we are not considering this regime here, this two-photon transition is neglected.
The next step is to apply the transformation D_C to H_1. This yields a resonator drive term of strength G, given in Eq. (B7), together with a Hamiltonian H_SB, defined in Eq. (B8), corresponding to red and blue sideband transitions. This Hamiltonian is the multi-level equivalent of the one obtained in Eq. (B10) of Ref. [38] for a two-level system driven by two detuned drives, and experimentally studied in Ref. [54]. The drive strength G can be set to zero with a proper choice of the fields α_i, yielding an undriven resonator in this frame. Assuming that |ω_{d1} − ω_{d2}| is sufficiently large to neglect time-dependent cross terms, choosing G = 0 yields, for each qubit-detuned drive, a self-consistent nonlinear equation for α_{i,d} [Eq. (5.19)], in which the resonator frequency is pulled both by the Kerr nonlinearity, through |α_i|², and by the qubit-state-dependent ac-Stark coefficients; an analogous equation, Eq. (5.20), holds for the spectroscopy drive. In writing these expressions, we have again assumed that, even though α_i ≠ α_j, these amplitudes are close enough to replace one by the other in order to uncouple the equations for i ≠ j. We stress that because Eq. (5.19) contains the qubit-state-dependent cavity pull, the solutions α_{i,d} obtained here go beyond linear response theory for the response of the field to a change of the qubit state; a numerical sketch of this self-consistent structure is given below. As explained briefly in Section IV B, and as we will detail further later, a linear response theory would instead have solutions of the form α_i = ᾱ + f(S_i), where ᾱ would be the solution of Eq. (5.19) with S_i^d = K_i^d = 0 and f(S_i) would be some linear function of the cavity pull.
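Since the explicit form of Eq. (5.19) is not reproduced above, the sketch below assumes a plausible minimal form, [i(ω_r + S_i + K|α_i|² − ω_p) + κ/2] α_i = −iε_p, in which the cavity frequency is pulled by an assumed qubit-state-dependent shift S_i and by the Kerr term. The specific coefficients are assumptions; the point is the qubit-state-dependent fixed-point structure that goes beyond an expansion around a single mean field ᾱ.

import numpy as np

# Illustrative parameters in 2*pi*MHz; placeholders, not fitted values.
kappa, K = 2*np.pi*9.6, -2*np.pi*0.625
w_r, w_p = 2*np.pi*6453.5, 2*np.pi*6430.0
eps_p = 2*np.pi*40.0
S = {0: -2*np.pi*1.0, 1: +2*np.pi*1.0}   # assumed qubit-state-dependent pulls

def alpha_state(i, n_iter=3000, mix=0.05):
    # Damped fixed-point iteration on the assumed form of Eq. (5.19):
    # [i(w_r + S_i + K|alpha|^2 - w_p) + kappa/2] alpha = -i eps_p.
    # In the bistable region this converges to a single branch only.
    alpha = 0.0 + 0.0j
    for _ in range(n_iter):
        denom = 1j*(w_r + S[i] + K*abs(alpha)**2 - w_p) + kappa/2
        alpha = (1 - mix)*alpha + mix*(-1j*eps_p/denom)
    return alpha

a0, a1 = alpha_state(0), alpha_state(1)
print(abs(a0)**2, abs(a1)**2)   # qubit-state-dependent photon numbers
print(abs(a1 - a0))             # pointer-state distance D, beyond linear response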
We note that the equations for two different drives d_1 ≠ d_2 are coupled through the total field |α_i|. However, in the interest of reproducing the results of Ref. [16], from this point on we will consider only a single qubit-detuned drive, which we will label d = p (the pump drive), in addition to the spectroscopy drive s. This implies that only the first line of H_1 will remain. For the purpose of calculating α_s, we will also assume that the spectroscopy amplitude ε_s is small enough that |α_{i,p}| ≫ |α_{i,s}| and that α_{i,p} ∼ α_{j,p}, such that we can replace α_i by α_{i,p} ≈ α_p in the equation for α_{i,s}. Finally, since in practice K′ ≪ K ≪ g, performing the classical dispersive transformation on H_2 would yield corrections smaller than those that we have kept so far. We therefore neglect these corrections and leave H_2 unchanged. Finally, applying the transformation D_C to the dissipation yields the master equation in this doubly transformed frame, Eq. (5.21). These two transformations result in an ac-Stark shifted qubit that is driven by a spectroscopy drive of amplitude α_{i,s} and frequency ω_s, and coupled, with a Jaynes-Cummings coupling, to an undriven resonator whose frequency is shifted by the nonlinearity. This resonator sees an increased relaxation rate κ′ > κ due to the two-photon-loss relaxation channel. The qubit sees its intrinsic dephasing at a rate γ_φ, as well as relaxation at rate γ_{↓,i} and heating at rate γ_{↑,i}. These relaxation and heating rates are modified by dressed dephasing [43] (first term of γ_{DD,i}), but also by dressed measurement-induced dephasing (second term). These rates were obtained assuming white noise for all the dissipation channels. If the noise is not white, the rate γ_{DD,i} will depend on the noise spectra of the qubit dephasing and resonator relaxation channels at ±(ω_{i+1,i} − ω_d) [23]. In addition to intrinsic dephasing, the last two lines of Eq. (5.21) contain three other sources of dephasing. The first term will yield measurement-induced dephasing [28], while the second and the third represent, respectively, measurement-induced dephasing through the resonator's two-photon-loss decay channel and through the emission of an excitation by the qubit into its environment. While it is not measured, this excitation in principle carries information about the qubit state and thus causes dephasing.
C. Quantum dispersive frame and reduced master equation
The final effect that we would like our model to capture is the Lamb shift of the qubit frequencies due to vacuum fluctuations of the resonator. To obtain this shift, we perform the dispersive transformation D of Eq. (5.1).
This yields a master equation in which the relaxation rate has an added Purcell relaxation rate and in which the new Lamb-shifted frequencies ω_i are given by Eq. (5.26a).
Since the resonator and qubit frequencies are pulled by the classical field due respectively to the nonlinearity and the ac-Stark shift, the Lamb shift depends on these pulled frequencies, and therefore on the amplitude of the cavity field.
Finally, projecting the qubit onto its {|0⟩, |1⟩} subspace and tracing out the resonator degrees of freedom yields the reduced qubit master equation (5.27). In this expression, we have defined

H = (ω_{10}/2) σ_z + g_0 (α_{0,s} e^{−iω_s t} σ_− + h.c.), (5.28)

where ω_{10} ≡ ω_1(α) − ω_0(α), the effective dephasing rate Γ_φm is given in Eq. (5.29), and D ≡ |α_1 − α_0| is the distance between the pointer states. In the equation for the effective dephasing rate Γ_φm, we see the measurement-induced dephasing due to single-photon cavity losses (second term), to two-photon cavity losses (third term), and to the information carried away by the excitation emitted when the qubit relaxes. While these three channels leak information about the qubit state, only the single-photon cavity loss channel is usually monitored. Moreover, since in practice κ ≫ γX²/g_0², κ_NL, only this single-photon channel will convey any significant amount of information and contribute to qubit dephasing.
VI. BACKACTION ON THE QUBIT
Following the reduced qubit model derived in section V, here we revisit the results presented in section III for the dispersive regime of linear circuit QED. In this section, we compare the theoretical model to experimental data and numerical simulations. The parameters used throughout are given in the caption of Fig. 4. These parameters were adjusted to fit independent spectroscopic and time domain measurements of the device used in Ref. [16]. This device was composed of a transmon qubit [6] coupled to a coplanar waveguide resonator made nonlinear by a Josephson junction embedded in its central conductor.
In subsection VI A, we first quickly present the experiment already described in Ref. [16]. We then look more precisely at the Lamb and ac-Stark shifts of the qubit transition frequency ω_{1,0} in subsection VI B and at its linewidth in subsection VI C.
A. Experiment and qubit spectra
In Ref. [16], we presented spectroscopic measurements of a transmon qubit coupled to a driven nonlinear resonator. The qubit was probed through the resonator with a drive of amplitude ε_s and frequency ω_s ∼ ω_{1,0}. Meanwhile, the resonator was pumped with a drive of amplitude ε_p and frequency ω_p ∼ ω_r. The pump field was applied long before the qubit probe was turned on, enabling the resonator to reach its stationary state. Two detunings between the pump frequency ω_p and the resonator frequency ω_r were studied in detail. This was done in order to explore both the parametric amplification and the bifurcation regimes. Consequently, two biasing points, ω_p/2π = (6430, 6450) MHz, corresponding to Ω/Ω_C = (3.1, 0.7), are presented below. Here, we have redefined Ω with respect to the effective resonator frequency as pulled by the qubit in the ground state, rather than the bare resonator frequency. These two biasing points are illustrated by the two vertical lines in the stability diagram of Fig. 3(b). After probing the qubit, a bifurcation measurement was performed in order to determine the probability P(|1⟩) that the qubit was excited by the probe drive.
The resulting experimental spectra are presented in the top panels of Fig. 4 for Ω/Ω_C = 3.1 (top left) and Ω/Ω_C = 0.7 (top right) as a function of the pump drive amplitude. The pump amplitude (horizontal axis) is converted to a logarithmic scale to match the experimental power in decibels, up to a constant offset that was calibrated in Ref. [16]. In the bifurcation regime (top left, Ω/Ω_C = 3.1), we clearly see the jump in the qubit frequency associated with the jump from the low-amplitude to the high-amplitude dynamical state of the resonator. We also see that the line remains narrow and actually tends to narrow down at higher powers. In the parametric amplification regime (top right), we see a more monotonic shift of the qubit line with the measurement power, with significant broadening around 20 log(ε_p/2π) = 22.
These spectra are then compared to the analytical steady-state solution of the reduced qubit master equation Eq. (5.27) in the bottom panels. The exact analytical solution of this equation yields [55]

P(|1⟩) = [γ_{↑,0}(γ_2² + δ²) + 2γ_2 |g_0 α_{0,s}|²] / [(γ_{↑,0} + γ_{↓,0})(γ_2² + δ²) + 4γ_2 |g_0 α_{0,s}|²], (6.1)

where γ_2 is the total qubit dephasing rate, given in Eq. (6.2), and δ ≡ ω_{1,0} − ω_s. When comparing the experimental to the analytical spectra, we notice small deviations between the background levels as well as the amplitudes of the spectroscopy lines. Aside from the limits of our model, three effects can cause these deviations. First, there is experimental thermal noise, which should not exceed 50 mK, that is not taken into account in the theory and may yield a minor thermal qubit excited-state population. Second, the experimental excited-state population is extracted from the probability of bifurcation, which can yield an error of at most 0.05 in the estimated population. Third, the correspondence between the theoretical amplitude ε_s of the spectroscopy drive and the experimental amplitude could not be calibrated as precisely as the calibration provided by the ac-Stark shift for the pump drive [16]. Setting aside these deviations, the other experimental features, such as the spectroscopy lines' positions and widths, are qualitatively reproduced by our analytical spectrum. In the following sections, we quantitatively compare these to our model.

[Fig. 4 caption: Experimental (top, from Ref. [16]) and analytical (bottom) qubit excited-state |1⟩ population for the two operating points indicated in Fig. 3. The qubit is a transmon with bare parameters (ω_{1,0}, ω_{2,1}, γ, γ_φ)/2π = (5720, 5421.6, 0.22, 0.25) MHz. The resonator's bare parameters are (ω_r, K, K′, κ, κ_NL)/2π = (6453.5, −0.625, −0.00125, 9.6, 0) MHz, and the qubit-resonator couplings are (g_0, g_1)/2π = (42.4, 58.4) MHz. These parameters were chosen to fit those of Ref. [16]. Couplings to higher transitions, as well as higher transition frequencies, can be computed from the transmon Hamiltonian [6]. The experimental attenuation required to link the experimental power in dB to the theoretical parameter ε_p was calibrated in Ref. [16]. The bottom panels show the analytical stationary solution, Eq. (6.1).]
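Equation (6.1) is straightforward to evaluate; the sketch below computes the spectroscopy line shape with γ_2 treated as an input standing in for Eq. (6.2). All numerical values are placeholders.

import numpy as np

def p_excited(delta, gamma_up, gamma_dn, gamma2, drive):
    # Steady-state P(|1>) of Eq. (6.1); drive stands for |g_0 alpha_{0,s}|.
    num = gamma_up*(gamma2**2 + delta**2) + 2*gamma2*drive**2
    den = (gamma_up + gamma_dn)*(gamma2**2 + delta**2) + 4*gamma2*drive**2
    return num/den

# Spectroscopy detuning delta = omega_{1,0} - omega_s, swept across the line.
delta = 2*np.pi*np.linspace(-10.0, 10.0, 401)   # 2*pi*MHz
P1 = p_excited(delta, gamma_up=2*np.pi*0.01, gamma_dn=2*np.pi*0.25,
               gamma2=2*np.pi*1.0, drive=2*np.pi*0.2)
print(P1.max())        # peak height at delta = 0
print(P1[0], P1[-1])   # background far from resonance (~ gamma_up/(gamma_up+gamma_dn))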
B. Lamb and ac-Stark shifted qubit frequency
The experimental spectra presented in Fig. 4 were fitted with Lorentzians, and the peak positions and widths were extracted from those fits, yielding the qubit transition frequency and dephasing rate. We also numerically integrated the multi-level Jaynes-Cummings master equation (2.6) to obtain numerical spectra that were fitted using the same procedure. The qubit frequency extracted from the experimental (black circles) and numerical (orange squares) spectra is plotted in Fig. 5 as a function of the pump power, for the two operating points of Fig. 3 used in Ref. [16]; the parameters are the same as in Fig. 4. Numerical simulations and experimental data almost coincide, suggesting that the initial master equation (2.6) contains all the relevant physics. We then compare these data points to three versions of the dispersive approximation. Full black lines correspond to the complete equation (5.26a), dotted red lines correspond to the second-order approximation for the dispersive shift (i.e., K_i^p = 0), and dashed green lines correspond to setting ω_p = ω_r when calculating S_i^p and K_i^p. Since the parametric amplification regime (right panel) corresponds to a pump drive very slightly detuned from the resonator frequency, as well as to a low number of photons (n̄ ∼ 20), all three curves almost coincide in this regime.
On the other hand, in the bifurcation regime (left panel), both the pump-resonator detuning and the number of photons after bifurcation are larger (n̄ ∼ 50), yielding a significant difference between the three curves above bifurcation. We see that the assumption ω_p = ω_r (dashed green lines), which as discussed in Sec. IV A is often made when calculating ac-Stark shifts, yields a shift that is too small. This is expected, since assuming ω_p = ω_r yields a larger qubit-pump detuning and correspondingly smaller values of S_i^p and K_i^p. This effect can also be confirmed at lower power, although it is not visible in these plots. We also see that the second-order approximation (dotted red lines) yields a dispersive shift that is too large. This is also expected, since successive orders in perturbation theory alternate in sign in the dispersive regime, and since the fourth order is contained in the full model.
With this model, the qubit can be used as a tool to characterize the nonlinear resonator. Indeed, the distance between the resonator's low- and high-amplitude states at the threshold of bifurcation directly depends on the resonator nonlinearity K and on the drive frequency ω_p and amplitude ε_p. While experimentally ω_p is known to very high precision, the resonator nonlinearity K can only be estimated to about ±30% from the design parameters, due to its nonlinear dependence on sample parameters [16]. Moreover, the experimental line attenuation A between the source and the input of the sample, which is required to make the correspondence between the experimental power P_p and the theoretical parameter ε_p, can only be estimated to within about 2 dB [16]. Performing a series of spectroscopic measurements for many pump frequencies ω_p and fitting the extracted qubit frequencies to the model derived here then makes it possible to extract both K and A with improved precision. This was done in Ref. [16] and resulted in an uncertainty of 2.4% for K and 0.2 dB for A, a ten-fold improvement in precision.
C. Qubit linewidth and validity of linear response
We now examine the linewidth of the qubit transition. We know that, in addition to the intrinsic dephasing rate γ_{2,int} = γ_φ + γ/2, the lines are broadened by measurement-induced dephasing [28] and by dressed dephasing [43]. In addition, there is always some power broadening due to the finite spectroscopy power. Here, we are mostly interested in the measurement-induced dephasing and how it is modified by the nonlinear nature of the resonator. The experiments presented in Ref. [16], whose results are reproduced here, were therefore carried out in a regime where power broadening is small. Moreover, since there is no dependence of the experimental background population on the pump power, we assume that dressed dephasing is also negligible, owing to a small amplitude of dephasing noise at GHz frequencies. The only additional dephasing source is therefore the measurement-induced dephasing Γ_φm given in Eq. (5.29), which in practice is dominated by the single-photon loss contribution.

We present in Fig. 6 the half-width at half-maximum of the spectroscopy lines as a function of the pump power for the two operating points Ω/Ω_C = 3.1 (a) and 0.7 (b). Grey circles (orange squares) are again the widths extracted from experimental (numerical) data. Full black lines are the analytical widths γ_2/2π given by Eq. (6.2). Dashed green lines are the same as the full black lines, but using linear response theory for the fields α_{i,p} instead of the solutions of Eq. (5.19). More precisely, we obtained the dashed green lines by expanding α_{i,p} to first order around ᾱ [Eq. (6.3)], where ᾱ is the solution of Eq. (5.19) with S_i^p = K_i^p = 0. Finally, the dotted red lines were obtained by replacing Γ_φm by the result of Ref. [28] for a linear resonator, with χ = S_1^p − S_0^p. The first striking observation is that, contrary to circuit QED with a linear resonator [42], the linewidth does not strictly increase with the drive power, or equivalently with the number of photons in the resonator. In fact, in the bifurcation regime [Fig. 6(a)], the linewidth shows a sharp maximum at the bifurcation power, whereas in the parametric amplification regime, the linewidth shows a smooth maximum at a power that corresponds to the maximum gain of the amplifier [16]. This is illustrated by the lack of even qualitative agreement between both the experimental and numerical data points and the result expected for a linear resonator (dotted red line).
Narrowing of the linewidth at high power is predicted both by the nonlinear (full dark lines) and the linear (dashed green lines) response theory. However, while both give qualitative agreement with the experimental and numerical data points, only the nonlinear response theory gives a quantitative one. In the bifurcation regime (Ω/Ω_C = 3.1), the nonlinear response theory reproduces the experimental behavior with good accuracy over the whole range of powers, whereas linear response predicts bifurcation at too low a power, and linewidths twice as large at bifurcation. In the parametric amplification regime (Ω/Ω_C = 0.7), only the nonlinear response solution gives semi-quantitative agreement near the maximum linewidth, while linear response theory predicts a much lower linewidth. However, even the nonlinear response solution mispredicts the linewidth when it is above ∼5 MHz. We explain this by the breakdown of the |α_1 − α_0| < 1 approximation, which corresponds to a measurement-induced dephasing rate of about Γ_φm ∼ κ/4π ∼ 5 MHz.
To understand the non-monotonic behavior of the linewidth with drive power, we refer to Figs. 6(c) and (d), where we plot the value of the fields α_{0(1),p} as black (red) lines in the complex plane for the two operating points, for a range of powers ε_p/2π ∈ [0, 150] MHz, and for the nonlinear (full lines) and linear (dashed lines) response solutions. We see from these plots that, even though the number of photons increases as the distance to the origin grows, the distance between the solutions α_{1,p} and α_{0,p} does not necessarily do so. In fact, the distance D can be smaller at high power than at low power.
For reference purposes, we also plot two sets of four points in panels (c) and (d). Each set corresponds to a given pump amplitude ε_p, for the nonlinear (full symbols) and linear (empty symbols) theory, and for α_0 (black circles) and α_1 (red squares). Comparing the points within a given set of four, we can see that a larger distance between a circle and its corresponding square, and hence a larger gain of the amplifier, corresponds to a larger disagreement between the linear and nonlinear solutions (the distance between a full symbol and its corresponding empty symbol).
We can compute a range of validity of the linear response theory by computing the fields α_{i,p} to second order (i.e., quadratic response theory). If we define α^(1)_{i,p} as the second term of Eq. (6.3) and α^(2)_{i,p} as the next-order correction, linear response theory will be valid if the ratio r = α^(2)_{i,p}/α^(1)_{i,p} is small. Since, for a qubit measurement, the signal that is amplified is a frequency shift S = ±(S_1 − S_0), we can define a maximal value of S that allows r to be smaller than a threshold r_t in the region of highest gain. This maximal value S_max, computed using a conservative value of 10% for the ratio of the quadratic correction over the linear correction, is plotted in Fig. 7 as a function of the reduced detuning Ω/Ω_C; a numerical sketch of this ratio is given below. We see that the maximal coupling, for the parameters given in the caption of Fig. 4, typical for circuit QED, never exceeds about 0.5 MHz. Moreover, the maximal coupling in fact vanishes when approaching the critical detuning Ω_C. This maximal coupling is to be compared with the resonator linewidth κ in order to determine whether it is viable for a qubit measurement. With a realistic criterion of χ ≥ 0.2κ for a good measurement, one therefore needs either κ/2π ∼ 1 MHz or a smaller nonlinearity K in order for linear response theory to be valid in this system. The former however implies a longer measurement time, while the latter implies a smaller gain, both impairing the efficiency of the measurement. It therefore seems unlikely that linear response theory will be sufficient to describe any superconducting qubit readout using a nonlinear resonator until qubit lifetimes become long enough for longer measurement times to be viable.
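The ratio r can also be estimated numerically, without the analytic form of Eq. (6.3), by finite-differencing the steady-state branch with respect to the pull S. The sketch below reuses the assumed classical steady-state relation of the earlier sketches, restricted to the monostable low-amplitude branch; the pull value and all other numbers are illustrative.

import numpy as np

kappa, K = 2*np.pi*10.0, -2*np.pi*0.625   # illustrative, 2*pi*MHz
Delta = 0.7*np.sqrt(3)*kappa/2            # Omega/Omega_C = 0.7 operating point
eps   = 2*np.pi*30.0

def alpha_branch(s):
    # Low-amplitude branch of n[(Delta+s+Kn)^2 + (kappa/2)^2] = eps^2,
    # then alpha from the assumed steady-state relation used earlier.
    d = Delta + s
    roots = np.roots([K**2, 2*d*K, d**2 + (kappa/2)**2, -eps**2])
    n = min(r.real for r in roots if abs(r.imag) < 1e-6 and r.real >= 0)
    return -1j*eps/(1j*(d + K*n) + kappa/2)

S_pull = 2*np.pi*0.8      # assumed dispersive pull to be amplified
h = 2*np.pi*0.01          # finite-difference step
d1 = (alpha_branch(h) - alpha_branch(-h))/(2*h)
d2 = (alpha_branch(h) - 2*alpha_branch(0.0) + alpha_branch(-h))/h**2
r = abs(0.5*d2*S_pull**2)/abs(d1*S_pull)
print(r)                  # linear response is trusted only if r << 1 (e.g. < 0.1)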
D. Quantum limit to the added noise
Using the results presented in Fig. 6, we can try to answer the question of whether or not a dispersive homodyne measurement using a nonlinear resonator can reach the quantum limit Γ_φm = Γ_meas/2, as is the case for a linear resonator [27]. Indeed, assuming small squeezing, if one were to make a homodyne measurement using the pump drive, the measurement rate would be given by Γ_meas = κ|α_1 − α_0|² [27]. Since this measurement rate is exactly twice the dominant part of the measurement-induced dephasing caused by these same pump photons, Γ_φm given in Eq. (5.30), we can say that the quantum limit is reached if the theoretical prediction fits the experimental linewidth. If the experimental linewidth is larger than the theoretical prediction, it means that the limit is missed. Finally, if the experimental linewidth is smaller than that predicted by the model, it means that one of the approximations is probably breaking down.
Looking at Fig. 6(b), we then reach a different conclusion depending on whether we consider linear or nonlinear response. Indeed, around 20 log(ε_p/2π) ∈ [20, 30], the experimental linewidth is much higher than the prediction from linear response, and we would therefore conclude that the quantum limit is missed by the measurement. This is qualitatively the same conclusion as the one obtained by Laflamme and Clerk [22], also within a linear response theory. However, we know from Fig. 7 that, for Ω/Ω_C = 0.7 as in Fig. 6(b), the maximum dispersive coupling supported by a linear response treatment is S_max/2π ∼ 200 kHz, about four times smaller than the one used here. If we now consider the nonlinear response model prediction (black line), we see that it matches the experimental observations over a much wider range, and we recover the quantum limit in this range. There is also a regime where the theoretical prediction lies above the experimental observation. This regime corresponds to a linewidth ∼ κD²/2 ≳ 5 MHz since κ/2π ∼ 10 MHz, and therefore to D ≳ 1, breaking the small-distinguishability approximation that we have made. Therefore, while our result shows that the quantum limit can be reached with a nonlinear resonator, the question remains open in the case of large distinguishability or large squeezing, where our model breaks down.
VII. CONCLUSION
In summary, we have derived an analytical model to describe the backaction of a driven nonlinear resonator on a multi-level qubit. This is done using unitary transformations, in particular the polaron [25-27] and dispersive [23,24] transformations. We obtain a reduced model that contains the physics of the linear and quadratic ac-Stark shifts as well as the Lamb shift of the qubit frequencies. The model also contains dressed dephasing [23,43,44], Purcell relaxation [8] and measurement-induced dephasing [27,28,42]. Contrary to other theoretical models, both qualitative and quantitative agreement is found for the ac-Stark and Lamb-shifted qubit transition frequencies as well as for the qubit linewidth.
Moreover, the model that we have derived here goes beyond some assumptions that are frequently made and that are valid in the case of a driven linear resonator, but not in the nonlinear case. These assumptions are the resonant driving of the resonator, the linear response of the resonator field to the qubit signal, and the two-level character of the qubit. Considering detuned driving of the resonator yields linear and quadratic ac-Stark shifts that depend on the qubit-drive frequency detuning rather than on the qubit-resonator frequency detuning, and that are therefore slightly different from the usual dispersive shifts [2]. Going beyond linear response theory yields measurement-induced dephasing rates that are qualitatively different from those found with linear response and that are found to match the experimental and numerical data in most regimes considered. In particular, we show that the measurement-induced dephasing rate does not increase with the measurement power or the number of photons, but rather with the distance between the two pointer states α_1 and α_0 of the resonator field. The precise quantitative agreement between the model and the experiment has also allowed us, in Ref. [16], to characterize the nonlinearity of the resonator and the attenuation of the transmission line with an accuracy ten times better than what was otherwise achievable.
We have finally also shown that the results given by linear response theory are unlikely to apply to any high-fidelity qubit measurement using a nonlinear resonator. One consequence of this is to reopen the question of whether or not measurement with a nonlinear resonator is quantum limited in the amount of dephasing it causes on a qubit. Indeed, while Laflamme and Clerk [22] have shown that the quantum limit is missed by a factor G, the gain of the amplifier, this result was obtained within a linear response theory and is therefore not applicable to the systems considered here. This question thus remains open and could be answered using a quantum trajectory approach, as was done before for a linear resonator [27].
"year": 2011,
"sha1": "2e4a50a8b7f60c1647ad9da98a22a734856c5c8e",
"oa_license": null,
"oa_url": "https://espace.library.uq.edu.au/view/UQ:268413/UQ268413.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d29e01fd1b35bf31f19a933ad98d237208209094",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Mutations in the splicing factor SF3B1 are found in several cancer types and have been associated with various splicing defects. Using transcriptome sequencing data from chronic lymphocytic leukemia, breast cancer and uveal melanoma tumor samples, we show that hundreds of cryptic 3' splice sites (3'SSs) are used in cancers with SF3B1 mutations. We define the sequence context necessary for the observed cryptic 3'SSs and propose that cryptic 3'SS selection is a result of SF3B1 mutations causing a shift in the sterically protected region downstream of the branch point. While most cryptic 3'SSs are present at low frequency (<10%) relative to nearby canonical 3'SSs, we identified ten genes that preferred out-of-frame cryptic 3'SSs. We show that cancers with mutations in the SF3B1 HEAT 5-9 repeats use cryptic 3'SSs downstream of the branch point, and we provide both a mechanistic model consistent with published experimental data and affected targets that will guide further research into the oncogenic effects of SF3B1 mutation.
Author Summary
A key goal of cancer genomics studies is to identify genes that are recurrently mutated at a rate above background and likely contribute to cancer development. Many such recurrently mutated genes have been identified over the last few years, but we often do not know the underlying mechanisms by which they contribute to cancer growth. Unexpectedly, several genes in the spliceosome, the collection of RNAs and proteins that remove introns from transcribed RNAs, are recurrently mutated in different cancers. Here, we have examined mutations in the splicing factor SF3B1, a key component of the spliceosome, and identified a global splicing defect present in different cancers with SF3B1 mutations by comparing transcriptomes from tumors with and without these mutations.
Introduction
One of the biggest surprises to emerge from the growing catalog of somatic mutations in various cancer types is the recurrent mutation of genes encoding the RNA spliceosome [1]. Recurrent mutations in the highly conserved HEAT 5-9 repeats of splicing factor 3B subunit 1 (SF3B1) have been reported in myelodysplastic syndrome, chronic lymphocytic leukemia (CLL), breast cancer (BRCA), uveal melanoma (UM), and pancreatic cancer [2-7]. SF3B1 mutation is associated with poor prognosis in CLL but improved prognosis in myelodysplasia and UM [2,7-9]. Prior studies have shown that SF3B1-mutated CLL samples have differential exon inclusion and use some cryptic 3' splice sites (3'SSs) relative to SF3B1 wild-type CLL samples [5,6,8,10,11]. However, it is unknown whether SF3B1 mutation is associated with the same 3'SS selection defects in different cancers. The mechanism underlying cryptic 3'SS selection, and its functional consequences, also remain unresolved.
SF3B1 is a core part of the U2-small nuclear ribonucleoprotein (U2-snRNP) complex and stabilizes the binding of the U2-snRNP to the branch point (BP), a degenerate sequence motif usually located 21-34 bp upstream of the 3'SS [12,13]. SF3B1 also interacts with other spliceosomal proteins such as U2AF2, which binds the polypyrimidine tract (PPT) downstream of the BP [2,14,15]. The binding of the U2-snRNP and other spliceosome proteins around the BP prevents 3'SS selection in a ~12-18 bp region directly downstream of the BP due to steric hindrance [16,17]. Inherited cis-acting splicing mutations beyond this ~12-18 bp region downstream of the BP that result in the use of cryptic 3'SSs have been shown to occur in Mendelian disease genes [18]. Additionally, a competitive region exists ~12 bp downstream from the first 3'SS after the protected region, where AG dinucleotides can compete to be used as 3'SSs based on sequence characteristics such as the PPT length, the distance from the BP, the nucleotide preceding the AG dinucleotide, and other features [17].
The role of SF3B1 and the U2-snRNP in recognizing and binding the BP and the localization of mutations to HEAT 5-9 repeats suggest that SF3B1 mutations are dominant drivers that may alter 3'SS selection [6]. To test this, we examined splice site usage in transcriptome data from SF3B1 mutant and SF3B1 wild-type CLL, UM and BRCA cases. We identified 619 cryptic 3'SSs used more frequently in SF3B1 mutants and clustered 10-30 bp upstream of canonical 3'SSs. The majority of these cryptic 3'SSs were observed in all three tumor types despite the divergent clinical implications of SF3B1 mutation. Our analysis of tumors with SF3B1 mutations shows that cryptic 3'SS selection occurs only in samples with missense mutations at 10 amino acid hotspots in the fifth to ninth HEAT repeats. We analyzed the organization of splicing motifs around the cryptic 3'SSs and found that only introns with an AG dinucleotide at the boundary of the sterically protected region downstream of the BP but >10 bp upstream of the canonical 3'SS are susceptible to cryptic 3'SS selection in SF3B1 mutants. We assessed the functional impact of SF3B1 mutation and found that the cryptic 3'SSs are typically used at low frequency in the SF3B1 mutants (<10% relative to the canonical splice site) and are sometimes present in the SF3B1 wild-types but at an even lower frequency (<0.5% relative to the canonical splice site). However, we identified 10 candidate genes, some previously implicated in tumorigenesis, for which there is a high amount of out-of-frame cryptic splice site usage that may affect the function of these genes.
Results
Cryptic 3' splice sites 10-30 bp upstream of canonical 3' splice sites are used in SF3B1 mutants

We used RNA-sequencing data from SF3B1-mutated and SF3B1 wild-type chronic lymphocytic leukemia (CLL; seven mutant, nine wild-type), breast cancer (BRCA; 14 mutant, 18 wild-type), and uveal melanoma (UM; four mutant, four wild-type) samples (S1 Fig., S1 File) to test 219,476 splice junctions present in the Gencode v14 gene annotation [19], along with 87,941 novel splice junctions (not annotated in Gencode), for differential usage by comparing junction-spanning reads using a generalized linear model as implemented in DEXSeq [20]. A splice junction is considered differentially used between mutant and wild-type samples if the expression level of that junction differs significantly after accounting for overall expression differences of the corresponding gene locus. All tested junctions were covered by at least 20 reads summed over all cancer samples in a given analysis, shared a 5' splice site and/or 3'SS with a Gencode splice junction, and had a known splice site motif. We identified 1,749 junctions that were significantly differentially used between the SF3B1 mutant and SF3B1 wild-type samples across the three tumor types, including 1,330 novel junctions, of which 1,117 are novel 3'SSs (BH-adjusted p < 0.1, S2 File). These 1,749 significant junctions were highly enriched for novel splice junctions compared to annotated junctions (Fisher exact test, p < 10^-200), and the novel junctions were enriched for novel 3'SSs (Fisher exact test, p < 10^-200), showing that SF3B1 mutations result in the usage of a large number of novel 3'SSs. These 1,749 significant junctions include 61 of the 79 splice sites recently reported as specific to CLL cases with SF3B1 mutations [11], supporting the specificity of our approach while demonstrating an increased sensitivity that has allowed us to identify many more cryptic 3'SSs than previously reported. We plotted the distance between each significant novel 3'SS and its associated canonical 3'SS (defined as the nearest Gencode 3'SS that shared the same 5' splice site; see Methods); a simplified sketch of this pairing is shown below. Of the 1,117 significant novel 3'SSs, 619 were proximal cryptic 3'SSs clustered 10-30 bp upstream of their associated canonical 3'SSs, while the remaining 498 cryptic 3'SSs were widely distributed (herein referred to as distal cryptic 3'SSs) (Fig. 1A, S3 File). All of the 619 proximal cryptic 3'SSs were used more often in the SF3B1 mutant samples compared to the wild-type samples, and 58% were out-of-frame relative to the nearby canonical 3'SSs, suggesting that these are not canonical 3'SSs missing from Gencode. 417 of the 498 distal cryptic 3'SSs were also used more highly in the SF3B1 mutants (S4 File). The distribution of the 1,117 significant novel 3'SSs is different from that of novel 3'SSs whose usage did not differ significantly between the SF3B1 mutants and wild-types (Fig. 1B,C), further demonstrating that the usage of proximal cryptic 3'SSs is a property of SF3B1 mutants. Examining each tumor type individually, we observed the same enrichment of cryptic 3'SSs 10-30 bp upstream of canonical splice sites (S2 Fig.). Given these observations, SF3B1's role in binding the BP, and the organization of the BP and splicing motifs in the last 30 bp of the intron [12], we focused our initial analyses on the 619 proximal cryptic 3'SSs.
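The pairing of each novel 3'SS with its associated canonical 3'SS reduces to a lookup keyed on the shared 5' splice site. The sketch below illustrates that bookkeeping, and the in-frame/out-of-frame call, with hypothetical junction tuples; it is a simplified illustration of the pairing rule described above, not the actual pipeline, and it ignores strand-specific coordinate conventions.

from collections import defaultdict

def classify_novel_3ss(novel_junctions, gencode_junctions):
    # Junctions are (chrom, five_ss, three_ss, strand) tuples. For each novel
    # junction, find the nearest Gencode 3'SS sharing the same 5'SS, then
    # report the distance and whether the shift breaks the reading frame.
    canonical = defaultdict(list)
    for chrom, five_ss, three_ss, strand in gencode_junctions:
        canonical[(chrom, five_ss, strand)].append(three_ss)
    results = []
    for chrom, five_ss, three_ss, strand in novel_junctions:
        partners = canonical.get((chrom, five_ss, strand))
        if not partners:
            continue
        assoc = min(partners, key=lambda c: abs(c - three_ss))
        dist = assoc - three_ss          # >0: novel site upstream (+ strand)
        out_of_frame = dist % 3 != 0
        results.append((chrom, five_ss, three_ss, dist, out_of_frame))
    return results

demo = classify_novel_3ss(
    [("chr1", 100, 480, "+")],
    [("chr1", 100, 500, "+"), ("chr1", 100, 700, "+")])
print(demo)   # distance 20 bp (a "proximal" cryptic 3'SS), out-of-frame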
Cryptic 3'SS selection is limited to tumors with mutations in HEAT repeat hotspots
We clustered all samples based on the read coverage of the 619 proximal cryptic 3'SSs and found that four SF3B1-mutated BRCA samples did not cluster with the other mutants (Fig. 1D). The SF3B1 mutation for one of these BRCA samples was a nonsense mutation not located in the HEAT 5-9 repeats, while another sample had a subclonal (8.4%) HEAT 5-9 mutation with attenuated cryptic 3'SS selection (S3 Fig.). The other two samples had mutations in the HEAT 5-9 repeats but outside of the apparent ~10 amino acid mutational hotspots (Fig. 1E).

[Fig. 1 caption: Upper red and blue heatmap shows, for each sample, the log2 library-normalized count z-score for 619 cryptic 3'SSs used significantly more often in the SF3B1 mutants and located 10-30 bp upstream of canonical 3'SSs (DEXSeq, BH-adjusted p < 0.1). Grey bars at left indicate the frequency of the SF3B1 mutant allele in the RNA-seq data. Colorbars indicate SF3B1 mutation status, cancer type, and whether the SF3B1 mutation is located in the HEAT 5-9 repeats. The black and white colorbar indicates whether novel 3'SSs are out-of-frame (black) relative to canonical 3'SSs. Bottom green heatmap shows relative expression levels for the genes containing each cryptic 3'SS; we calculated the average expression of each gene in each cancer type and normalized by the maximum expression for each gene, so that the maximum value in each column is one (see Methods).]
Cryptic 3'SSs are shared across different cancer types
The majority of the 619 proximal cryptic 3'SSs were used in SF3B1-mutated samples in all three cancer types, suggesting that the mechanism of cryptic 3'SS selection in SF3B1-mutated tumors is shared across different cancers (Fig. 1D). Some cryptic 3'SSs were not used in one or two of the cancer types due to lower expression of the corresponding genes in those cancers. Differences in cryptic 3'SS usage due to varying gene expression may contribute to the divergent prognostic implications of SF3B1 mutation in various cancers [2,7].
To characterize the roles of the genes affected by cryptic 3'SS usage, we performed a gene set enrichment analysis for the 912 genes that contained the 619 proximal and 417 distal cryptic 3'SSs used significantly more often in the SF3B1 mutant samples (S5 File). The gene set with the second smallest p-value consists of genes up-regulated in chronic myelogenous leukemia, and the seventh gene set contains genes up-regulated in aggressive uveal melanoma samples (GSEA [21], q < 10^-35). These results may reflect the fact that we are more likely to identify cryptic 3'SSs in genes that are highly expressed, which may bias such a gene set enrichment analysis. Nonetheless, several gene sets with potential importance for cancer development are enriched, such as genes positively correlated with BRCA1, ATM, and CHEK2 expression across normal tissues (GSEA, q < 10^-28).
Cryptic 3'SSs are located ~13-17 bp downstream of the branch point
We characterized the sequence features of the 619 proximal cryptic 3'SSs and their associated canonical 3'SSs to gain further insights into the mechanism of cryptic 3'SS selection (Fig. 2A). We chose 23,066 control 3'SSs (see Methods) and plotted the nucleotide frequency [22] for the last 50 bp of the introns for all control, associated canonical, and cryptic 3'SSs, as well as the enrichment of adenines relative to the control introns. The control introns have a typical nucleotide composition with a 4-24 bp PPT preceding the 3'SS (Fig. 2B) [13]. The associated canonical 3'SS introns are enriched for adenines ~15-20 bp upstream of the 3'SS since the proximal cryptic 3'SSs are located in this region (Fig. 2C). However, the introns for proximal (Fig. 2D) and distal (Fig. 2E) cryptic 3'SSs have a strong enrichment of adenines concentrated 15 bp upstream of the splice sites. These results suggest that the increased usage of the 619 proximal and 417 distal cryptic 3'SSs in the SF3B1 mutants may result from the same mechanism. The human BP motif is highly degenerate except for a largely invariant adenine [13], leading us to suspect that the adenine signal upstream of the cryptic 3'SSs is caused by the associated canonical 3'SSs' BP adenines. We used SVM_BP [23] to predict BPs for the associated canonical 3'SSs and calculated the distance from the highest scoring predicted BPs to the cryptic splice sites. We found that AG dinucleotides that serve as cryptic 3'SSs are enriched ~13-17 bp downstream from the predicted BP (Fig. 3A) relative to random AG dinucleotides present in control 3'SS introns (Fig. 3B, p < 10^-7, Mann Whitney U). For cryptic 3'SSs not located 13-17 bp downstream from the highest scoring BP in Fig. 3A, we calculated the distance from the second highest scoring BP to the cryptic 3'SSs and found that, overall, the majority of the cryptic 3'SSs were located 13-17 bp from either the highest or second highest scoring BP (Fig. 3C).
Proposed mechanism of cryptic 3'SS selection

3'SSs are typically not located within ~12-18 bp downstream of the BP because the proteins bound to the BP sterically hinder AG dinucleotides in this region and prevent them from being used as 3'SSs [16]. Our results suggest that AG dinucleotides serving as cryptic 3'SSs in SF3B1 mutants are located at the end of this sterically protected region downstream of the BP (Fig. 3D). Additionally, during the splicing reaction, the spliceosome searches ~12 bp downstream from the first 3'SS after the BP for any other 3'SSs and chooses the strongest 3'SS based on sequence features [16]. The lack of cryptic 3'SSs in the last 10 bp of the intron (Fig. 1A) indicates that cryptic 3'SSs used in SF3B1 mutants are located far enough upstream of the associated canonical 3'SSs to avoid competition for splicing. We observed that the distance between associated canonical 3'SSs and their predicted BPs is significantly greater than the distance between control 3'SSs and their BPs, such that the cryptic 3'SSs at the edge of the protected region do not compete with the canonical 3'SS for splicing (p < 10^-23, Mann Whitney U, Fig. 3E,F). We also predicted BPs for the 619 proximal and 417 distal cryptic 3'SSs (as opposed to above, where we predicted BPs for the canonical 3'SSs associated with the 619 proximal 3'SSs) and found that the majority of these cryptic 3'SSs were 13-17 bp downstream of their predicted BPs (S5 Fig.), providing further evidence that most cryptic 3'SSs (both proximal and distal) associated with SF3B1 mutations are located at the edge of the sterically protected region.
Our results suggest that the mechanism of cryptic 3'SS selection in SF3B1 mutants is not altered BP recognition, because a more varied distribution of distances from the cryptic 3'SS to the canonical 3'SS BP would be expected if BP recognition were altered. Studying the role of cryptic 3'SSs in inherited Mendelian disease genes, Královicová et al. 2005 used splicing reporters with cryptic 3'SSs located in the PPT and found that moving the cryptic 3'SS into the ~12-18 bp sterically protected region reduced or eliminated cryptic 3'SS selection. On the other hand, moving an AG dinucleotide out of the sterically protected region allowed for its selection as a cryptic 3'SS [18]. These published experimental results and the rigid distance between the BP and the cryptic 3'SSs observed in our study are consistent with a model of altered 3'SS selection in SF3B1 mutants due to a change in the size of the sterically hindered region downstream of the BP.
To test whether the sequence requirements defined here are sufficient for cryptic 3'SS usage, we identified 11,302 introns whose canonical 3'SSs passed our coverage cutoff of 20 reads summed over all samples and had potential cryptic 3'SSs (intronic AG dinucleotides that were 10-30 bp upstream of an annotated 3'SS and 13-17 bp downstream of the highest-scoring predicted BP). For 900 of these introns, the potential cryptic 3'SSs also passed the coverage cutoff, of which 310 were used significantly more often in the SF3B1 mutants. This analysis demonstrates that not every potential cryptic 3'SS is differentially used in the mutants, so the sequence requirements described here appear to be necessary for cryptic 3'SS usage but not sufficient.
Cryptic 3'SSs are used infrequently relative to canonical 3'SSs
Although the cryptic splice sites described here are used significantly more often in the SF3B1 mutants, the biological effects are likely dependent on the proportion of transcripts that use the cryptic 3'SSs relative to the canonical 3'SSs. We therefore calculated the percent spliced in (PSI) for the proximal cryptic 3'SSs relative to their associated canonical 3'SSs in the CLL samples, since they have a higher sequencing depth than the other tumor samples (S1 Fig.), which allows for more accurate quantification of splicing, and because the distribution of well-characterized low- and high-risk CLL prognostic factors was similar between the SF3B1-mutated and wild-type samples (Fig. 4A). To calculate PSI for the 325 proximal cryptic 3'SSs used significantly more often in the SF3B1 mutants from the CLL-only analysis (S6-S7 Files), we divided the number of reads that span the cryptic 3'SS by the number of reads that span both the cryptic 3'SS and its associated canonical 3'SS. We observed that some cryptic 3'SSs are used exclusively in SF3B1 mutants while others are also used in SF3B1 wild-type samples but at a lower frequency relative to the mutants (Fig. 4A). 67% of the cryptic 3'SSs were included in <10% of transcripts compared to their associated canonical 3'SS. These results suggest that the cryptic splice sites are either included rarely even in the SF3B1 mutants or that transcripts with cryptic splice sites are subject to a higher rate of nonsense-mediated decay (NMD). To investigate the potential role of NMD, we identified differentially expressed genes between the SF3B1 mutant and wild-type samples in a joint analysis of all three cancers and performed a gene set enrichment analysis. We found that genes in the "Reactome NMD enhanced by the exon junction complex" set were enriched (GSEA [21], q < 10^-28) among the 272 differentially expressed genes (DESeq2, BH-adjusted p < 0.1, S8-S9 Files), suggesting that NMD may be different between the SF3B1 mutants and wild-types. 33 of the 582 genes that contained the 619 proximal cryptic 3'SSs were differentially expressed, with 29 of these 33 genes expressed at lower levels in the SF3B1 mutants. Genes containing a proximal cryptic 3'SS were more likely to be differentially expressed (Fisher exact, p < 10^-8) and more likely to have lower expression in SF3B1 mutants (Fisher exact, p = 0.0009). These results suggest that cryptic 3'SS selection may affect gene expression for a subset of genes. However, the observation that in-frame cryptic 3'SSs likely not subject to NMD and out-of-frame cryptic 3'SSs potentially subject to NMD are included at similar rates relative to their associated canonical 3'SSs (Fig. 4A) suggests that most genes' expression is not affected by cryptic 3'SS selection and that most cryptic 3'SSs are observed at a low frequency because they are spliced in infrequently compared to their associated canonical 3'SSs.
To identify cryptic 3'SSs with relatively high PSI values in the SF3B1 mutant versus wild-type samples, we searched for cryptic 3'SSs that were 1) used more than 50% of the time in the CLL SF3B1 mutants; 2) used less than 20% of the time in wild-type samples; and 3) had an average coverage of at least 30 junction-spanning reads in the mutant samples. Despite the generally low PSI values for the 325 cryptic 3'SSs from the CLL-only analysis, we identified four genes previously implicated in cancer (TTI1 [24][25][26], MAP3K7 [27][28][29], FXYD5 [30], PFDN5 [31]) and six others (YIF1A, ORAI2, ZNF91, ZNF548, RP11-1280I22.1, RP11-532F12.5) with out-of-frame cryptic 3'SSs that were consistently preferred to the associated canonical 3'SS in the CLL SF3B1 mutant samples (Fig. 4B). Ferreira et al. identified the junctions in ORAI2, ZNF91, and TTI1 in CLL SF3B1 mutants as well [11]. Nine of the ten junctions were significant in our BRCA-only analysis and showed large differences in relative inclusion (S6 Fig., S10-S11 Files). These genes are not differentially expressed between the CLL SF3B1 mutant and wild-type samples (S12 File), but the frequent inclusion of out-of-frame cryptic 3'SSs may affect their biological function.
Discussion
Here we have shown that a consequence of SF3B1 mutations in different cancer types is genome-wide selection of hundreds of cryptic 3'SSs. We have shown that the cryptic 3'SSs have specific sequence requirements: AG dinucleotides used as cryptic 3'SSs in SF3B1 mutants are located at the end of the sterically protected region ~13-17 bp downstream of the BP but are >10 bp upstream of nearby canonical 3'SSs, allowing them to avoid competition for splicing. These sequence requirements limit the introns susceptible to cryptic 3'SS selection to those where the BP is located farther from the 3'SS than the typical ~24 bp. While these requirements appear necessary for cryptic 3'SS usage, they are not sufficient, as we did not detect cryptic 3'SS usage in all introns with AG dinucleotides that satisfy these requirements. Characteristics such as RNA conformation, RNA binding protein sites, BP prediction inaccuracies, cryptic or downstream canonical 3'SS strength, gene/transcript expression, sequencing depth, or other factors may also play a role in determining whether cryptic 3'SSs are used and detected by RNA sequencing.
Examining differential splice junction usage allowed us to identify many more cryptic 3'SSs than previous studies while still identifying 61 of 79 cryptic 3'SSs recently reported for CLL SF3B1 mutants using a method based on relative inclusion [5,6,8,10,11]. When examining the three cancer types in our study individually, the number of cryptic 3'SSs identified was highly dependent on the sequencing depth of the samples (S1-S2 Figs., S2 File). Additionally, examining cryptic 3'SSs expressed more highly in the SF3B1 mutants, though not significantly (Fig. 1B), shows a modest enrichment of novel 3'SSs 10-30 bp upstream of canonical 3'SSs. These observations suggest that deeper sequencing will continue to reveal proximal cryptic 3'SSs in SF3B1 mutants that are used very infrequently or are present in lowly expressed genes.
Selection of cryptic 3'SSs in the region downstream of the BP has been reported for some inherited diseases, including those resulting from disrupted tumor suppressor genes such as ATM, NF1, and TP53 [18]. Using a curated list of aberrant splice sites associated with different diseases from the literature, Královicová et al. 2005 found that in cases where cryptic 3'SS selection was not caused by mutation of the 3'YAG consensus sequence, cryptic 3'SSs were often located 19 bp upstream of associated canonical 3'SSs and ~11-15 bp downstream of the BP [18]. Most of the diseases considered in Královicová et al. 2005 are Mendelian diseases where a cryptic 3'SS disrupts or abolishes the function of a single disease gene. In these cases, a mutation in the PPT between the sterically protected and competitive regions has introduced a cryptic 3'SS (Fig. 3D). For cancers with SF3B1 mutations, we suspect that the size of the sterically protected region is slightly altered, allowing existing AG dinucleotides to be used as cryptic 3'SSs in hundreds of genes. It is also possible that SF3B1 mutations could cause destabilization of the U2 snRNP complex or alter interactions with U2AF2, affecting the ability to recognize the canonical 3'SS and leading to cryptic 3'SS selection. However, the rigid distance (~13-17 bp) from the predicted BPs to the cryptic 3'SSs for most of the cryptic 3'SSs is most consistent with a change in the size of the sterically protected region downstream of the branch point.
We found that cryptic 3'SS selection is limited to tumors with mutations in the five ~10 amino acid hotspots in the SF3B1 HEAT 5-9 repeats and that these mutations are associated with cryptic 3'SS selection across different cancer types, even in cancers in which SF3B1 is not recurrently mutated. 58% of these cryptic 3'SSs are out-of-frame relative to nearby canonical 3'SSs, but the biological impact of these cryptic 3'SSs is likely a function of how frequently they are used relative to the nearby canonical 3'SSs. We found that while the cryptic 3'SSs are used more often in the SF3B1-mutated samples compared to wild-type samples, they are used relatively infrequently (<10%) compared to nearby canonical 3'SSs. While the differentially expressed genes between the SF3B1-mutated and wild-type samples are enriched for genes in the NMD pathway, even in-frame cryptic 3'SSs are used at a low frequency, indicating that the associated canonical 3'SS is mostly preferred to the cryptic 3'SS even in SF3B1 mutants. Nonetheless, we identified ten genes, including four with known roles in cancer, which had a high frequency of cryptic splice site usage relative to the nearby canonical splice site. Further studies are required to determine whether low-frequency cryptic 3'SS selection in hundreds of genes, high-frequency cryptic 3'SS selection in a small group of genes, and/or other splicing alterations drive the oncogenic effect of SF3B1 mutation.
Sample selection
Ethics statement. For the chronic lymphocytic leukemia (CLL) samples, the UCSD IRB approved the study and all subjects gave informed consent (Project #080918). Refer to the informed consent for The Cancer Genome Atlas and Harbour et al. for consent information for other cancer samples [7].
CLL. Seven SF3B1-mutated CLL cases and nine SF3B1 wild-type CLL cases were identified from the CLL Consortium database. The mutations were originally characterized by PCR and verified in the RNA-sequencing data [9]. Sample dates were chosen on average 95 days prior to treatment and at least 287 days after prior treatment to select samples with high tumor cell count. Samples were chosen to have relatively similar numbers of IGHV mutated/unmutated and ZAP-70 positive/negative samples (Fig. 4).
BRCA, LUAD, and LUSC. SF3B1 mutant samples were identified using the Broad GDAC TCGA analysis (http://gdac.broadinstitute.org/runs/analyses__2013_02_22/) in TCGA tumor types with no publication restrictions. Samples with SF3B1 mutations outside of Gencode version 14 exons were excluded. We excluded any cancer types with fewer than four SF3B1 mutants or for which paired-end RNA-sequencing data were not available, leaving breast cancer (BRCA), lung adenocarcinoma (LUAD), and lung squamous cell carcinoma (LUSC). We chose 1.25 times as many SF3B1 wild-type controls as mutated samples for each cancer type, selected randomly from samples without mutations in SF3B1 or other splicing factors. RNA sequencing data were downloaded from CGHub [32].
UM. Uveal melanoma samples were downloaded from the Short Read Archive (SRA062359) [7]. As reported in Furney et al., four uveal melanoma samples had SF3B1 mutations in codon 625 and four had wild-type copies of SF3B1 [33].
Library preparation and sequencing for CLL samples

RNA was extracted from peripheral blood mononucleocytes from seven SF3B1-mutated CLL cases and nine SF3B1 wild-type cases per the manufacturer's specifications using Qiagen RNeasy mini-spin columns, and RIN scores were determined using an Agilent Bioanalyzer. RNA was polyA selected and processed using SMART cDNA synthesis (Clontech) to prepare sequencing libraries. Samples were sequenced on Illumina HiSeq2000 instruments, generating an average of 239 million paired 75 bp reads per sample (S1 Fig.).
Adapter trimming
Sequencing adapters and poly-A/T tails were trimmed for CLL samples only using cutadapt version 1.1 (-m 20 -n 10 -b AAGCAGTGGTATCAACGCAGAGTACTTTTTTTTTTT -b AAGCAGTGGTATCAACGCAGAGTACGCGGG -b AAGCAGTGGTATCAACGCAGAGT -b TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT -b AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA) [34]. Read pairs in which one or both reads were shorter than 20 bp were removed.
Splice junction read coverage
Splice junction read coverages were obtained from the SJ.out.tab output file from STAR.
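For readers reproducing this step, the sketch below loads STAR's SJ.out.tab into a table. The nine-column layout follows STAR's documented output format; the file path and the downstream indexing are illustrative assumptions rather than the authors' code.

```python
import pandas as pd

# STAR's documented SJ.out.tab layout: tab-separated, no header, nine columns.
SJ_COLUMNS = ["chrom", "intron_start", "intron_end", "strand",
              "intron_motif", "annotated", "unique_reads",
              "multimapping_reads", "max_overhang"]

def load_junction_coverage(path: str) -> pd.Series:
    """Return uniquely-mapped read coverage keyed by intron coordinates."""
    sj = pd.read_csv(path, sep="\t", names=SJ_COLUMNS)
    return sj.set_index(["chrom", "intron_start", "intron_end"])["unique_reads"]
```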
Novel splice junction identification
Novel splice junctions were defined as those junctions identified by STAR not present in Gencode version 14 that (i) were covered by at least 20 reads summed over all cancer samples in a given analysis, (ii) shared a 5' splice site and/or 3'SS with a Gencode junction, and (iii) had one of the following motifs: GU/AG, CU/AC, GC/AG, CU/GC, AU/AC, GU/AU. Novel junctions were calculated separately for each analysis.
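A minimal sketch of criteria (i)-(iii) follows, assuming junctions and Gencode splice sites are already loaded into simple Python structures; the Junction class and the set names are hypothetical conveniences, not part of the authors' pipeline.

```python
from dataclasses import dataclass

@dataclass
class Junction:
    five_ss: tuple    # (chrom, position) of the 5' splice site
    three_ss: tuple   # (chrom, position) of the 3' splice site
    motif: str        # e.g. "GU/AG"
    total_reads: int  # reads summed over all samples in the analysis

VALID_MOTIFS = {"GU/AG", "CU/AC", "GC/AG", "CU/GC", "AU/AC", "GU/AU"}

def is_novel(j: Junction, gencode_junctions: set,
             gencode_5ss: set, gencode_3ss: set) -> bool:
    """Apply criteria (i)-(iii): coverage, shared Gencode splice site, known motif."""
    return ((j.five_ss, j.three_ss) not in gencode_junctions          # not annotated
            and j.total_reads >= 20                                   # criterion (i)
            and (j.five_ss in gencode_5ss or j.three_ss in gencode_3ss)  # criterion (ii)
            and j.motif in VALID_MOTIFS)                              # criterion (iii)
```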
Splice junction usage
Known and novel junctions that had a coverage of at least 20 reads over all samples, used a known intron motif, and contained a known Gencode 5' splice site or 3'SS were aggregated by gene and tested for differential usage using DEXSeq's testForDEU function (v1.8.0, R v3.0.3) [20]. Splice junctions used in more than one Gencode gene were removed. When multiple cancer types were analyzed, we provided cancer type as a covariate to DEXSeq. Raw p-values were adjusted for multiple hypothesis testing using the Benjamini-Hochberg procedure. To examine the impact of the coverage cutoff of 20 reads summed over all samples on our results, we increased the cutoff to 50, 75, and 100 reads summed over all samples and found that 42%, 32%, and 24% of the significant novel 3'SSs remained at each of these cutoffs. The enrichment for proximal cryptic 3'SSs remained at all cutoffs, so we used the 20 read cutoff to maximize sensitivity.
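The cutoff sensitivity check described above reduces to a few lines of pandas once the DEXSeq results are tabulated; the column names (padj, is_novel_3ss, total_reads) are assumed placeholders for whatever the exported results table contains.

```python
import pandas as pd

def cutoff_sensitivity(res: pd.DataFrame) -> None:
    """res: per-junction DEXSeq results with BH-adjusted p-values and summed coverage."""
    sig = res[(res["padj"] < 0.1) & res["is_novel_3ss"]]
    for cutoff in (50, 75, 100):
        retained = (sig["total_reads"] >= cutoff).mean()
        print(f"cutoff {cutoff}: {retained:.0%} of significant novel 3'SSs retained")
```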
Identification of associated canonical 3'SSs for cryptic 3'SSs
Associated canonical 3'SSs were identified for novel/cryptic 3'SSs as follows. First, all Gencode splice sites that shared a 5' splice site with the novel 3'SS were identified. Then, the closest Gencode 3'SS from these splice sites that was downstream of the cryptic 3'SS was chosen as the associated canonical 3'SS for that cryptic 3'SS. If there was no Gencode 3'SS downstream of the cryptic 3'SS, the closest Gencode 3'SS upstream of the cryptic 3'SS was chosen as the associated canonical 3'SS.
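The lookup can be expressed compactly as below. The sketch assumes plus-strand coordinates (so "downstream" means a larger position) and a set of (5'SS, 3'SS) Gencode intron pairs; both are simplifying assumptions for illustration.

```python
def associated_canonical_3ss(novel_5ss, novel_3ss, gencode_junctions):
    """Return the associated canonical 3'SS for a novel 3'SS (plus strand assumed)."""
    # All Gencode 3'SSs that share the novel junction's 5' splice site.
    shared = [three for five, three in gencode_junctions if five == novel_5ss]
    # Prefer the closest Gencode 3'SS downstream of the cryptic 3'SS...
    downstream = [t for t in shared if t > novel_3ss]
    if downstream:
        return min(downstream)
    # ...otherwise fall back to the closest Gencode 3'SS upstream.
    upstream = [t for t in shared if t < novel_3ss]
    return max(upstream) if upstream else None
```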
Gene set enrichment for genes with cryptic 3'SS usage
We performed a gene set enrichment analysis using GSEA [21] for the genes that contained cryptic 3'SSs by combining the genes that contained the 619 proximal (S3 File) and the 417 distal cryptic 3'SSs (S4 File).
Identification of control 3'SSs
We identified 23,066 control 3'SSs by choosing splice sites that are annotated in Gencode, whose average coverage over BRCA, CLL, and UM samples is greater than 100, and whose 5' splice site does not have any novel 3'SSs. We characterized intronic AG dinucleotides for these control junctions by analyzing the intronic sequence downstream of the predicted branch points minus the last 10 bp of the intron since alternative 3'SSs can be located in the last 10 bp of the intron.
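Expressed as a table filter, the control selection might look like the following; the junctions DataFrame and its columns (annotated, avg_coverage, five_ss) are assumed names, not the authors' code.

```python
import pandas as pd

def select_controls(junctions: pd.DataFrame, novel_5ss: set) -> pd.DataFrame:
    """junctions: one row per 3'SS with columns annotated, avg_coverage, five_ss."""
    return junctions[
        junctions["annotated"]                    # annotated in Gencode
        & (junctions["avg_coverage"] > 100)       # mean coverage over BRCA, CLL, UM
        & ~junctions["five_ss"].isin(novel_5ss)   # 5'SS has no associated novel 3'SS
    ]
```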
Hierarchical clustering
All heatmap rows and columns were clustered using scipy.cluster.hierarchy.linkage with either the "complete" or "single" linkage method.
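A minimal scipy sketch of this clustering step; the counts matrix (samples by cryptic 3'SSs, holding z-scored log2 library-normalized counts) is assumed to be precomputed.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_heatmap(counts: np.ndarray, method: str = "complete"):
    """Cluster heatmap rows (samples) and columns (cryptic 3'SSs) hierarchically."""
    row_link = linkage(counts, method=method)    # or method="single"
    col_link = linkage(counts.T, method=method)
    # Example: cut the sample dendrogram into two groups (mutant-like vs. not).
    groups = fcluster(row_link, t=2, criterion="maxclust")
    return groups, row_link, col_link
```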
SF3B1 mutant allele frequency
Mutant allele frequency was determined by calculating per-base coverages using unique properly paired reads with samtools mpileup for the SF3B1 locus and counting the number of reads supporting either the reference or alternate alleles.
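An equivalent computation with pysam rather than the samtools command line might look like this; the filters approximate "unique properly paired reads", and the function is a hedged sketch rather than the authors' pipeline.

```python
import pysam

def mutant_allele_frequency(bam_path, chrom, pos, ref_base, alt_base):
    """Fraction of reads supporting the alternate allele at a 1-based position."""
    ref_n = alt_n = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for col in bam.pileup(chrom, pos - 1, pos, truncate=True):
            for pr in col.pileups:
                aln = pr.alignment
                # Skip deletions/ref-skips and approximate "unique properly paired".
                if pr.is_del or pr.is_refskip or not aln.is_proper_pair:
                    continue
                if aln.is_secondary or aln.is_duplicate:
                    continue
                base = aln.query_sequence[pr.query_position]
                if base == ref_base:
                    ref_n += 1
                elif base == alt_base:
                    alt_n += 1
    total = ref_n + alt_n
    return alt_n / total if total else float("nan")
```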
Gene expression
Reads that were not contained within Gencode v14 exons in the STAR genomic alignment were discarded. The remaining reads were re-aligned to the Gencode v14 transcriptome using Bowtie2 (v2.1.0, -t -k 400 -X 400 --no-mixed --no-discordant) and transcript expression was estimated using eXpress (v1.3.0, --max-indel-size 20) [40,41]. Gene expression was estimated by summing together the effective counts or FPKM values for all transcripts contained in a gene.
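Summing transcript-level eXpress estimates to gene level is a simple group-by; the results.xprs column names follow eXpress's output table, while tx2gene (a transcript-to-gene mapping built from Gencode v14) is an assumed input.

```python
import pandas as pd

def gene_expression(xprs_path: str, tx2gene: dict) -> pd.DataFrame:
    """Sum eXpress transcript-level estimates (results.xprs) to gene level."""
    xprs = pd.read_csv(xprs_path, sep="\t")
    xprs["gene"] = xprs["target_id"].map(tx2gene)  # transcript -> gene (Gencode v14)
    return xprs.groupby("gene")[["eff_counts", "fpkm"]].sum()
```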
Relative average expression of genes with cryptic 3'SSs
For the green heatmap in Fig. 1D, the average expression (FPKM) of each gene containing a cryptic 3'SS was determined for each cancer type. The average expression values were then normalized for each gene by dividing by the largest average expression of the three cancers for that gene. Therefore each column in the green heatmap in Fig. 1D has one value of 1.0 while the other two values are between 0.0 and 1.0 and represent the expression of the gene in that cancer relative to the maximum.
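This normalization is a one-liner in pandas; avg_fpkm is an assumed DataFrame with cancer types as rows and cryptic-3'SS-containing genes as columns.

```python
import pandas as pd

def relative_average_expression(avg_fpkm: pd.DataFrame) -> pd.DataFrame:
    """Divide each gene's per-cancer average FPKM by its maximum across cancers,
    so every column contains exactly one value of 1.0."""
    return avg_fpkm.div(avg_fpkm.max(axis=0), axis=1)
```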
Definition of HEAT repeats
HEAT repeat locations were defined as in Wang et al. 1998 [15].
COSMIC SF3B1 mutations
The COSMIC v66 complete export was downloaded and the number of mutations at each location in the SF3B1 HEAT domains 5-9 was plotted for locations with at least two observed mutations in COSMIC [42].
Nucleotide frequency plots
Nucleotide frequency plots were constructed using WebLogo (unit_name = 'probability') [22]. Adenine enrichment was calculated by counting the number of adenines and non-adenines at each intron position for a given splice site class and comparing to the number of adenines and non-adenines in control 3'SSs using a Fisher exact test.
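The per-position adenine enrichment test reduces to a 2x2 Fisher exact test at each intron position; the sketch below assumes equal-length sequence strings for the splice-site class of interest and for the control introns.

```python
from scipy.stats import fisher_exact

def adenine_enrichment(seqs, control_seqs, width=50):
    """Per-position adenine enrichment vs. control introns (last `width` bp)."""
    results = []
    for i in range(width):
        a = sum(s[i] == "A" for s in seqs)
        ctrl_a = sum(s[i] == "A" for s in control_seqs)
        table = [[a, len(seqs) - a],
                 [ctrl_a, len(control_seqs) - ctrl_a]]
        odds, p = fisher_exact(table)  # 2x2 test of adenine vs. non-adenine
        results.append((i, odds, p))
    return results
```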
Branch point identification
SVM_BP was used to predict branch points [23]. The SVM_BP code was altered to allow for branch points eight bp from the 3'SS by setting mindist3ss = 8 in svm_getfeat.py (see https://github.com/cdeboever3/svm-bpfinder). SVM_BP was run with options "Hsap 50". When multiple branch points were predicted for one 3'SS, we chose the branch point with the highest sequence score (bp_scr). In some instances, there was more than one cryptic 3'SS associated with a canonical 3'SS, so we randomly chose only one of these cryptic splice sites for further analysis. For Fig. 3C, we plotted the distance from the highest scoring BP predicted for canonical 3'SSs to their associated cryptic 3'SSs as in Fig. 3A. However, the distances for cryptic 3'SSs located less than 13 bp or more than 17 bp from the BP in Fig. 3A were replaced with the distance from the second highest scoring BP. S5C-S5D Fig. were created similarly.
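The fallback to the second-highest-scoring BP can be written as a small helper; the (position, bp_scr) tuples mirror SVM_BP's per-intron output, but the function itself is an illustrative sketch.

```python
def bp_to_cryptic_distance(bp_predictions, cryptic_pos, window=(13, 17)):
    """Distance from the best-scoring BP to the cryptic 3'SS, falling back to the
    second-best BP when the best one lies outside the 13-17 bp window (Fig. 3C)."""
    ranked = sorted(bp_predictions, key=lambda bp: bp[1], reverse=True)  # by bp_scr
    dist = cryptic_pos - ranked[0][0]
    if window[0] <= dist <= window[1] or len(ranked) < 2:
        return dist
    return cryptic_pos - ranked[1][0]
```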
Differential gene expression
Gene expression was estimated as described above. We summed the effective counts from eXpress for all transcripts from each gene to obtain effective read counts for each gene. We provided these read counts to DESeq2 (v1.2.10, R v3.0.3) and tested for differential gene expression using nbinomWaldTest, with cancer type as a covariate for the joint analysis of different cancers [43]. We only tested genes where the sum of effective read counts over all samples was greater than 100. p-values were adjusted using the Benjamini-Hochberg procedure. Gene set enrichment analysis was performed using GSEA [21].
Percent spliced in for cryptic 3'SSs relative to associated canonical 3'SSs

Percent spliced in (PSI) values for cryptic 3'SSs relative to canonical 3'SSs were calculated by dividing the number of reads that span the cryptic 3'SS (c) by the number of reads that span the cryptic 3'SS plus the number of reads that span the canonical 3'SS (a), i.e. c / (c + a), for each sample. The ten 3'SSs with high PSI values in CLL were identified by selecting cryptic 3'SSs whose median PSI was greater than 50% in the CLL SF3B1 mutants but less than 20% in the wild-type samples and whose average coverage was at least 30 junction-spanning reads in the CLL mutant samples. These junctions were also chosen to be out-of-frame, although the cryptic 3'SS in ORAI2 is located in the 5' untranslated region.
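The PSI computation and the screen for high-PSI junctions are straightforward; the helpers below are a sketch with assumed inputs (per-sample junction-spanning read counts and per-sample PSI lists).

```python
import statistics

def psi(cryptic_reads: int, canonical_reads: int) -> float:
    """Percent spliced in for one sample: c / (c + a)."""
    total = cryptic_reads + canonical_reads
    return cryptic_reads / total if total else float("nan")

def high_psi_in_mutants(mut_psi, wt_psi, mut_coverage) -> bool:
    """Screen used for the ten highlighted CLL junctions."""
    return (statistics.median(mut_psi) > 0.5          # PSI > 50% in SF3B1 mutants
            and statistics.median(wt_psi) < 0.2       # PSI < 20% in wild-types
            and statistics.mean(mut_coverage) >= 30)  # >= 30 spanning reads on average
```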
Code, data, and reproducibility

Supporting Information

S1 Fig. Number of uniquely mapped RNA-seq reads from STAR alignment. We sequenced the transcriptomes of peripheral blood mononucleocytes from seven SF3B1-mutated chronic lymphocytic leukemia (CLL) cases and nine SF3B1 wild-type cases. We also obtained data from breast cancer (BRCA; 14 mutant, 18 wild-type), lung squamous cell carcinoma (LUSC; four mutant, five wild-type) and lung adenocarcinoma (LUAD; seven mutant, nine wild-type) samples from the TCGA and uveal melanoma (UM; four mutant, four wild-type) samples. (TIF)

S6 Fig. Beeswarm plots showing the PSI values for the cryptic 3'SS relative to the associated canonical 3'SS in nine of ten genes with high levels of cryptic 3'SS inclusion in CLL SF3B1 mutants (M) compared to wild-type (W) samples that were also expressed in the BRCA samples. The number in the upper corner of each plot is the distance in base pairs from the highest or second-highest scoring BP predicted for the associated canonical 3'SS to the cryptic 3'SS. (TIF)

S1 File. Metadata for samples used in this study. SF3B1-mutated samples have columns for the frequency of the SF3B1 mutation in RNA-seq data, mutation type, codon change and whether the mutation is in the HEAT 5-9 repeats. These columns are empty for SF3B1 wild-type tumor samples. (TSV)

S2 File. Summary of differential junction usage results from DEXSeq. DEXSeq was used to test for differential splice junction usage in a joint analysis of the CLL, BRCA, and UM samples as well as individually for each cancer type. "Novel" indicates that the junction is not annotated in Gencode. "Proximal" indicates that a novel 3'SS is 10-30 bp upstream of a canonical Gencode 3'SS. (TSV)

S3 File. 619 cryptic 3'SSs located 10-30 bp upstream of canonical 3'SSs from the joint BRCA, CLL, and UM analysis. Locations of 5' splice sites and 3'SSs are one-based coordinates that denote the start and end of the intron. The columns COSMIC, TSgene, and ncg denote whether the gene is present in COSMIC, TSGene, or the Network of Cancer Genes, respectively. (TSV)

S4 File. 417 distal cryptic 3'SSs used more often in SF3B1 mutants from the joint BRCA, CLL, and UM analysis. Locations of 5' splice sites and 3'SSs are one-based coordinates that denote the start and end of the intron. The columns COSMIC, TSgene, and ncg denote whether the gene is present in COSMIC, TSGene, or the Network of Cancer Genes, respectively. (TSV)

S5 File. GSEA results for the 912 genes containing the 619 proximal and 417 distal cryptic 3' splice sites used more often in SF3B1 mutants. (XLS)

S6 File. 325 significant cryptic 3'SSs located 10-30 bp upstream of canonical 3'SSs and used more often in SF3B1 mutants from the CLL-only DEXSeq analysis. Locations of 5' splice sites and 3'SSs are one-based coordinates that denote the start and end of the intron. The columns COSMIC, TSgene, and ncg denote whether the gene is present in COSMIC, TSGene, or the Network of Cancer Genes, respectively. (TSV)

S7 File. Percent spliced in for 325 cryptic 3'SSs located 10-30 bp upstream of canonical 3'SSs from the CLL-only DEXSeq analysis. Note that there are only 324 values because one canonical 3'SS was filtered due to low coverage, so a PSI value could not be calculated. (TSV)

S8 File. 272 genes that are differentially expressed between SF3B1 mutant and wild-type samples from the joint analysis of CLL, BRCA, and UM using DESeq2. (TSV)

S9 File. GSEA results for the 272 differentially expressed genes from the joint CLL, BRCA, and UM DESeq2 analysis. (XLS)

S10 File. 192 significant cryptic 3'SSs located 10-30 bp upstream of canonical 3'SSs and used more often in SF3B1 mutants from the BRCA-only DEXSeq analysis. Locations of 5' splice sites and 3'SSs are one-based coordinates that denote the start and end of the intron. The columns COSMIC, TSgene, and ncg denote whether the gene is present in COSMIC, TSGene, or the Network of Cancer Genes, respectively. (TSV)

S11 File. Percent spliced in for 192 cryptic 3'SSs located 10-30 bp upstream of canonical 3'SSs from the BRCA-only DEXSeq analysis. Note that there are only 191 values because one canonical 3'SS was filtered due to low coverage, so a PSI value could not be calculated. (TSV)

S12 File. 33 genes that are differentially expressed between SF3B1 mutant and wild-type CLL samples using DESeq2. (TSV)
"year": 2015,
"sha1": "eb0675dd2ff7a5480f41a9797c933187b57ade9d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1004105&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb0675dd2ff7a5480f41a9797c933187b57ade9d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science",
"Biology"
]
} |
Autoimmunity against p53 predicts invasive cancer with poor survival in patients with an ovarian mass
Serum autoantibodies against the p53 protein (p53 AAb) were analysed with a newly developed enzyme-linked immunosorbent assay (ELISA) based on highly purified and renatured p53. In a hospital-based cohort study, preoperative sera from 113 patients with ovarian cancer, 15 patients with borderline tumours and 117 patients with benign tumours of the ovaries were studied. The prevalence of p53 AAb in patients with invasive cancer was 19% (21/113). No p53 AAb were found in patients with borderline lesions or benign tumours. The ELISA had a specificity for malignancy of 99% (1 of 117; the false-positive came from a patient with severe diabetes mellitus) and a likelihood ratio (LR+) for a positive test result of 21.7 (elevated CA125 and malignancy: LR+ 3.7). p53 AAb were only detectable in patients with immunohistochemical staining of nuclear p53 in the tumour (P = 0.006). Presence of p53 AAb positively correlated with tumour stage (P = 0.034) and grade (P = 0.009). Kaplan–Meier analysis showed both a shortened overall survival (P = 0.0016, log-rank) and relapse-free survival (P = 0.055) for p53 AAb-positive patients (median follow-up 22 months). High titres related to even worse prognosis. p53 AAb independently related to poor survival when adjusting for stage (P = 0.026), grade (P = 0.029) and residual disease after surgery (P = 0.005). Preoperative findings of an adnexal mass with serum p53 AAb are strongly suggestive of an aggressive invasive ovarian cancer. © 2000 Cancer Research Campaign
Mutations of the p53 tumour suppressor gene are associated with tumorigenesis of most types of human cancers, including ovarian cancer (Hollstein et al, 1991; Harris and Hollstein, 1993; Wen et al, 1999). Mutational inactivation of the p53 gene leading to an altered p53 protein occurs during tumorigenesis of approximately 50% of ovarian cancers (Marks et al, 1991; Milner et al, 1993; Wen et al, 1999). In more than 90% of cases, these alterations are point missense mutations in the highly conserved core region resulting in a functionally inactive mutant protein (Soussi and May, 1996). The mutation frequently causes a delayed turnover of the protein and thereby an extended half-life compared to the wild-type protein. It accumulates in the tumour cell nucleus to high levels detectable by immunohistochemical methods (Lane, 1992; Legros et al, 1994; Runnebaum et al, 1996). Mutant p53 protein was shown to be immunogenic in different species, leading to the generation of autoantibodies (p53 AAb). These antibodies were directed against the immunodominant linear epitopes located in the amino- and carboxy-terminal regions of p53 (Schlichtholz et al, 1992; Lubin et al, 1993; Legros et al, 1994; Soussi and May, 1996).
p53 AAb were first detected in patients with breast cancer (Crawford et al, 1982), and in children with a variety of malignancies (Caron de Fromentel et al, 1987). Ovarian cancers were among the most immunogenic malignancies inducing p53 AAb response. In a comparative study, patients with cancer of the lung, breast, ovary or colon showed the highest p53 AAb prevalence (Angelopoulou et al, 1994).
Early investigations used electrophoresis-based systems such as Western blot analysis or immunoprecipitation, applicable to only small patient samples (Crawford et al, 1982; Caron de Fromentel et al, 1987). The development of ELISA systems allowed for rapid and facilitated analysis of larger patient series, particularly to study the diagnostic and prognostic value of p53 AAb (Angelopoulou et al, 1997). However, it has been hypothesized that the specificity of the early assay systems may have been limited due to single-step purification of the antigen p53, allowing binding of unspecific antibodies to epitopes exposed by p53 protein denaturation or to fragments of other proteins.
In the present study, an ELISA was developed based on double-purified and renatured recombinant human p53 protein (Nedbal et al, 1997). Using this new assay, we assessed the prevalence of p53 AAb in preoperative patients with benign, borderline or malignant adnexal masses and its relation to histopathological and clinical parameters, particularly the clinical outcome. Implications for the clinical management of patients with a suspicious adnexal mass are discussed.
Patients and materials
In this retrospective cohort study, we included 245 Caucasian women, 128 of whom were newly diagnosed with ovarian malignancy. One hundred and seventeen patients (age range: 17-94 years, median 49 years) who were operated on for benign lesions of the ovaries served as controls. Patients were enrolled at the Department of Obstetrics and Gynaecology of the University of Ulm during the period between January 1993 and November 1997. Diagnoses were proven histologically. Cases and controls were included on the basis of availability of pre-treatment serum samples. Serum had been obtained by centrifugation and was stored at -80°C until analysis. One hundred and thirteen women aged between 21 years and 89 years (median 61 years) had primary invasive ovarian cancer. Fifteen patients (age range: 23-75 years, median 50 years) were diagnosed with borderline lesions of the ovaries. Patients with borderline tumours were initially treated by oophorectomy. Patients with invasive cancer underwent cytoreductive surgery followed by chemotherapy. They were postoperatively staged according to the classification of the International Federation of Gynaecologists and Obstetricians (FIGO). Where complete surgical cytoreduction could not be achieved, the presence of visible residual tumour was recorded. Forty-five patients had visible tumour residues and 40 patients were resected R0 (information missing on 28). Platinum-based chemotherapy was given to 76 of 113 (67%) of the patients. Thirteen patients with stage I disease received no adjuvant therapy. Eighteen patients with stage III or IV disease received chemotherapy not containing platinum. Three patients with stage II and stage III disease refused chemotherapy. Another three patients with progressive disease underwent surgery, but died due to poor general condition before chemotherapeutic treatment could be started. Clinical data were obtained from the patients' charts and the histopathological reports. Values of the tumour marker CA125 were routinely determined at admission. Values below 35 U ml^-1 were considered normal. CA125 test results were available for 106 cancer cases and 112 controls with benign lesions. Patients with invasive cancer were followed up with respect to relapse and survival until November 30, 1998. Patients with borderline lesions and those with benign tumours were screened only for the presence of serum p53 AAb and excluded from further analysis.
ELISA
The ELISA was based on double-purified recombinant human wild-type p53 protein (Nedbal et al, 1997). In brief, p53 was expressed in E. coli tagged amino-terminally with His6 and purified through metal chelate affinity chromatography under denaturing conditions (8 M urea). The denatured p53 protein was refolded by stepwise removal of urea in a dialysis procedure and further purified through gel permeation (Sephadex 200, Pharmacia, Erlangen, Germany). Integrity of refolded p53 was confirmed with the monoclonal antibody PAb 421, recognizing wild-type p53 binding sequence-specifically to a 20 bp oligodeoxyribonucleotide (5′-GGACATGCCCGGGCATGTCC-3′). Microtitre plates (Maxisorp, Nunc, Wiesbaden, Germany) were coated with 50 µl per well of double-purified p53 (0.2 µg ml^-1) in PBS. Plates were incubated overnight at 4°C and blocked for 1 h at 37°C in 5% non-fat milk powder (Merck, Darmstadt, Germany) in PBS. Serum was diluted 1:100 and incubated for 1 h at 37°C. After washing with PBS/0.05% Tween 20, peroxidase-conjugated goat antiserum to human immunoglobulin (Dianova, Hamburg, Germany) was added and incubated at 37°C for 30 min. Bound antibody was detected with tetramethylbenzidine and the results were monitored in an automatic microtitre plate reader. The ELISA cut-off was defined as twice the mean absorbance obtained using sera from 100 healthy blood donors without a history of cancer. The cut-off corresponded to an antibody titre of 15 binding units. For measuring the antibody titre, the assay was standardized using a positive human serum (positive standard) at ample supply showing no change of titre during storage. The binding activity of the 1:600-diluted positive standard was defined as 1 binding unit. The binding activity showed linearity over a broad range, with 1 binding unit lying in the middle of the linear range. An internal standard curve was used for calculation of the corresponding binding units of the human sera tested. Positive sera were diluted in a range from 1:100 to 1:100 000 to keep the analysis in the linear range of the standard curve. All samples were assayed in duplicate and all quantitative values were means of duplicate determinations. The analyst was unaware of any clinical patient data.
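The conversion from absorbance to binding units can be illustrated as below. The standard-curve points here are invented placeholders (the real curve comes from the serially diluted positive standard), so only the structure of the calculation is meant to match the text.

```python
import numpy as np

# Placeholder standard curve (invented points): binding units vs. OD of the
# serially diluted positive standard; 1 unit == activity of the 1:600 dilution.
STD_UNITS = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
STD_OD = np.array([0.12, 0.21, 0.38, 0.70, 1.28])

def od_to_binding_units(od: float, dilution: int = 100) -> float:
    """Interpolate within the curve's linear range and scale samples diluted
    beyond the assay's reference 1:100 dilution."""
    return float(np.interp(od, STD_OD, STD_UNITS)) * (dilution / 100)

def elisa_cutoff(healthy_donor_ods: np.ndarray) -> float:
    """Cut-off: twice the mean absorbance of 100 healthy blood donors."""
    return 2 * float(np.mean(healthy_donor_ods))
```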
Western blot
For confirmation of ELISA test results, Western blots were performed. The amount of purified p53 protein loaded on a minigel was 0.3 µg lane^-1, which converts into 5 µg cm^-2 on the membrane. The sera were tested at a dilution of 1:100. Bound antibodies were detected with alkaline phosphatase-coupled goat anti-human antibodies and colour developed using bromo-chloro-indolyl phosphate/nitroblue tetrazolium (BCIP/NBT).
Immunohistochemistry of p53 protein in tumour tissue
Results of routine immunohistochemical (IHC) staining of p53 in tumour tissue were available from 56 patients. Tissue sections (7 µm) were taken from paraffin-embedded ovarian tumour specimens, mounted on glass slides, and stained by the avidin-biotin immunoperoxidase method using the anti-human p53 mouse monoclonal antibody DO-1 (Dianova). Immunoreactivity was assessed by recording the proportion of stained cells and the staining intensity. Percentages of positive cells were scored 1 for 0-10%, 2 for 10-50% and 3 for > 50%. Tissue sections containing 10% or more stained tumour cells were considered IHC-positive. Evaluations were independently performed by two pathologists of the institution.
Statistical analysis
For patients with invasive ovarian cancer, Fisher's exact test was applied to test for an association of p53 AAb with each of the following dichotomized study parameters: Age (< 50 years vs ≥ 50 years), menopausal status (premenopausal vs postmenopausal), oestrogen and progesterone receptor status (IRS ≤ 3 vs IRS > 3; immuno-reactive score, score for staining intensity × score for percentage of positive cells), tumour cell type (serous vs other than serous), and presence of lymph-node metastases at primary surgery (present vs absent). In some cases information on particular parameters was not available.
Cochran-Armitage trend test was conducted across the polytomous variables tumour stage (FIGO), grading, and immunohistochemical results on p53. Our hypothesis was that the frequency of p53 AAb would be higher in patients with advanced tumour stages or less differentiated tumours. We also expected that the immune response to p53 would depend on the grade of p53 overexpression in tumour cells as assessed by the proportion of IHC-stained cells and staining intensity. Therefore P-values of the trend test were calculated one-sided.
Person-months were accumulated up to the date of relapse or death, loss to follow-up, or the end of the follow-up period. Patients were stratified by presence or absence of p53 AAb. Kaplan-Meier curves were calculated for relapse-free interval and overall survival time. Differences between the Kaplan-Meier curves were evaluated by the log-rank test. Separately, the p53 AAb titre was considered as a predictive variable for survival. A cut-off value was empirically selected by starting at the median (424 units) and then cutting at each value above and below until significance was obtained. The level of significance was set to 0.05. All data analyses were carried out using SAS software (SAS Institute Inc, Cary NC, USA).
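A present-day equivalent of this survival comparison (the authors used SAS) could use the lifelines package; the DataFrame columns months, died and aab_pos are assumed names for follow-up time, death indicator and antibody status.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_aab_status(df: pd.DataFrame) -> float:
    """Plot Kaplan-Meier curves stratified by p53 AAb status; return log-rank p."""
    pos, neg = df[df["aab_pos"]], df[~df["aab_pos"]]
    kmf = KaplanMeierFitter()
    ax = kmf.fit(pos["months"], pos["died"],
                 label="p53 AAb positive").plot_survival_function()
    kmf.fit(neg["months"], neg["died"],
            label="p53 AAb negative").plot_survival_function(ax=ax)
    res = logrank_test(pos["months"], neg["months"],
                       event_observed_A=pos["died"], event_observed_B=neg["died"])
    return res.p_value
```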
RESULTS
Descriptive characteristics of the 113 ovarian cancer patients evaluated in the study are shown in Table 1. The prevalence of p53 AAb as detected with the described ELISA in the ovarian cancer patients at the time of diagnosis was 18.6% (21 of 113). In these patients, titres ranged from 20 units to 51 400 units; the median was 424 units. Two patients had excessively high titres of 23 600 units and 51 400 units. The specificity of our p53 AAb ELISA was 99.2%. The assay showed a positive reaction for only one of the 117 patients in the control group, who had a titre of 34 units (mean of three independent duplicate measurements). The histological diagnosis for this patient was a unilateral benign cystadenoma simplex. p53 immunohistochemical analysis specifically performed on this material was negative. This patient, who had severe type 1 diabetes mellitus, was followed up for 3 years without evidence of malignancy. Positivity for p53 AAb in the ELISA was confirmed by Western blot. All sera with an antibody titre higher than 30 units were clearly detectable in Western blot analysis. Some sera with a lower titre produced only a faint signal in Western blot analysis, possibly due to a lower sensitivity of this assay. None of the 15 patients with borderline lesions had serum p53 AAb detectable by ELISA or Western blot analysis. In terms of its use as a diagnostic test for malignancy, the likelihood ratio for a positive test result in the ELISA was 21.7; the likelihood ratio for a negative result was 0.8. For comparison, the sensitivity of the CA125 test was 84.9% with a specificity of 76.8%; likelihood ratios were 3.7 for a positive result and 0.2 for a negative result. Table 1 shows the presence of p53 AAb in relation to clinicopathological parameters. In patients with FIGO stage I and II disease, three of 36 (8%) had p53 AAb in their serum. The frequency of p53 AAb was positively related to tumour stage, showing a higher prevalence in the advanced stages FIGO III and IV (14 of 62, 23% and four of 15, 27%, respectively; one-sided trend test over all stages: P = 0.034). Likewise, presence of p53 AAb was related to grade of tumour cell differentiation (one-sided trend test: P = 0.009). The presence of p53 AAb was independent of the menopausal status of the patient and the immunohistochemically determined hormone receptor status. p53 AAb were not associated with particular histologic cell types of the tumour or with lymph-node involvement.
Patients were followed up for a median time of 22 months (range: 1-68 months). Mortality during follow-up was significantly higher in the p53 AAb-positive group than in the p53 AAb-negative group (71% vs 32%, P = 0.001). Kaplan-Meier analysis revealed that patients with p53 AAb died significantly earlier (Figure 1, P = 0.002). The curves were nearly identical during the first 12 months of follow-up; one-year survival rates were 85% and 84%, respectively. Thereafter, the graph for the p53 AAb-positive group decreased rapidly. Survival rates at 24 months were 68% and 39%, and survival rates at 36 months were 61% and 19% for the AAb-negative and the AAb-positive group, respectively.
We separately studied variables previously identified as predictors of poor outcome in ovarian cancer (stage of disease, tumour grading, residual tumour after surgery). The log-rank test confirmed for our cohort that patients were at higher risk of earlier death when having advanced FIGO III and IV stage (P = 0.0002), high tumour grading (P = 0.0003), or residual tumour after surgery (P = 0.0007). To investigate whether p53 AAb was an independent predictor, we repeated the analysis and adjusted for these variables. Survival was still significantly different by p53 AAb status after adjusting for tumour stage (P = 0.026), grading (P = 0.029) and presence or absence of residual tumour (P = 0.005). Differences in the survival curves of p53 AAb-positive and p53 AAb-negative patients were also significant (P = 0.030) when only patients treated with platinum-based chemotherapy were considered (plots not shown). Figure 2 shows the Kaplan-Meier curves with relapse as the end-point of follow-up. Patients with detectable p53 AAb had a shorter relapse-free survival time; the difference was of borderline significance (P = 0.055). Comparing survival in relation to antibody level within the p53 AAb-positive patients, a significant difference was obtained with the cut-off value set to 250 units, which corresponds to the lower tertile in that sample. Survival was shorter for patients whose p53 AAb titres lay above this value (P = 0.027). The median survival times in these two groups were 17 and 34 months, respectively.
p53 AAb were detected only in sera of patients with tumours containing 10% or more cells immunohistochemically positive for nuclear p53 overexpression (P = 0.006). Table 2 shows the frequencies of serum p53 AAb in relation to the proportion of immunohistochemically positive-stained cells. No p53 AAb were detectable when IHC was negative, whereas p53 AAb were present in 36% of patients whose tumours had more than half of the cells stained positive. The trend test confirmed our hypothesis of a positive association of p53 AAb frequency with the proportion of IHC-stained cells (P = 0.002). No association was found between p53 AAb and the staining intensity. p53 AAb were not detectable in 26 of 38 (68%) patients whose tumours showed p53 protein overexpression in tissue. Within this group, 18 of 28 (64%) of patients with more than 50% of cells positive for p53 by IHC had a negative antibody test.
DISCUSSION
Based on our newly developed ELISA, we found p53 AAb in 21 of 113 (19%) ovarian cancer patients with a specificity of 99% for invasive ovarian cancer. All ELISA-positive samples were confirmed by positive Western blot analysis. The one false-positive sample was taken from a patient with severe diabetes mellitus, possibly with an immunological cross-reactivity against various nuclear proteins including p53. Previously reported prevalences of p53 AAb in ovarian cancer ranged from 15% (Angelopoulou et al, 1994) to 46% (Vogl et al, 1999). The prevalence of 19% in our cohort of ovarian cancer patients is comparable to that of other epithelial cancers, particularly colorectal cancer with an antibody frequency between 16% and 23% (Angelopoulou et al, 1994), breast cancer with reported p53 AAb frequencies between 9% and 15% (Schlichtholz et al, 1992; Crawford et al, 1982) and lung cancer with frequencies between 8% and 24% (Lai et al, 1998; Schlichtholz et al, 1994). Prevalences in previous reports appeared to depend on the type of assay used. In ovarian cancer, higher prevalences were found when ELISAs were used for detection (Green et al, 1995; Gadducci et al, 1996; Vogl et al, 1999). In these studies, ELISA results were not validated by other assay techniques and no control groups were tested, raising questions about the specificity of these assays. In a comparative analysis of three ELISA systems recently published by others, our solid-phase ELISA using double-purified and renatured wild-type p53 protein provided the highest diagnostic accuracy in correctly identifying cancer cases from a cohort of cases and controls (Rohayem et al, 1999). The other two test systems were a solid-phase ELISA using eukaryotically expressed wild-type p53 and a two-site sandwich ELISA using native p53 extracts from tumour cells.
Presence of p53 AAb positively correlated with tumour grading (P = 0.009) and stage (P = 0.034), indicating an aggressive behaviour of p53 AAb-inducing ovarian cancers. Tumour aggressiveness was also reflected by the shortened survival time and relapse-free period of antibody carriers. There are only few data on the prognostic value of p53 AAb in ovarian cancer. Angelopoulou et al found in bivariate analysis a significantly increased risk only for relapse (Angelopoulou et al, 1996); in multivariate analysis, however, p53 AAb was not an independent prognostic factor. Gadducci et al found that progression-free survival and overall survival of advanced (FIGO III and IV) ovarian cancer patients were not related to preoperative serum p53 AAb status (Gadducci et al, 1999). The data based on our newly developed ELISA clearly attribute a prognostic value to p53 AAb. Presence of p53 AAb was still an independent predictor of worse clinical outcome even after adjusting for the well-established prognostic parameters. The association of poor overall survival with p53 AAb as detected by our ELISA was unequivocally more clear-cut than the association with p53 mutation in the tumour and/or p53 IHC reported in previous studies including ours. Particularly in multivariate analyses, p53 IHC was not an independent predictor of poor survival (Eltabbakh et al, 1997; Wen et al, 1999). Quantitation of the p53 AAb titre generated additional predictive information on patients' outcome. Our results compare favourably with studies on breast cancer and lung cancer showing a positive p53 AAb status to be an independent prognostic variable for poor overall survival (Peyrat et al, 1995; Lai et al, 1998). The prognosis of patients with invasive ovarian cancer has changed little over the past two decades. Although early-stage ovarian cancer is highly curable, more than two thirds of patients present with advanced disease with a poor survival rate.
Transvaginal sonography (TVS), in general of low specificity, has been demonstrated to be a valuable screening technique with a relatively high sensitivity for early-stage ovarian cancer in high-risk populations of asymptomatic women (DePriest et al, 1997). Determination of the tumour marker CA125 (sensitivity 84.9%, specificity 76.8% in the present study) has failed to compensate for the lack of specificity of TVS. Owing to its high specificity (99%) for malignancy, the p53 AAb ELISA is suitable to be employed in a two-stage procedure as a confirmatory test on individuals who had a positive or suspicious screening test. In this case, the high likelihood ratio for a positive result of 21.7 of the p53 AAb ELISA produces a decisive change from pre-test to post-test probability of malignancy. Future prospective studies should test whether the p53 AAb ELISA can help to increase the diagnostic accuracy of ultrasound in order to avoid delaying the operation of ovarian cancer patients with a low morphologic suspicion index. A positive p53 AAb test result in patients with an adnexal mass should prompt referral to a centre with an infrastructure allowing explorative laparotomy. If laparoscopy is initially performed, the availability of immediate and accurate pathologic diagnosis in such a centre is an important prerequisite for the treatment of these patients (Dottino et al, 1999).
Mechanisms leading to p53 immunogenicity still remain unexplained. Several studies consistently demonstrated that most antibodies preferentially target immunodominant linear epitopes contained in the amino (residues 1-95) and carboxy (residues 300-393) termini of p53. Both are exposed to the immune system due to their location on the surface of the protein, while the core domain prone to mutations is buried in the molecule (Schlichtholz et al, 1992; Legros et al, 1993; Lubin et al, 1993). These observations suggested that the specific immune response could be triggered by the level of nuclear p53 expression (Soussi and May, 1996). Our data on ovarian cancer support this hypothesis. All patients with circulating p53 AAb showed positive tumour immunostaining indicating p53 accumulation (P = 0.006). Such an association has also been reported for breast cancer (Mudenda et al, 1994) and colorectal cancer (Houbiers et al, 1995). Additionally, we found that the frequency of p53 AAb was positively related (P = 0.002) to the proportion of stained tumour cells, indicating an underlying quantitative effect on antibody induction. Only a subset of patients with IHC-positive tumours had a positive result in our ELISA, suggesting that p53 accumulation in ovarian cancer is not sufficient to induce a detectable humoral response.
In summary, p53 AAb, as detected by double-purified and refolded p53 antigen, were highly specific for malignancy in patients with ovarian mass and correlated with aggressive behaviour of the cancer. Presence of p53 AAb independently predicted poor clinical outcome, particularly above a cut-off value of 250 units. Quantitation of p53 AAb might contribute additional prognostic information. Further studies may prove the p53 AAb ELISA to be a valuable tool to preoperatively identify patients with aggressive ovarian tumours. p53 AAb testing could improve diagnostic accuracy of screening methods and consequently reduce stage at detection, decrease stage-specific mortality and could help in clinical decision-making on therapeutic regimens such as adjuvant p53 gene replacement therapy in addition to standard chemotherapy.
"year": 2000,
"sha1": "93494bd7fa7b7f1183a835f1aef3f9b3974b35d7",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/6691446.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "93494bd7fa7b7f1183a835f1aef3f9b3974b35d7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Attribute based Weighted Mechanism for Community Detection in Social Internet of Things
A new paradigm of the Internet of Things (IoT) is emerging rapidly by socializing smarter physical devices, called the Social Internet of Things (SIoT). Social relationships established between these objects make them autonomously connected for services, without any human intervention. Since SIoT is a large-scale network with huge amounts of data involved, its content spreading behaviour needs to be exploited. In order to ensure the growth of the content spread, the large-scale SIoT network is divided into several communities based on social attributes in this work. We first divide the SIoT network into high-quality Sociality based Weighted Communities (SWC). Social attributes like user preferences, social similarities, and mutual friends' degrees are the main metrics for achieving the best rate function. The weighted method based on these social attributes determines which nodes belong to their respective communities. Our approach also controls local community augmentation using clustering concepts. Finally, a Credential Acclaimed Information Spreading (CAIS) mechanism is proposed, which selects the best node with the maximum credential to boost the content spreading behaviour in the detected communities of the SIoT network. The proposed social-driven attribute based weighted mechanism for community detection is validated using
Introduction
With the massive growth of technology and the incorporation of smartness into physical devices, the Internet of Things (IoT) is advancing real-world communication. The ultimate aim of the IoT is to improve everyday life by automating routine services. The progression of the IoT promises benefits across a broad variety of application fields by enabling smarter device-to-device communication [1]. Implementing social perceptions in the IoT paradigm unveils a new era called the Social Internet of Things (SIoT) [2]. The SIoT model describes an environment that allows humans and smart devices to interact within a society through numerous varieties of relationships. The SIoT promotes the socialization of smart devices connected with one another without any human intervention [3]. SIoT devices can establish any number of friendships, which in turn form several communities based on the shared attributes of the devices involved. Therefore, detecting and characterizing the large-scale SIoT network as various communities is imperative for better service discovery. The nodes are classified and grouped into several communities, which serve as the elementary components of SIoT networks. The leaders of these communities effectively influence the other nodes present. The two main steps in community detection are the identification of promising leaders in the SIoT network and the examination of node similarities to construct the various communities.
Usually, community detection in networks is based on conventional algorithms such as Louvain, Girvan-Newman and Bron-Kerbosch. The Louvain algorithm detects disjoint communities in a directed social network using greedy optimization of modularity. It tends to be one of the fastest community detection algorithms, with a high modularity score, but it is not suited to smaller communities, which limits its resolution [4]. The Bron-Kerbosch algorithm works well for unweighted, undirected graphs when discovering overlapping communities. It computes maximal cliques by searching for fully connected subsets of nodes in a network; its major problem is that it does not hold for output-sensitive problems [5]. The Girvan-Newman algorithm iteratively removes the edges that lie on the highest number of shortest routes between devices, i.e. the edges with maximum betweenness that join devices across communities. Its major problem is that it is not suited to detecting communities in huge and complex network structures [6]. Rosvall et al. proposed the Infomap algorithm, which detects communities by using random walks to evaluate the content-spreading behaviour in networks. The algorithm encodes the content of the network as an encoded graph transmitted via a restricted channel.
Finally, the original graph is decoded by constructing the set of probable partitions. The fewer the candidate partitions, the more information the code conveys about the network; the method is therefore not suited to networks with many participants [7].
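For readers who want to reproduce these baselines, a minimal sketch using the networkx library follows (assuming networkx >= 2.8 for louvain_communities; the toy graph is a placeholder for a real SIoT topology):

```python
import networkx as nx
from networkx.algorithms import community

# A small toy graph standing in for an SIoT network.
G = nx.karate_club_graph()

# Louvain: greedy modularity optimization (fast, but resolution-limited).
louvain = community.louvain_communities(G, seed=42)

# Girvan-Newman: iteratively remove the highest-betweenness edges (slow on large graphs).
gn_iter = community.girvan_newman(G)
first_split = next(gn_iter)  # communities after the first edge-removal round

print(len(louvain), "Louvain communities")
print([sorted(c) for c in first_split])
```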
The paper is organized as follows: Section 2 summarizes related work; the proposed SWC detection model is discussed in Section 3; the CAIS mechanism is explained in Section 4; the experimental evaluation is described in Section 5; the results and discussion are elaborated in Section 6; and the conclusion and future work are offered in Section 7. In related work, a community detection technique was proposed to divide the SIoT network into many high-quality communities, after which content spreading is maximized via two phases, a candidate phase and a greedy phase, to select the best candidate for maximum content sharing; however, the influence of node content on the information spread is ignored [11]. The incorporation of the Louvain algorithm with a fuzzy network has been proposed for finding communities in an SIoT network.
The Shapley index is used as the primary degree for obtaining the fuzzy measures [12]. Liu et al. surveyed progress in community detection via deep learning. Deep learning models learn the patterns of nodes, neighbourhoods, and subgraphs present in their respective communities in real-world scenarios. Currently, convolutional neural networks (CNN), autoencoders and generative adversarial networks (GAN) are mostly used for community detection, but the following gaps remain between deep learning and community detection: detection and recognition of the spatial variations among various communities is not yet done, and the combination of temporal information with spatial content-based information is yet to be learned by these deep networks [13]. Though these algorithms are suited to social networks, they are not really appropriate for SIoT networks. In this work, we aim to divide the SIoT network into several smaller communities and maximize the content spread among these communities. The major contributions of our work are the following: 1. A Sociality based Weighted Community detection (SWC) algorithm for dividing the SIoT network into high-quality smaller communities is developed.
2. An effective mechanism for maximizing the content spreading behaviour among the detected communities is proposed via a Credential Acclaimed Information Spreading (CAIS) strategy.
3. The proposed model is evaluated on three different datasets: the ARAS, MIT and CASAS datasets.
4. Finally, the performance of the proposed attribute-based community detection is compared with various existing approaches.
Social Attribute based Community Detection
From the research literature, an SIoT network is a random large-scale network containing many nodes with diverse relationships. Generally, SIoT networks are represented in terms of weighted graphs by including the social properties of the links between nodes. In our work, we use a weighted method based on social attributes such as user preferences, social similarities, and the degrees of mutual friends to achieve the best rate function. If two nodes are both connected to a node with a low degree, those nodes behave with linked characteristics; this is similar to the way a few people discussing an uncommon theme tend to share the same interests. Therefore, it is clear that in an SIoT network, content-spreading behaviour can be maximized only if the network is divided into several smaller communities.
Our aim is to divide a large SIoT network into several small communities based on the communication relationships between the nodes present in each community.
Sociality based Weighted Community (SWC) Division Mechanism
Let us assume that our SIoT network possesses only local structures, with a sub-graph containing a few nodes. Hence, we initially choose local clusters and then increase the size of these clusters consistently by choosing the nodes with the best rate function. A resolution parameter (here denoted α) regulates the number of communities formed and is always positive. When α ≤ 0.5 only one community forms, and when α > 2 many small communities form. Since we pursue several smaller communities, we chose α = 1.
Our algorithm uses social attributes such as user preferences, social similarities, highest degree and maximum mutual friends to achieve the best rate function. The social similarities obtained from the user preferences are considered Direct Intimacy (DI), and the degrees of the mutual friends are considered Indirect Intimacy (II). Consider two nodes u and v. DI is measured as the summation of the weights associated with node u multiplied by the summation of the weights associated with node v; when there is no social similarity between u and v, DI = 1. II is measured via the reciprocal of the mutual friends' degrees. Now consider adding a node n to sub-graph S. The change in the best rate function of S with and without n equals the rate function of S including n minus the rate function of S without n. If this change plus the summation of the associated weights is greater than zero, node n improves the rate function (the community fitness) and is added to sub-graph S. If the change plus the summation of the weights is less than zero, node n reduces the community fitness and is not joined to sub-graph S. Thus a node joins a community only if it improves that community's fitness function. This method is therefore suitable for exploring overlapping community detection by controlling the number of communities through the value of α.
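A small sketch of how these intimacy metrics might be computed on a weighted graph follows; the function names and the exact form of II (a reciprocal-degree sum over mutual friends) are our assumptions, since the paper's symbols were lost in extraction:

```python
import networkx as nx

def direct_intimacy(G, u, v):
    """DI(u, v): product of the summed edge weights around u and v.
    Falls back to 1 when the nodes share no social similarity, as in the text."""
    if not (set(G[u]) & set(G[v])) and not G.has_edge(u, v):
        return 1.0
    wu = sum(d.get("weight", 1.0) for _, _, d in G.edges(u, data=True))
    wv = sum(d.get("weight", 1.0) for _, _, d in G.edges(v, data=True))
    return wu * wv

def indirect_intimacy(G, u, v):
    """II(u, v): sum of reciprocal degrees over mutual friends (assumed form)."""
    mutual = set(G[u]) & set(G[v])
    return sum(1.0 / G.degree(m) for m in mutual) if mutual else 0.0
```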
Local Community Augmentation Control
An SIoT network contains a huge number of triangular structures, which can negatively influence a community; a node embedded in such a structure is not immediately added to S. Therefore, a reliability check is applied: if adding the node keeps the community reliable, the node is added to S; otherwise, even a node with the best fitness function is not added to S.
SWC Algorithm
Our SWC algorithm proceeds through the following steps. ... 3. Select a random node n, which does not belong to any of the local communities.
4. Use the Direct Intimacy (DI) and Indirect Intimacy (II) functions to compute the rate function of the neighbouring nodes. 5. Choose the node with the best rate function. If the best rate function is non-negative, estimate the reliability of node n relative to the community for sub-graph S.
6. If the best rate function is negative, then repeat from step 3.
7. If the reliability of node n relative to the local community is greater than -0.05, then n is added to S, creating a larger local community; else, repeat from step 3.
8. Recompute the best rate function and reliability of each node.
9. If any node possesses a negative rate function and its reliability does not satisfy the constraint, discard that node from the enlarged local community, which in turn generates a new sub-graph.
11. If all nodes fulfil the constraint of rate function and its reliability, then return to step 4.
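Putting steps 3-11 together, a schematic Python sketch of the SWC growth loop is given below. The rate function is assumed to take the standard local-fitness form f(S) = k_in/(k_in + k_out)^α with α = 1, and the reliability measure is simplified to the fitness gain, checked against the -0.05 threshold of step 7; neither simplification is claimed to be the paper's exact formulation.

```python
import networkx as nx

def fitness(G, S, alpha=1.0):
    """Local community fitness f(S) = k_in / (k_in + k_out)^alpha (assumed form)."""
    k_in = 2 * G.subgraph(S).number_of_edges()
    k_out = sum(1 for u in S for v in G[u] if v not in S)
    total = k_in + k_out
    return k_in / total**alpha if total else 0.0

def grow_community(G, seed, alpha=1.0, tau=-0.05, max_iter=1000):
    S = {seed}
    for _ in range(max_iter):
        base = fitness(G, S, alpha)
        frontier = {v for u in S for v in G[u]} - S
        if not frontier:
            break
        # Steps 4-5: rate-function gain of each neighbour; pick the best one.
        best = max(frontier, key=lambda v: fitness(G, S | {v}, alpha))
        gain = fitness(G, S | {best}, alpha) - base
        # Steps 6-7: require a non-negative gain above the reliability threshold.
        if gain < 0 or gain <= tau:
            break
        S.add(best)
        # Steps 8-9: drop members whose removal now improves the fitness.
        for u in list(S - {seed}):
            if fitness(G, S - {u}, alpha) > fitness(G, S, alpha):
                S.discard(u)
    return S

# Toy usage on a benchmark graph:
G = nx.karate_club_graph()
print(sorted(grow_community(G, seed=0)))
```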
Credential Acclaim based Information Spreading (CAIS) Mechanism
After discovering several communities in an SIoT environment and regularizing them, our next aim is to increase the information-spreading quality of the communities formed. This can be achieved only by electing a leader capable of spreading the maximum information among the nodes of its community. In this work, we used a Credential Acclaim mechanism based on the Sailfish Optimizer (SFO): the populations are initialized (equation (1)), and the fitness of the sailfishes and sardines is then estimated using equations (3) and (4).
The positions of the best sailfish and of the injured sardine with the best fitness value are saved in each iteration and treated as the elite; the positions of the sailfishes and sardines are then updated towards the best solution as given in equations (5) and (6).
After the updates of equations (5) and (6), the fitness of the sardines is estimated again; if a sardine reaches a better fitness solution, that injured sardine replaces the elite sailfish, as given in equation (7): $X_{SF} = X_{S}$ if $f(S) > f(SF)$. Such best-fitted sailfishes are given higher credentials and are considered the leaders of the other nodes in their respective communities. The information content is spread through the elected leader via the SFO algorithm. The entire CAIS procedure is described in the following steps of Algorithm 2.
2. Calculate the fitness of the sailfishes and sardines using equations (3) and (4).
3. ...
7. Update the positions of all the sardine fishes using equation (6), if the attack power > 0.5.
...
9. Update the position of the selected sailfish using equation (5).
10. Calculate the fitness of all the sardines using equation (4).
11. Replace the elite sailfish by the injured sardine using equation (7).
12. Remove the hunted sardine fish from the population.
13. Update the best sailfish and sardine.
14. Give high credentials to the best sailfish and sardine fishes.
15. Choose the highest-credentialed fish/node as the leader of its own community.
16. Spread the content of the information via the elected leader.
17. End
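Because equations (1)-(7) are not reproduced above, the following is only a schematic, simplified sailfish-optimizer loop: random drift toward the elite stands in for the paper's exact update rules, and the final "credential" is simply the fitness rank.

```python
import numpy as np

rng = np.random.default_rng(0)

def sfo_leader(fitness, dim, n_sailfish=8, n_sardine=24, iters=100):
    sf = rng.random((n_sailfish, dim))   # sailfish positions (candidate leaders)
    sd = rng.random((n_sardine, dim))    # sardine positions (prey / explorers)
    for t in range(iters):
        elite = sf[np.argmax([fitness(x) for x in sf])]
        injured = sd[np.argmax([fitness(x) for x in sd])]
        # Simplified stand-ins for the paper's update equations (5) and (6):
        sf += rng.random(sf.shape) * (elite - sf)        # sailfish drift to elite
        ap = 2 * (1 - t / iters)                         # decaying "attack power"
        if ap > 0.5:
            sd += rng.random(sd.shape) * (injured - sd)  # all sardines update
        # Equation (7) analogue: a fitter sardine replaces the worst sailfish.
        worst = np.argmin([fitness(x) for x in sf])
        best_sd = np.argmax([fitness(x) for x in sd])
        if fitness(sd[best_sd]) > fitness(sf[worst]):
            sf[worst] = sd[best_sd]
            sd[best_sd] = rng.random(dim)  # "hunted" sardine removed/respawned
    # Highest-credential node = best sailfish at the end.
    return sf[np.argmax([fitness(x) for x in sf])]

# Toy usage: maximize a simple concave credential function on [0, 1]^2.
print(sfo_leader(lambda x: -((x - 0.7) ** 2).sum(), dim=2))
```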
Experimental Evaluation
To prove the versatility of our proposed model, we validated it using three different real-world datasets: the Center for Advanced Studies in Adaptive Systems (CASAS) dataset [15], the Massachusetts Institute of Technology (MIT) dataset [16], and the Activity Recognition with Ambient Sensing (ARAS) dataset [17], all used for recognising activities with machine learning. We tested our model on 16 subjects. A total of 15,791 actions were collected from 427 sensors pre-installed in 11 flats. The details of the datasets used in our work are shown in Table 1.
Results and Discussions
The metrics most commonly used in the evaluation of community detection algorithms are influence spread, Normalized Mutual Information (NMI), modularity, F-measure, precision, recall and computation time [17]. Influence spread is maximized based on the credentials acquired by the nodes elected as leaders via the SFO algorithm. Fig. 1 depicts the influence-spread plot for different community detection algorithms. It is evident that the information spread is maximal for our proposed attribute-based method as the credentials increase. NMI is estimated via a confusion matrix in which each row corresponds to an originally existing community and each column corresponds to a detected community in the SIoT network.
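NMI itself is straightforward to compute from two label vectors; a minimal example with scikit-learn (node order must match between the ground-truth and detected labelings):

```python
from sklearn.metrics import normalized_mutual_info_score

true_labels = [0, 0, 0, 1, 1, 2, 2, 2]   # ground-truth community per node
detected    = [0, 0, 1, 1, 1, 2, 2, 2]   # communities found by the algorithm

print(f"NMI = {normalized_mutual_info_score(true_labels, detected):.3f}")
```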
Conclusion
We proposed an attribute-based weighted mechanism for the detection of communities in an SIoT environment. We focused our work on dividing the SIoT network into high-quality Sociality based Weighted Communities (SWC). We exploited important social attributes such as user preferences, social similarities, and mutual friends' degrees as the main metrics for achieving the best rate function. The weighted method based on these social attributes determines which nodes belong to each community. We also presented a mechanism for controlling local community augmentation. A Credential Acclaimed Information Spreading (CAIS) mechanism was implemented for selecting the node with the maximum credential to surge the content-spreading behaviour in the detected communities of the SIoT network. Experimental results prove that the proposed social-driven attribute based weighted mechanism for | 2021-09-28T16:59:28.611Z | 2021-07-12T00:00:00.000 | {
"year": 2021,
"sha1": "1ef0cebfc34c8215970e51b4efa49bedce9ebd65",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-652736/v1.pdf?c=1631887787000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d5dc643dabbc57a4b903700caa260007837d0ef0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
226582002 | pes2o/s2orc | v3-fos-license | Robust optimization for composite blade of wind turbine based on kriging model
Structural optimization models often feature many uncertain factors, which can be handled by robust optimization. This work presents a complete robust optimization program for a composite blade based on the kriging approximation model. Two case studies are given, both performed using a genetic algorithm. The first is a typical optimization in which the first natural frequency of the blade is selected as the objective and the optimal sizing distribution of the entire blade shell is sought while ignoring uncertain factors. The second case adds the standard deviation of the first case's objective as another optimization goal. Moreover, the 6σ robustness of the optimization results of the two cases is evaluated. The results show that typical optimization increases the first natural frequency of the blade by 19%, while its robustness level is reduced by 61% compared with the first blade. The robust optimization, in contrast, not only yields an increment of 15.4% in the first natural frequency of the blade but also increases its robustness level by up to 90%. Therefore, the proposed approach can effectively improve the optimization objectives and, in particular, reduce the impact of uncertainties on the objective functions.
Introduction
The large horizontal-axis wind turbine (HAWT) blade, one of the most critical components of the wind power system, is characterized by a slender shape, composite structure, and flexible body. Its long blade span, the limited capacity to control blade tip deflection to ensure a safe distance between tip and tower, and the trend of individual wind turbine capacity increasing year by year all mean that it requires a higher stiffness than small and medium blades. [1][2][3][4] For these HAWTs, studies have found that the blade weight grows with rotor radius as R^2.3 while the rotor power grows as R^2.1. 5 Future larger wind turbines for higher power extraction will require increased blade stiffness to accommodate the significantly increased blade weight as the rotor radius grows. In addition to improvements in the design and manufacturing process, blade structure optimization can be an effective method of increasing blade stiffness. Here, we review the current literature on the structural design and optimization of composite blades. [6][7][8] Several authors have explored specific issues in the structural optimization of composite blades. In work by Anderson et al., 9 a high-fidelity multidisciplinary optimization capability was employed for the structural optimization of wind turbine blades. The optimal fiber-angle distribution throughout the internal structure of the blade was sought to minimize a stress parameter for each of several load cases; the driving stress for fatigue was reduced by 18-60% after optimization. Barr and Jaworski 10 explored the concept of passive aeroelastic tailoring to maximize the power extraction of an NREL 5 MW wind turbine blade and presented a variable-angle-tow composite materials model along the blade span to couple bend-twist deformations under aerodynamic load. The resulting computational formulation predicted an increase of 14% in turbine power extraction when the optimization is performed around the cut-in wind speed, and of 7% when the blade is optimized near the rated wind speed. Albanesi et al. 11 presented a metamodel-based method, combining a genetic algorithm with an artificial neural network, to optimize the composite laminate layout of wind turbine blades. An actual case study showed that laminate weight could be reduced by up to 20% compared with the reference design. Fagan et al. 12 completed the structural optimization of a wind turbine blade using a multi-objective genetic algorithm and a finite element model. A candidate blade design was manufactured and tested for structural characteristics including mass, center of gravity, deflections, strains, and natural frequencies. Almeida et al. 13 presented a methodology to perform cross-section evolutionary optimization of a topologically optimized structure using a genetic algorithm. The structure with both topology and cross-section optimization achieved a specific stiffness 330% higher than the structure with a quasi-isotropic stacking sequence. Buckney et al. 14 utilized topology optimization to find optimal structural configurations for a 3-MW wind turbine blade and saved weight by up to 13.8% compared with a conventional design.
Generally speaking, there are two different approaches to the structural optimization of the blade: the first optimizes the spanwise material distribution, the selection of materials, and the size of parts such as the spar flange and shear webs using knowledge of the typical blade build-up and constraints [9][10][11][12]; the other is topology optimization, which seeks the optimal material distribution. 13,14 Here the authors' focus is the first approach. Most of the published literature demonstrates that simulation predictions for strains, natural frequencies, and mass agree well with test results. In the study by Albanesi et al., 15 the authors propounded a novel method to simultaneously optimize the ply order, ply number, and ply-drop configuration using simulation-based optimization. As an actual application, they redesigned the composite layout of a 40-kW wind turbine blade and demonstrated a weight reduction of up to 15% compared with the existing layout. These researchers have made outstanding contributions to the structural optimization of blades by redesigning their laminate configurations. However, they all neglected a critical issue: in actual engineering applications, the blade structural parameters are not constants but fluctuate around their design values, and this phenomenon reflects the blade's robustness. When the blade performance metrics are too sensitive to the design parameters, or their robustness is relatively weak, a slight fluctuation in design parameters may lead to a significant drop in the blade performance metrics. [16][17][18][19] The selection of an appropriate optimization algorithm is also challenging for composite structural design problems that contain many variables.
This work aims to develop a new method to improve the robustness of the blade performance metrics and structural parameters based on a kriging model. The methodology combines general optimization with robust optimization organically. General optimization is first implemented to improve the blade performance, 20 and the optimal results and their robustness level are then analyzed comprehensively. This evaluation tests whether robust optimization is necessary: if the robustness of the general optimization results does not satisfy the requirements, robust optimization is performed.
The kriging approximation model using experimental design
The kriging approximation model is an unbiased estimation model with the smallest estimated variance. 21 It can describe not only highly nonlinear processes but also smooth target effects, remove numerical noise, and significantly improve optimization efficiency, and it provides accurate interpolation. Its fundamental theory can be briefly described as follows.
The model is superposed from a global model and local deviations, as shown in the following equation:

$$Y(x) = f(x) + Z(x) \quad (1)$$

where Y(x) is the unknown approximation model, f(x) denotes a known polynomial function, and Z(x) represents a stochastic process with a mean of zero, a variance of σ², and a non-zero covariance. f(x) provides a global approximation of the design space, while Z(x) creates a local bias based on the global model. The covariance matrix of Z(x) can be expressed as:

$$\mathrm{Cov}[Z(x^i), Z(x^j)] = \sigma^2 R(x^i, x^j) \quad (2)$$

where R denotes the correlation matrix and R(x^i, x^j) is the correlation function of sample points x^i and x^j. The correlation function can take different forms; the Gaussian correlation function has been selected for this study, defined as:

$$R(x^i, x^j) = \exp\Big(-\sum_{k=1}^{n_{dv}} \theta_k \, |x_k^i - x_k^j|^2\Big) \quad (3)$$

where n_dv is the number of design variables and θ_k denotes an unknown correlation parameter. Once the correlation function is determined, the response estimate for any test point x can be calculated by:

$$\hat{y}(x) = \hat{\beta} + \mathbf{r}^T(x)\,\mathbf{R}^{-1}(\mathbf{y} - \mathbf{f}\hat{\beta}) \quad (4)$$

where y is the column vector of length n_s (the sampling points) and r^T(x) denotes the correlation vector of length n_s between the test point x and the sample points {x^1, x^2, ..., x^{n_s}}:

$$\mathbf{r}^T(x) = \big[R(x, x^1),\, R(x, x^2),\, \ldots,\, R(x, x^{n_s})\big]^T \quad (5)$$

β̂ in equation (4) is estimated by:

$$\hat{\beta} = (\mathbf{f}^T \mathbf{R}^{-1}\mathbf{f})^{-1}\,\mathbf{f}^T \mathbf{R}^{-1}\mathbf{y} \quad (6)$$

The variance estimate of the global model is obtained by:

$$\hat{\sigma}^2 = \frac{(\mathbf{y} - \mathbf{f}\hat{\beta})^T \mathbf{R}^{-1} (\mathbf{y} - \mathbf{f}\hat{\beta})}{n_s} \quad (7)$$

The nonlinear unconstrained optimization problem shown in equation (8) can then be solved by the maximum-likelihood estimation approach to obtain the correlation parameter θ:

$$\max_{\theta > 0} \; \Phi(\theta) = -\frac{n_s \ln \hat{\sigma}^2 + \ln |\mathbf{R}|}{2} \quad (8)$$
When θ_k is calculated, the correlation vector r^T(x) between the unknown point x and the known sample data can be obtained from equation (5), and the response value from equation (4). In constructing the kriging model, the choice of test points has a direct impact on the accuracy of the constructed model. In this investigation, the kriging model is constructed using the optimal Latin square experimental design method, which has the advantage of dispersing the design points through the design space and representing as much information as possible with as few design points as possible.
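A compact numpy sketch of equations (3)-(6) for an ordinary-kriging predictor (constant trend f(x) = 1) is given below, for illustration only: θ is fixed by hand rather than fitted through the maximum-likelihood problem of equation (8), and a small nugget term is added for numerical stability.

```python
import numpy as np

def gaussian_corr(X1, X2, theta):
    # R(x_i, x_j) = exp(-sum_k theta_k |x_ik - x_jk|^2), equation (3)
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.exp(-(d2 * theta).sum(axis=-1))

def kriging_fit(X, y, theta, nugget=1e-10):
    n = len(X)
    R = gaussian_corr(X, X, theta) + nugget * np.eye(n)
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)  # equation (6) with f = 1
    return dict(X=X, theta=theta, Rinv=Rinv, beta=beta, resid=y - beta)

def kriging_predict(model, Xnew):
    r = gaussian_corr(Xnew, model["X"], model["theta"])          # equation (5)
    return model["beta"] + r @ model["Rinv"] @ model["resid"]    # equation (4)

# Toy usage: fit a 1-D surrogate to a handful of samples.
X = np.linspace(0, 1, 8)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
m = kriging_fit(X, y, theta=np.array([10.0]))
print(kriging_predict(m, np.array([[0.33]])))
```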
Formulation of the optimization problem
General optimization: Case 1. Optimization design. The blade modal characteristics, including the modal frequencies and mode shapes, are important factors affecting blade vibration and noise. Since the modal characteristics are global characteristics of the blade, the first modal frequency is selected as the optimization target. Equation (9) gives the general optimization objective function:

$$\max \; f(x) = f_1(x) \quad (9)$$

where f_1 denotes the first-order modal frequency of the blade.
In the present work, owing to the complex lay-up schedule of the analyzed rotor blade (up to 100 layers in some crowded places), the optimal design would be seriously challenging to complete if each layer thickness of each section were used as a design variable. Therefore, the basic units of the blade laminate have been selected as the design variables, namely the thickness of the uniaxial fiberglass (x1), the thickness of the biaxial fiberglass (x2), the thickness of the triaxial fiberglass (x3), the single-layer thickness of the balsa wood (x4), and the single-layer thickness of the reinforcing material (x5). In addition, the width of the spar cap (x6) is selected as a design variable; the design variables and their existing values are presented in Table 1. 22 Different types of composite materials are adopted in the blade construction to achieve better mechanical properties. Because of the complex loading on the blade, it must satisfy the strength requirements during the optimization process. It is inappropriate to use the maximum stress as the strength constraint because the mechanical properties vary in different material directions. Therefore, the Tsai-Wu failure criterion, as shown in equation (10), is applied to check the composite structure.
In addition, another constraint is that the blade weight must not increase relative to the first blade during the optimization process, as shown in the following equation:

$$m(x) \le m_0 \quad (11)$$

where m_0 is the actual blade weight. The adaptive single-objective method, integrated into the design-exploration environment, combines the optimal space-filling (OSF) sampling method, a kriging response surface, and the mixed-integer sequential quadratic programming (MISQP) algorithm with a computational domain-reduction technique. 23 The OSF sampling method, an optimized version of Latin hypercube sampling, has better space-filling ability and is more suitable for generating particularly complex response surfaces. The MISQP algorithm can process both continuous and discrete input parameters when optimizing the individual output parameters.
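The space-filling design step is easy to reproduce; a sketch using scipy's quasi-Monte Carlo module (scipy >= 1.7) with the six design variables of Table 1, whose bounds below are placeholders rather than the paper's actual ranges:

```python
import numpy as np
from scipy.stats import qmc

# Placeholder bounds for [x1..x5 layer thicknesses (mm), x6 spar-cap width (mm)].
lower = np.array([0.4, 0.4, 0.4, 5.0, 0.4, 200.0])
upper = np.array([1.2, 1.2, 1.2, 25.0, 1.2, 500.0])

sampler = qmc.LatinHypercube(d=6, seed=1)
unit = sampler.random(n=60)              # 60 samples, as in the paper
design = qmc.scale(unit, lower, upper)   # map to physical variable ranges
print(design[:3])
```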
Optimization results. According to the optimization design scheme in the "Optimization design" section, the kriging approximation model is established using 60 samples generated by the OSF sampling method. During the optimization process, the maximum number of iterations and the convergence tolerance are set to 120 and 1 × 10^-6, and finally three candidate solutions satisfying the strength criterion and the mass constraint are generated. The general optimization iteration history for the 1.5 MW wind turbine blade is shown in Figure 1. It reveals that the frequency response value gradually converges by iteration step 40, and the optimization effect is remarkable. Taking the manufacturing and processing of composite laminates into account, all optimized parameters are rounded and listed in Table 2. The blade first-order natural frequencies are 0.26 Hz and 0.31 Hz for the first and optimized blades, respectively. The results in Table 3 show that the blade first-order natural frequency is improved (by about 19%) after general optimization. Except for slight increases in the thickness of the balsa wood and triaxial glass, the thicknesses of the other materials and the width of the spar cap are decreased, which ensures that the weight of the optimized blade is not greater than that of the first blade. Although the thickness of the balsa wood increases slightly, its density is much smaller than that of the other materials. Therefore, the optimized blade satisfies the constraint of equation (11).
The 6σ robustness analysis results of the first-order natural frequency for the initial and optimized blades are summarized in Figure 2. As can be seen clearly from Figure 2, the 6σ levels of the first-order natural frequency for the initial and optimized blades are 3.1615σ and 1.23333σ, respectively. The robustness of the first-order natural frequency is relatively low for both blades, and the first-order natural frequency robustness of the optimized blade is significantly lower than that of the first blade. Evidently, optimizing the blade structural parameters without considering parameter fluctuations does not ensure a remarkable improvement in blade robustness. Hence, the blade modal frequencies and their robustness must both be regarded as optimization targets for a comprehensive improvement of blade performance.
Robust optimization: Case 2
Taking the 1.5 MW wind turbine blade as an example, the proposed kriging model and robust optimization method are validated. 24 The entire optimization process based on the kriging model is shown in Figure 3. The kriging approximation model is constructed by the optimized space-filling design method, which has better space-filling ability and is more suitable for generating complex response surfaces than the Latin hypercube method.
The premise of robust optimization is the calculation of the mean and variance of the response values. Commonly used methods are the matrix method, the analytical method, and the Monte Carlo simulation method. Monte Carlo simulation is simple and fast, requires few mathematical derivations, and relies mainly on computing power, all of which make it an effective method for evaluating probability characteristics; it was therefore selected for this work. First, the experiment is designed according to the optimized space-filling experimental design method, and the relevant sample-point data are calculated and extracted by the finite element method. Then, a kriging response surface model is built from these sample points. Finally, the robust optimization of the blade structure is performed based on the kriging approximation model.
Robust optimal design based on Monte Carlo simulation technology. The blade is composed of multilayer material, and the thickness of each layer fluctuates around its design value. Therefore, to fully consider the impact of design-variable fluctuations, the 6σ robust optimization method described in the "Robust optimization: Case 2" section is applied to the blade structure optimization. For a typical robust optimization, the mathematical model is determined as:

$$\min \; F(\mu, \sigma) \quad \text{s.t.} \;\; G_j(\mu, \sigma) \le 0, \qquad x_L \le x \le x_U \quad (12)$$

where x is the design variable, j indexes the constraint functions, and x_U and x_L are the upper and lower limits of the design variables. In this mathematical model, the objective function F can be written as:

$$F = \sum_{i=1}^{m} \Big( \pm \frac{w_{1i}}{R_{1i}}\,\mu_i + \frac{w_{2i}}{R_{2i}}\,\sigma_i \Big) \quad (13)$$

where w_1i and w_2i denote weights, R_1i and R_2i are scale factors, and m is the number of responses. The sign convention is as follows: a positive sign means minimizing the response average, and a negative sign the inverse. Monte Carlo simulation technology is recognized as the most accurate method of evaluating probability characteristics, and the mean μ and variance σ² in the robust optimization design of equation (12) are solved by this technique. In the first step, values of the random variables and design variables are obtained by sampling; these values are then substituted into the kriging model to obtain a Monte Carlo distribution cloud map of the response. Finally, the mean μ and variance σ² can be calculated from the Monte Carlo distribution map.
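Given any fitted surrogate (such as the kriging sketch above), the Monte Carlo step amounts to pushing perturbed design points through it; the normal perturbation scale and the spec limits below are assumed values for illustration, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_moments(predict, x_nominal, sigma_x, n=10_000):
    """Monte Carlo estimate of the response mean and std under input scatter."""
    X = rng.normal(loc=x_nominal, scale=sigma_x, size=(n, len(x_nominal)))
    y = predict(X)
    return y.mean(), y.std(ddof=1)

# Toy response standing in for the kriging surrogate of equation (4):
mu, sd = mc_moments(lambda X: np.sin(2 * np.pi * X[:, 0]),
                    x_nominal=np.array([0.25]), sigma_x=np.array([0.02]))

# Sigma level relative to hypothetical spec limits (LSL, USL):
lsl, usl = 0.6, 1.2
print(f"mean={mu:.3f}, std={sd:.3f}, sigma level={min(usl - mu, mu - lsl) / sd:.2f}")
```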
Robust optimization design of the wind turbine blade. As in the general optimization scheme, the single-layer thickness of each layer material and the first-order natural frequency of the blade are chosen as the experimental parameters and the evaluation index of the robust optimization, respectively. First, 200 design points were generated using the optimized space-filling experimental method. Then, the kriging model between the design variables and the optimization goal was established from these design points. The determination coefficient R² and the relative average error (RAE) are used to evaluate the fitting accuracy of the kriging response surface model; as given in Table 3, the larger R² or the smaller the RAE, the higher the accuracy of the response surface, and vice versa. Following equations (12) and (13), the corresponding mathematical model of the 6σ robust optimization of the blade is defined as:

$$\min \; F(\mu_{f_1}, \sigma_{f_1}) = -\frac{w_1}{R_1}\,\mu_{f_1} + \frac{w_2}{R_2}\,\sigma_{f_1}, \qquad x_{il} \le x_i \le x_{iu} \quad (14)$$

where x_il and x_iu represent the lower and upper limits of the design variables x_i.
Robust optimization results. The 6σ robust optimization significantly reduces the response fluctuation of the objective function and improves the 6σ level of the response distribution. A comparison of the general and 6σ robust optimization results is presented in Table 4. As given in Table 4, general optimization increases the frequency value by 19% but yields a relatively low 6σ level (only 1.23σ). The 6σ robust optimization reduces the frequency by 0.01 Hz relative to general optimization; however, the 6σ levels of the frequency response and of the design variables are improved. The results in this section indicate that the robustness of almost all parameters reaches or exceeds the 6σ level after robust optimization; the next section therefore discusses the sensitivity of the design variables and the frequency response.
Modal sensitivity analysis of design variables
The sensitivity of a structural parameter indicates whether the parameter has a significant impact on structural performance: the higher the sensitivity of the structural parameters, the worse the robustness of the structural performance, and vice versa. The modal sensitivity of the blade is the rate of change of its natural frequency with respect to its structural parameters, obtained by taking the first derivative of the blade free-vibration differential equation with respect to the design variables. The modal sensitivity results of the design variables for the initial blade and the blades of case 1 and case 2 are shown in Figure 4. The labels 1 to 6 along the x-axis represent the single-layer thickness of the unidirectional glass, biaxial glass, triaxial glass, balsa, and reinforcing material, and the width of the spar cap, respectively; the y-axis gives the numerical modal sensitivity of the design variables selected in this study. The data in this figure further illustrate the conclusions of the 6σ robustness analysis of the optimized solutions in case 1 and case 2. The single most striking observation to emerge from the comparison is the modal sensitivity of the third design variable (single-layer thickness of the triaxial glass): the modal sensitivity values for case 1 and case 2 are reduced by up to 39% and 83%, respectively, compared with the first blade. Therefore, the optimal results from robust optimization (case 2) show lower sensitivity of the design variables and stronger robustness of the objective function than those from typical optimization (case 1).
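When an analytical derivative of the free-vibration equation is unavailable, modal sensitivities can be approximated by central finite differences on the surrogate; a sketch with an illustrative toy response (step sizes are illustrative):

```python
import numpy as np

def sensitivities(predict, x0, rel_step=1e-3):
    """Central-difference df/dx_i of a scalar response around design point x0."""
    x0 = np.asarray(x0, float)
    grad = np.zeros_like(x0)
    for i in range(len(x0)):
        h = rel_step * max(abs(x0[i]), 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (predict(xp) - predict(xm)) / (2 * h)
    return grad

# Example with a toy frequency model f(x) = sqrt(sum x^2):
print(sensitivities(lambda x: np.sqrt((x ** 2).sum()), [0.6, 0.8]))
```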
Sensitivity analysis of the natural frequency response of the blade. As described in the "Robust optimization: Case 2" section, one purpose of robust optimization is to minimize the fluctuation of the objective function as the design variables vary with a specified distribution law; equivalently, the sensitivity of the objective function to the design variables is minimized. Figure 5 illustrates how the natural frequency response of the blade fluctuates as each design variable changes within its specified range. All panels other than Figure 5(f) show that the frequency responses of the three blades (initial design, case 1, and case 2) follow a consistent trend as the design variables change; the frequency responses of the blades in case 1 and case 2 are higher than that of the initial blade, and the frequency response of the case 2 blade varies more smoothly with the design variables than the other two. From these panels, it is apparent that although the typical structural optimization approach (case 1) can significantly improve the objective function value, it cannot ensure reduced sensitivity of the objective function to the design variables. The proposed robust optimization (case 2) can simultaneously optimize the objective function response and its sensitivity to the design variables. The most striking result in Figure 5(f) is that once the width of the spar cap exceeds approximately 300 mm, the frequency response of the case 1 blade fluctuates dramatically. This further confirms, from another perspective, that typical structural optimization may exacerbate the response fluctuation of the objective function, thereby reducing the robustness of the structural performance.
Conclusions
How to minimize structural performance fluctuations caused by uncertain factors while maximizing structural performance is an extremely challenging task. To address this gap, this study proposed a robust optimization strategy based on the kriging approximation model and compared it with the typical structural optimization approach. Two concrete case studies of the structural optimization of a composite blade were performed using a genetic algorithm. The most significant findings are the following: (1) The results of case 1 showed that typical structural optimization increases the first natural frequency of the blade by 19% when the fluctuation of the design variables is ignored. However, the robustness evaluation indicated that the 6σ level of the first natural frequency is reduced by up to 61%. Although the typical optimization approach can obtain the optimal solution of the design problem, it cannot ensure that the anti-fluctuation performance (robustness) of the objective function is also improved. (2) The proposed robust optimization can both obtain the optimal solution of the optimization design and solve the fluctuation problem of the objective function. (3) Further sensitivity analyses of the natural frequency response and the design variables of the blade also demonstrated that the proposed robust optimization is superior to the typical structural optimization approach.
Author contributions
Ma Huidong contributed to data curation, formal analysis, visualization, writing of original draft. Zheng Yuqiao contributed to funding acquisition, methodology, project administration, and review and editing of the original manuscript. Wei Jianfeng contributed to investigation, resources, and validation. Zhu Kai contributed to software and supervision.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Natural | 2020-08-06T09:09:06.519Z | 2020-03-19T00:00:00.000 | {
"year": 2020,
"sha1": "5ddc87842703f6c8eace7a6b242d23be23fe234a",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2633366X20914631",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "7dd4c06e2628b9c6a9f9b181fde93526b499cbcf",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
233659594 | pes2o/s2orc | v3-fos-license | The Aesthetics of Rupture: Deconstructing Rasa
Semantic fixity is a transcendental signified. One of the touted aims of literary theory was to topple it. The Indian semantic concept of Vyañjana attempted to do this millennia before. But canonical theories of Rasa established Rasananda as an attainment of absolute coherence and harmony. What this paper calls trans-epistemic praxis is a viable methodology to reclaim the long-lost rupturality (if structurality is resisted, rupturality must be embraced, at least as a neologism) inherent to aesthetics. This is done in a Post-theory context. “Bhanga” (rupturing) leads to “bhangi”, aesthetic charm. It is an aporetic textual disruption that leads to the most fertile indeterminacy of meaning. Modern literary theory set out on a debunking and destabilizing mission of liberal humanist tenets, but got hardened into “doxa”, crystallized structures and hierarchies. This necessitated a theorizing of theory itself. The chronotope of Post-theory gets foregrounded. A crossing of spatio-temporal boundaries gives us the freedom to site Rasa Theory and Indian Poetics as Post-theoretical. Inter-spaces and inter-times are engendered. Deconstruction and Rasa become heterodoxic knowledges to each other, subverting each other honouring the alterity of the other. This exercise liberates Theory from becoming sclerotic. Orthodoxics and monologisms get flouted. Theory is a story. Story is built on the figurality of language. The tropology of language is built on a never-ending desire for signification. This desire never meets with satiation. The concept of “Rati” can be seen as this interminable desire of language. Post-theory is a call to wake up from amnesia, the terrible oblivion regarding the fact that Deconstruction and Rasa are ceaseless streams of reading processes and not rigid and straitjacketed end products. This ruptural aesthetics leads to the rapture of poeisis, the indeterminate significatory process.
Abhivyakti -Rati -Differance
Ruptures are everywhere. But ruptures constitute a continuum. This is utterly paradoxical, but aesthetics and literary theory are built on this paradox. Rupture, fragmentation, fissures: these are fundamental to everything, including language and discourse. Aesthetics is the study of beauty, and as such it is supposed to be engaged in a search for harmony, unity, plenitude and so on. But as this study attempts to demonstrate, this notion of coherence and totality is a product of what can be called metaphysics: a quest after some transcendental signified that guarantees semantic fixities. Conventional theorists of Indian Aesthetics regard the concept of Rasananda as one such transcendental signified, the attainment of which justifies the stable significatory potential of an artefact. But this is a negation of the inherent textuality of the work of art, the infinite freeplay of signifiers always already at work in the text, which is the cardinal principle behind the concept of Vyañjana. Vyañjana is the infinite semantic possibility of language which functions as the underlying principle of Rasa. This paper attempts to debunk the metaphysics mentioned above through a Post-theoretical application of Deconstruction.
Collating Deconstruction with Rasa in a Post-theoretical scenario might appear a bit bizarre at the outset. The methodology employed here may be designated trans-epistemic. Rasa is truly aporetic, a chain of never-ending signifiers. The term "asamlakshyakramavyañgya" (rasa realization with imperceptible stages), borrowed from the Dhvani theory, bears witness to the undecidability and indeterminacy of linguistic signification in the realization of rasa. This indeterminacy engenders a rapture of poeisis.
Poeisis can be explained as a process, a making. Rapture is a result of the realization that the artefact is not a product but a process that is to continue interminably. This can be summed up by saying that the process is from rupture to rapture. Rasananda is another word for this rapture. Ananda is taken here as a concept which flouts all conventional fixities to enter the deconstructive realm where absolute indeterminacy is the only possible state.
For this journey through indeterminacy we have to situate ourselves within the "chronotopes" of Post-theory. It was Mikhail Bakhtin who introduced the concept of the chronotope into theoretical thinking. Bakhtin writes: We will give the name chronotope (literally, time-space) to the intrinsic connectedness of temporal and spatial relationships that are artistically expressed in literature. We understand the chronotope as a formally constitutive category of literature (84) This concept of the chronotope helps us to forge the time-space of Post-theory. There definitely is a "here" and a "there", i.e. India and the West, in spatial configuration for an Indian reader of Western knowledges. There is also, in temporal terms, a division between a "now" and a "then", i.e. the present and the past. Post-theory, with its absolute respect for alterity (something that Theory attempted to attain but at which it miserably failed), does not negate the juxtaposition of the spatial "there" with a temporal "then". It is this theoretical gesture that makes the Indian theory of the past, the Rasasiddhanta, Post-theory. An enviable capacity to transcend the borders of time and space is inherent to chronotopicity. Post-theoretical speculations are built on this spatio-temporal porosity.
In Post-Theory: New Directions in Criticism, edited by Martin McQuillan et al.,
Jeremy Lane presents the chronotope of Post-theory in the following way: In this case, Post-theory would imply an ability to transcend or move beyond the limitations and weaknesses of 'Theory'. The desire to challenge and transcend that set of theoretical concerns which dominate the intellectual field at any one time is of course entirely laudable. Yet the mode of this transcendence seems to be somewhat paradoxical; what we might term the chronotope of Post-theory would seem typically to involve a moving beyond which is also somehow a return, as Young so tellingly put it, 'to the old certainties of the everyday world outside' (90) Lane goes on to prove that theory's return is of course there, but it is never to the "old certainties". Post-theory returns to old theories to unravel their uncertainties. What Lane targets is sedimentation. When this is subverted we get an "inter-space" and an "inter-time" which can successfully accommodate all the alterities of space and time. The elitism within theory which presents the person conversant in the jargon of theory as "knowledgeable" gets erased.
The orthodoxy and monologism which repress the plurality and fluidity of discursive formations get into harsh conflict with heterodoxic knowledges which challenge the unitary semantics of language. Etymologically "hetero" is cognate with "itara" in Sanskrit. "Itara" is the other. In a trans-epistemic praxis, Rasa and Deconstruction posit themselves as mutual others, honouring the alterity of the other. Or, in other words, Rasa and Deconstruction assume the status of heterodoxic knowledges within Post-theory.
The legacy of post-structuralism has been elaborately dealt with by Colin Davis in
After Post-structuralism. He says that it is a very ambiguous legacy. A legacy will always be ambiguous, still to be decided. Had it been unequivocal, no polemics would have emerged regarding what it was and to whom it belonged. Davis draws on Derrida's ideas regarding legacy to justify his point: Derrida's account of the constitutive ambiguities of legacies concludes with the injunction to read and the warning that it may not be possible. Reading, Derrida suggests, will not settle the legacy once and for all; rather it will keep the dispute alive, providing new resonances with which to preserve and to reinterpret the monuments of our intellectual history (7).
Davis considers stories and story-telling fundamental to all discourses. All theoretical formulations are stories of some sort. Story is used here to denote the rhetoricity or figurality of language. In this sense, reading or interpreting a story must be an attempt at unravelling the inherent tropology of language. Language involves only the act of story-telling. The theory of Rasa is a story. So is the theory of Deconstruction. This renders possible inter-semiotic readings of stories. All desire, "rati", meets with ultimate fulfilment in the most privileged versions of Rasa theory. Abhinavagupta calls this consummation of desire "abhivyakti". But on a closer reading of the concept, this story of satiation gets debunked. As an explanation of abhivyakti we can say "Abhivyañjita Sthayin is Rasa".
Which means linguistic expressions or enunciations in texts merely give a suggestion or a trigger for the inherent emotion to be roused. Only a ceaseless process of arousal starts here, and no attaining of "śama" or tranquillity is tenable in this context. So "rati" becomes the logic of desire. Since "vyañjana" is a linguistic concept, "abhivyakti", which is derived from "vyañjana", is also a linguistic one. Since desire or "rati" is eternal and not a state of stasis with fulfilled desires, it can be perceived as the Derridean "différance" leading not to a referent but to newer and newer references.
Having seen theories as stories, there is no real hitch in establishing their inter-readability. Yet the fact that they are ceaseless processes rather than rigid and straitjacketed end products was completely pushed to oblivion. Post-theory, then, is a reminder, a call to wake up from this snare of amnesia. Both Rasa and Deconstruction signify not through their lucid streaks but through their irregularities and blind spots. The attempt in this paper was to show how the borderlines crumble and theory becomes just another literary genre, another name for literariness. Theory is story, theory is rhetoric, theory is poetry. So Rasa theory is not poetics, i.e. the theory of poetry. It is poeisis, the making of poetry. Since the process of making is the only available and accessible entity, Rasa is poetry itself. Philosophy and literature are one in this formulation because both are inhabited by the common factor of figurality. It is on this ground that any Post-theory trans-epistemic practice can be carried out. Identity and difference become mutually contestatory, contaminating and at the same time constitutive categories. The unbridgeable fissure within knowledge, the gap inherent in and among all philosophical systems, renders it its aesthetics. And the aesthetics of rupture and the rapture of poeisis exist within the deferral strategy of mutual supplementarity. | 2021-05-05T00:07:58.131Z | 2021-03-27T00:00:00.000 | {
"year": 2021,
"sha1": "9f898c986d7b106223456cba399007ca990ec4db",
"oa_license": "CCBY",
"oa_url": "https://ijellh.com/OJS/index.php/OJS/article/download/10953/9064",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "263c32810afaaf05a900bf9d7fe0d912e19cf147",
"s2fieldsofstudy": [
"Philosophy",
"Art"
],
"extfieldsofstudy": [
"Art"
]
} |
231628612 | pes2o/s2orc | v3-fos-license | Comparison of Gaia and Hipparcos parallaxes of close visual binary stars and the impact on determinations of their masses
Precise measurement of the fundamental parameters of stellar systems, including mass and radius, depends critically on how well the stellar distances are known. Astrometry from space provides parallax measurements of unprecedented accuracy, from which distances can be derived, initially from the Hipparcos mission, with further refinement of that analysis provided by van Leeuwen in 2007. The publication of the Gaia DR2 catalogue promises a dramatic improvement in the available data. We have recalculated the dynamical masses of a sample of 1700 close visual binary stars using Gaia DR2 and compared the results with masses derived from both the original and enhanced Hipparcos data. We show that the van Leeuwen analysis yields results close to those of Gaia DR2, but the latter are significantly more accurate. We consider the impact of the Gaia DR2 parallaxes on our understanding of the sample of visual binaries.
INTRODUCTION
The Hipparcos satellite was the first space mission devoted to astrometry. The European Space Agency (ESA) launched the telescope in 1989, and it successfully accomplished its mission during the period from 1991 to 1993, making measurements of nearly 118,000 stars. The results were published in 1997, yielding the Hipparcos and Tycho Catalogues (ESA, 1997). In 2007, F. van Leeuwen, a member of the Hipparcos team, published a new reduction of the Hipparcos catalogue that improved the accuracy of the measured parameters (van Leeuwen, 2007).
Launched in 2013, the ESA Gaia mission obtained precise astrometry and photometry for approximately 1.7 billion stars, a 10,000-fold increase over Hipparcos. The first data set was published in 2016 as Data Release 1 (DR1) (Gaia Collaboration et al., 2016), and the second data set was published in 2018 (Gaia Collaboration, 2018; Lindegren et al., 2018). Gaia DR2 is already revolutionizing many areas of stellar astrophysics, and many researchers now rely on and confirm the advantages of Gaia measurements (e.g. Stassun & Torres (2018)).
Stellar binary systems are a key source of physical parameters of individual stars, especially masses, radii and distances. However, binarity itself can make parallax measurements difficult and affect their accuracy. For example, Shatskii & Tokovinin (1998) pointed this out, noting that Hipparcos parallax measurements of binary and multiple systems are, in some cases, distorted by the orbital motion of the components of such systems. To study parallax measurements of stars in binaries and assess the relative reliability of Hipparcos, van Leeuwen and Gaia DR2, we selected a sample of close visual binary stars (CVBSs) taken from the Sixth Catalog of Orbits of Visual Binary Stars (ORB6) (Hartkopf et al., 2001). This is a useful reference catalogue and data set as it includes around 2,900 solved orbits of approximately 2,700 CVBSs distributed all over the celestial sphere. Among the stars listed in the sixth catalog, we found 1700 CVBSs with parallax measurements given by Hipparcos 1997, the van Leeuwen reduction of Hipparcos (2007), and Gaia DR2 (2018). With parallax errors improved over both the original Hipparcos catalogue and the re-analysis of van Leeuwen (2007), the Gaia DR2 catalogue and the newly computed masses and radii should supersede any earlier results. Nevertheless, a comparison between DR2 and the earlier parallax measurements provides a useful benchmark test for the latest results.
We investigated how measurements of the dynamical masses of the selected CVBSs are affected by the distances inferred from the parallax measurements in each of the catalogues discussed above. We also compared these results with masses estimated from two different indirect methods: Malkov's photometric masses (Malkov et al., 2012a,b) and Al-Wardat's multiparameter approach for analyzing CVBSs (Al-Wardat, 2002a,b, 2007; Al-Wardat et al., 2014b,a, 2016, 2017; Masda et al., 2018a,c). The latter is a computationally complex method employing colours, colour indices and magnitude differences of the system, along with its parallax, to build individual synthetic spectral energy distributions (SEDs) for each component of the system. From this, a complete set of physical and geometrical parameters can be deduced for each star. The method makes use of Kurucz (ATLAS9) line-blanketed plane-parallel model atmospheres for the individual components (Kurucz, 1994).
Comparing dynamical with photometric masses is of particular importance in calibrating empirical astrophysical relations and judging the accuracy of their zero points and constants. It also gives important information about the accuracy of the orbital parameters of binary stars (BS) and the multiplicity ratio among all stars (Duchêne & Kraus, 2013). Once the physical and geometrical parameters of the stars have been estimated, especially log L and log T_eff, the positions of the individual components of the system can be located on the Hertzsprung-Russell (H-R) diagram, and hence their masses can be estimated using evolutionary tracks such as those of Girardi et al. (2000).
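For reference, the dynamical mass at stake follows from Kepler's third law: with the semi-major axis a in arcseconds, the parallax π in arcseconds and the period P in years, the total system mass in solar units is M = a³/(π³ P²). A minimal sketch (the sample values are illustrative, not taken from Table 1):

```python
def total_mass_msun(a_arcsec, parallax_arcsec, period_yr):
    """Total dynamical mass of a visual binary via Kepler's third law."""
    return a_arcsec**3 / (parallax_arcsec**3 * period_yr**2)

# Illustrative values: a = 0.5", P = 100 yr, parallax 25 mas = 0.025".
print(f"{total_mass_msun(0.5, 0.025, 100.0):.2f} Msun")
```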
HIPPARCOS AND GAIA OBSERVATIONS OF VISUAL BINARY PARALLAXES
Parallaxes of both multiple stellar systems and single stars are important in providing the distance estimates that help in determining stellar physical and geometrical parameters, especially the masses. Ideally, parallaxes should be measured geometrically, a model-independent method. However, for many stars they have only been available indirectly from photometric or spectroscopic observations, which are dependent on stellar atmosphere modelling. The advent of space-based astrometry has increased the sample size and accuracy of geometric parallax measurements, which presents an important opportunity to compare these with those determined by other methods. Furthermore, we can examine how better parallax measurements reduce the errors in stellar mass determinations, leading to improved tests of the mass-luminosity and mass-radius relations and of the related stellar formation and evolution theories.
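Since the whole comparison rests on inverting parallaxes into distances, the minimal sketch below spells out the relation d [pc] = 1000 / π [mas] with first-order error propagation. Function names are ours, and the Gaia DR2 uncertainty in the example is an illustrative placeholder; applied to the Hip 68170 parallaxes discussed in Section 4 (14.00 mas vs. 8.06 mas), it reproduces the roughly 50 pc distance range quoted there.

```python
# Converting a trigonometric parallax (mas) to a distance (pc) by
# simple inversion, d = 1000 / pi, with first-order error propagation.
# Inversion is only a reasonable estimator when sigma_pi / pi is small,
# which is why the fractional parallax error matters throughout.

def parallax_to_distance(pi_mas: float, sigma_pi_mas: float) -> tuple[float, float]:
    """Return (distance_pc, sigma_distance_pc); zero/negative parallaxes excluded."""
    if pi_mas <= 0:
        raise ValueError("zero or negative parallax")
    d = 1000.0 / pi_mas
    return d, d * (sigma_pi_mas / pi_mas)

# Hip 68170 (the Gaia DR2 uncertainty below is an illustrative placeholder):
for label, pi, err in [("Hip 1997", 14.00, 0.79), ("Gaia DR2", 8.06, 0.10)]:
    d, sd = parallax_to_distance(pi, err)
    print(f"{label}: d = {d:.1f} +/- {sd:.1f} pc")   # ~71 pc vs ~124 pc
```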
van Leeuwen 2007 reanalysis of the Hipparcos data
In 1999, two years after the release of the Hipparcos and Tycho Catalogues, Narayanan & Gould (1999) studied the correlations in the Hipparcos parallax measurements for the Pleiades and Hyades clusters. They noted that the parallaxes were, on average, larger than the other values reported for the stars of these two clusters. Later, Makarov showed an inconsistency between the mean parallax of the Pleiades cluster from the Hipparcos catalogue and that obtained from stellar evolution theory and photometric measurements (Makarov, 2002). In 2005, measurements made using the Hubble Space Telescope (HST) fine guidance sensor (FGS) confirmed the error in the Hipparcos parallax of the Pleiades (Soderblom et al., 2005).
A new reduction of the raw Hipparcos data, based on dynamical modelling of the satellite's attitude, was developed by van Leeuwen and Fantino (van Leeuwen & Fantino, 2005), with a full reanalysis of the Hipparcos data published in 2007 (van Leeuwen, 2007). The latter paper claimed a parallax accuracy up to a factor of 4 better than the original catalogue for nearly all stars brighter than magnitude 8. Nevertheless, the revised Hipparcos measurement of the distance to the Pleiades, 120.0 ± 1.9 pc (van Leeuwen, 2009; Schönrich et al., 2019), remains anomalous compared to estimates based on isochrone fits of the stellar photometry (Meynet et al., 1993; Stello & Nissen, 2001) and to results obtained using eclipsing binaries (Zwahlen et al., 2004; Southworth et al., 2005), which placed the distance in the range 130-137 pc. These are in good agreement with the 134.6 ± 0.6 pc distance obtained from Gaia DR2 (Gaia Collaboration, 2018).
The visual binary sample and parallax data
The binary systems selected for this study (Table 1) had to fulfil two main requirements: they should have parallax measurements in all three space-based astrometric catalogues and a solved orbit in the ORB6 catalogue. Systems with zero or negative parallaxes were excluded. Table 1 shows the first 25 lines of the sample; the complete table with 1710 stars is available in electronic format. The first four columns give information about the star: Right Ascension α2000, Declination δ2000, and the Hipparcos and HD names. Columns 5, 6, 7 and 8 give the orbital period P, the error of the orbital period σP, the semi-major axis a, and the error of the semi-major axis σa, all as given in the ORB6 catalogue. The last six columns list the trigonometric parallaxes of the stars with their errors as given by Hipparcos 1997 (π1997; ESA, 1997), the van Leeuwen reduction (π2007; van Leeuwen, 2007), and Gaia DR2 (π2018; Gaia Collaboration, 2018).
The distribution of parallax measurements for each catalogue, as a function of the fractional parallax error (σπ/π), is shown in Fig. 1. A further consideration in comparing the parallax measurements is whether or not the parallax values are generally in agreement with each other (within the errors). The distribution of the number of measured binary systems within specific parallax bins is a way of illustrating this. Fig. 5 shows the number of stars within 10 mas bins for the parallax range π ∼ (0 < π ≤ 100 mas). There is consistency between the catalogues with small deviations, likely representing the few stars that are distributed into adjacent bins where there are small changes in value close to the bin boundaries. While there is no attempt to select a statistically complete sample in this work, the figure clearly shows that the distribution of systems reflects the accessible volume, with the majority of the binary systems lying farther than 30 pc (π ≤ 30 mas), with a peak between 10 and 20 mas. There is a decline in the number of binary systems below 10 mas, probably linked to the declining ability to resolve binaries at increasing distance. Fig. 6 shows the distribution for nearby systems within 20 mas bins and parallax measurements in the range π ∼ (100 ≤ π ≤ 300) mas. This shows some differences in parallax measurements, but the counting statistical errors are similar in size to the differences. Figs. 7 and 8 show scatter plots (with the discrepant systems marked) comparing the two catalogues of Hipparcos trigonometric parallax measurements with the Gaia DR2 data. Both plots show the y = x line of equal parallax. There is good agreement between Gaia DR2 and both treatments of the Hipparcos data. However, there are several stars where there are significant differences between the Gaia and Hipparcos measurements. These are the same objects for both Hipparcos treatments.
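The bin counting behind Figs. 5 and 6 can be sketched as follows; the placeholder array stands in for the π1997, π2007 and π2018 columns of Table 1, so the printed counts are illustrative only.

```python
# Counting binary systems per 10 mas parallax bin for one catalogue,
# as in Fig. 5. The placeholder array below stands in for the real
# Table 1 parallax columns.
import numpy as np

def bin_counts(parallaxes_mas, lo=0.0, hi=100.0, width=10.0):
    edges = np.arange(lo, hi + width, width)
    counts, _ = np.histogram(parallaxes_mas, bins=edges)
    return edges, counts

rng = np.random.default_rng(0)
pi_2018 = rng.exponential(scale=15.0, size=1700)   # placeholder parallaxes, mas
edges, counts = bin_counts(pi_2018)
for lo_edge, n in zip(edges[:-1], counts):
    print(f"{lo_edge:3.0f}-{lo_edge + 10:3.0f} mas: {n}")
```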
To search further for any trends in the parallax comparison, we plotted the best-fit straight line to the parallax measurements of Hip 1997 and Hip 2007 against Gaia DR2 for several different parallax intervals. Fig. 9 shows the parallax range 0-15 mas, Fig. 10 the range 15-40 mas, and Fig. 11 the range 40-200 mas. The y = x lines of equal parallax are in blue. The best agreement between Hipparcos and Gaia is found in the 10-40 mas range. For lower parallaxes (greater distances) the Hipparcos measurements are systematically higher than Gaia, while the situation is reversed for the higher parallaxes. The weighted mean offset over the whole sample of parallax measurements (Fig. 12) for (π_Gaia − π_1997) is −59.03 µas. The analysis shows clearly that the van Leeuwen 2007 analysis is in better agreement with Gaia DR2 than the original Hipparcos reduction, justifying the reworking of the astrometric data. However, it is also clear that this work has now been superseded by Gaia DR2 in terms of astrometric accuracy.
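A minimal sketch of the weighted-mean-offset statistic follows, using inverse-variance weights built from the quoted errors of the two catalogues being compared. The exact weighting scheme used in the paper is not spelled out, so this is an assumption, and the commented column names are invented for illustration; for (π_Gaia − π_1997) the paper quotes −59.03 µas.

```python
# Inverse-variance weighted mean of parallax differences between two
# catalogues; a sketch only, with assumed weighting and column names.
import numpy as np

def weighted_mean_offset(pi_a, sig_a, pi_b, sig_b):
    """Weighted mean of (pi_a - pi_b), weights 1 / (sig_a^2 + sig_b^2)."""
    diff = np.asarray(pi_a) - np.asarray(pi_b)
    w = 1.0 / (np.asarray(sig_a) ** 2 + np.asarray(sig_b) ** 2)
    return float(np.sum(w * diff) / np.sum(w))

# offset = weighted_mean_offset(tab["pi_2018"], tab["err_2018"],
#                               tab["pi_1997"], tab["err_1997"])
```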
One important factor to note in the Gaia DR2 data is that all objects in the catalogue are treated effectively as single stars. Single-lined spectroscopic binaries were treated as single objects, while double-lined binaries detected as such were excluded from the catalogue (Gaia Collaboration et al., 2018). For resolved systems, such as those included in this analysis, binary motion could affect the accuracy of the parallax determination for orbital periods below 2 years (Gaia Collaboration et al., 2018). In their analysis of stellar parameters using Gaia DR2 data, Andrae et al. (2018) noted some of these issues and recommended only using estimates of radii and luminosities for stars with fractional parallax uncertainties of 20% or less. We found that some BSs have parallax discrepancies between Gaia and either Hipparcos catalogue. A number have significant differences (larger than 5 mas) in parallax measurements between the catalogues: Hip 190, Hip 1076, Hip 1349, Hip 1392, and Hip 1625. The study of these systems is important and can be used as a tool to judge the reliability of Gaia parallax measurements. As an example, we analyzed the system Hip 68170 using Al-Wardat's method (Section 4).
The reasons for the large differences in parallax measurements between Hipparcos and Gaia could be due either to the effect of interstellar extinction, or to the shift of the photo-centre of these binaries, as noted by several authors (Shatskii & Tokovinin, 1998; López Oriona et al., 2020).
Dynamical Masses
Binary stars are the best source of information regarding stellar masses, since we can calculate the dynamical mass sum of a BS once we have its orbit and parallax (Docobo et al., 2014). DR2 gives more accurate parallax measurements, which in principle means more accurate masses and fundamental parameters. There is a clear, strong relation between stellar multiplicity in general and the estimated or calculated mass sums: Duchêne & Kraus (2013) discussed the mass dependence of the multiplicity properties of main-sequence stars and showed that multiplicity rises significantly towards high-mass stars.
In this study, we want to evaluate the impact of improved parallax data on the mass values and the accuracy of the measurements of CVBSs. Therefore, we recalculated the dynamical masses of all CVBSs with solved orbits in the ORB6 using the parallax measurements of the three astrometric catalogues discussed in Section 2, and compared them with the photometric mass sums from Malkov et al. (2012b). Results are listed in Table 2.
The dynamical mass sums were calculated from Kepler's third law, M_dyn = a^3 / (π^3 P^2), where P is the orbital period (in years), M_dyn is the dynamical mass sum in solar masses M⊙, and a and π are the semi-major axis and the parallax in arcsec, respectively. Of the three catalogues, Hip 1997 yields the highest mean fractional mass error (i.e. the lowest accuracy), Gaia DR2 the lowest, and Hip 2007 lies between them. Fig. 15 and Fig. 16 show scatter plots comparing the dynamical mass sums based on the two catalogues of Hipparcos trigonometric parallax measurements with those based on the Gaia DR2 data. These plots show a large scatter compared to the parallax plots of Fig. 7 and Fig. 8, because any change in the parallax is amplified in the dynamical masses through Kepler's third law.
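The relation above is easy to implement with first-order error propagation; a sketch follows, which also inverts the relation to a dynamical parallax for a given mass sum. Fed with the Mason et al. (2019) orbit of Hip 68170 and the Al-Wardat mass sum of 2.95 M⊙, it reproduces the 13.43 mas dynamical parallax quoted in Section 4. Function names are ours, and the zero uncertainties in the usage lines are placeholders.

```python
# Dynamical mass sum from Kepler's third law, M_dyn = a^3 / (pi^3 P^2),
# with first-order error propagation. Inputs follow Table 1: P in years,
# a and pi in arcsec.
import math

def dynamical_mass_sum(P, sigma_P, a, sigma_a, pi, sigma_pi):
    """Return (M_dyn, sigma_M) in solar masses."""
    m = a**3 / (pi**3 * P**2)
    rel = math.sqrt((3 * sigma_a / a) ** 2
                    + (3 * sigma_pi / pi) ** 2
                    + (2 * sigma_P / P) ** 2)
    return m, m * rel

def dynamical_parallax(P, a, mass_sum):
    """Invert the relation: pi in arcsec for a given mass sum in Msun."""
    return (a**3 / (mass_sum * P**2)) ** (1.0 / 3.0)

# Hip 68170 with the Mason et al. (2019) orbit (P = 18.757 yr, a = 0.136''):
M, _ = dynamical_mass_sum(18.757, 0.0, 0.136, 0.0, 0.01400, 0.0)
print(f"M_dyn(Hip 1997 parallax) = {M:.2f} Msun")        # ~2.61 Msun
pi_dyn = dynamical_parallax(18.757, 0.136, mass_sum=2.95)
print(f"pi_dyn = {pi_dyn * 1000:.2f} mas")               # ~13.43 mas
```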
Malkov et al. (2012b) Photometric Masses
As previously stated, obtaining precise dynamical masses requires precise orbital parameters, which in turn require more relative positional measurements and accurate parallax measurements that are not always available. Therefore, it is important to have alternative, validated methods for estimating stellar masses. Malkov selected a sample of 652 visual binaries with good orbital solutions and used Hip 2007 parallax measurements (van Leeuwen, 2007) to estimate the luminosities and masses of the individual components of these BSs. He used the photometric empirical mass-luminosity (M−L) relation, which can be written as M_{1,2} = f_MLR(m_{1,2} + 5 + 5 log π − A_v), where m_{1,2} are the apparent magnitudes of the individual components (so that the argument of f_MLR is the absolute magnitude), f_MLR is the mass-luminosity relation, A_v is the interstellar extinction value, and π is the trigonometric parallax. The overlap between the BSs studied by Malkov and our sample with parallax measurements in all three catalogues is 340 CVBSs.
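A sketch of that photometric-mass pipeline follows: parallax and extinction convert each apparent magnitude into an absolute magnitude, which a mass-luminosity relation then maps to a mass. The f_mlr_toy used here is a crude L ∝ M^4 main-sequence stand-in, not Malkov's calibrated piecewise relation, so the numbers it returns are illustrative only.

```python
# Photometric mass sum: apparent magnitude -> absolute magnitude via
# parallax (arcsec) and extinction, then a mass-luminosity relation.
# f_mlr_toy is a toy stand-in, NOT Malkov's empirical relation.
import math

def absolute_magnitude(m_app: float, pi_arcsec: float, A_v: float) -> float:
    return m_app + 5.0 + 5.0 * math.log10(pi_arcsec) - A_v

def f_mlr_toy(M_V: float) -> float:
    L = 10.0 ** (0.4 * (4.83 - M_V))   # luminosity in Lsun (4.83 = solar M_V)
    return L ** 0.25                   # mass in Msun if L ~ M^4

def photometric_mass_sum(m1, m2, pi_arcsec, A_v=0.0):
    return sum(f_mlr_toy(absolute_magnitude(m, pi_arcsec, A_v)) for m in (m1, m2))

print(f"{photometric_mass_sum(6.0, 6.4, pi_arcsec=0.014):.2f} Msun")  # illustrative
```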
Fig. 17 plots Malkov's photometric mass sums against the dynamical mass sums based on Hip 1997 parallax measurements. Any significant discrepancies may be related to the mis-identification of stellar multiplicity for massive stars, where systems are identified as binaries but potentially have more than two components. Duchêne & Kraus (2013) pointed out that the probability of multiplicity for stars with M ∼ (1.5 ≤ M ≤ 8) M⊙ is ≥ 50% and for stars with M ∼ (8 ≤ M ≤ 16) M⊙ is ≥ 60%. Analysis of BSs using Al-Wardat's method can test this idea.
Masses Based on Al-Wardat's Method
Al-Wardat's method is a computational spectrophotometric multi-parameter approach that employs atmosphere modelling (ATLAS9) and synthetic photometry to estimate all physical parameters, including individual masses (note that the Gaia team used the ATLAS9 synthetic spectral library for the extinction and reddening estimations). Since it depends on accurately measured magnitudes and colour indices, the method can estimate masses and parallaxes for MS stars even without parallax information. However, the approach is more robust when parallaxes are available, especially for evolved stars. We list here, in Table 3, the masses of 17 BSs analyzed earlier using Al-Wardat's method, and compare them with the other mass estimates in Fig. 22, where the black lines are the perfect fit y = x. The comparison shows a very good consistency between Al-Wardat's masses and Malkov's photometric masses, and a good consistency between Al-Wardat's masses and the dynamical masses for most stars, although there are a few outliers in each figure. The best consistency is with the dynamical mass sums based on Hip 2007, but most of the stars were analyzed using the parallaxes from this catalogue. We recommend that all systems should be reanalysed using Gaia DR2 parallax measurements, which will be done in a future work.
NOTES ON SPECIFIC SYSTEMS
Figs. 22 and 23 show some scattered points, where there are significant differences between the dynamical masses and those estimated by Al-Wardat's method. These are the systems HD 25811, Hip 12552, Hip 64838, and Hip 689. All of these have large errors on their parallax measurements in the catalogues. A possible explanation is that the trigonometric parallax measurements have been distorted by the orbital motion of the components of such systems, which affects the position of the photo-centre of the system (Shatskii & Tokovinin, 1998). Individual systems are discussed below.

HD 25811: This system was analyzed using Al-Wardat's method (Al-Wardat et al., 2014a). In spite of the fact that there was no measured trigonometric parallax at that time, they estimated Ma = 1.55 ± 0.16 M⊙, Mb = 1.50 ± 0.15 M⊙, and a dynamical parallax (π = 5.095 ± 0.095 mas, d = 196.27 pc), starting from an initial value (π = 5.24 ± 0.6 mas, d = 191 pc) taken from Al-Wardat (2003). Comparing the dynamical parallax estimated by Al-Wardat et al. (2014a), 5.095 ± 0.095 mas, with the value measured by Gaia, 4.953 ± 0.080 mas, shows that the estimate from Al-Wardat's method was very close to the new Gaia measurement. This is a good indicator of the accuracy of Al-Wardat's method for analyzing CVBSs.

[Table 3: The individual and total masses from Al-Wardat's method, Malkov's photometric mass sums, and the dynamical mass sums.]

Another note regarding the system HD 25811 is that the published positions of its components on the evolutionary tracks of Girardi et al. (2000) give a mass sum of 3.10 ± 0.37 M⊙, which is very close to that of Al-Wardat et al. (2014a). This is also the closest mass to the 3.32 M⊙ calculated using the Gaia parallax and the orbital elements of Al-Wardat et al. (2014a), another indicator of the reliability of Al-Wardat's method. Adopting the physical and geometrical parameters of the system given by Al-Wardat et al. (2014a) would suggest a slightly higher parallax than that of Gaia, i.e. the system is a little closer.
Hip 12552: This system shows some discrepancies in its trigonometric parallax: Hip 1997 gives 9.69 ± 1.29 mas, Hip 2007 gives 11.07 ± 1.07 mas, and Gaia 2018 gives 13.786 ± 0.583 mas. Al-Wardat et al. (2016) estimated a value of 11.83 ± 1.07 mas, which is closest to the Gaia measurement. The discrepancy in the parallax measurements resulted in differences in the mass sums: the mass sum estimated by Al-Wardat et al. (2016), based on Al-Wardat's method, was 2.23 M⊙, and the photometric mass sum was 2.54 M⊙. While the latter is close to the dynamical mass based on Hip 2007 parallax measurements, we note again that this is mainly because Malkov used the Hip 2007 parallax. These mass sums are higher than the dynamical mass sum based on the Gaia 2018 parallax, 1.48 M⊙. Reanalyzing the system using Al-Wardat's method confirmed a system mass higher than 2 solar masses, implying that the discrepancy could be due to an error in the Gaia parallax measurement for this system, or to inaccurate orbital elements. New relative positional measurements are required to resolve the situation.
Hip 64838: This system shows smaller discrepancies in its trigonometric parallax: Hip 1997 gives 13.45 mas, Hip 2007 gives 12.28 mas, and Gaia 2018 gives 13.18 mas. The photometric mass sum is 3.27 M⊙, identical to the dynamical mass based on Hip 2007 parallax measurements; however, the dynamical mass based on the Gaia 2018 parallax is only 2.64 M⊙. Al-Wardat et al. (2017) analyzed this system using both Al-Wardat's method and Docobo's dynamical method. The authors presented two orbital solutions, a short one with a period of 9.130 ± 0.030 yr and a long one with a period of 18.442 ± 0.200 yr, and two evolutionary states, either main-sequence or subgiant components. The solution preferred by the authors was the short-period subgiant one, which required a dynamical parallax of 13.13 ± 0.43 mas and a mass sum of 2.665 ± 0.125 M⊙. This coincides almost perfectly with the trigonometric parallax later given by Gaia and the masses calculated from it.
Hip 689: This system was analyzed by Al-Wardat et al. (2014c) using Al-Wardat's method, asking the question: is it a subgiant binary? The system shows discrepancies in its trigonometric parallaxes: Hip 1997 gives 12.72 mas, Hip 2007 gives 11.69 mas, and Gaia 2018 gives 10.112 mas. This corresponds to a distance uncertainty range of about 20 pc, which strongly affects the calculated mass sums. Both the mass sum given by Al-Wardat et al. (2014c), 2.6 M⊙, and the photometric mass sum given by Malkov et al. (2012a), 2.67 M⊙, coincide with the dynamical mass calculated from the Hip 2007 parallax. However, recalculating the dynamical mass with the Gaia 2018 parallax gives a value of 4.29 M⊙. If we suppose that the Gaia 2018 parallax is precise, then Hip 689 has more than two components and could be a triple system. This requires further analysis of the system and more high-resolution imaging observations to resolve the question of multiplicity.
Hip 68170: This system is analyzed using Al-Wardat's method for the first time in this paper. It was chosen from the 55 problematic systems discussed earlier as an example of the ability of Al-Wardat's method to estimate the fundamental parameters independently of the parallax and to judge between different measurement methods. The observational data used to analyze the system are collected in Table 4. There are clear discrepancies in the trigonometric parallaxes between Hip 1997 (14 mas), Hip 2007 (14.43 mas), and Gaia 2018 (8.06 mas), giving a potential range in the distance of about 50 pc and, correspondingly, a large range of calculated dynamical mass sums. The system also has a very large tabulated value for its interstellar extinction, 32.7542, as shown in Table 4.
The results of the analysis for the three parallax measurements of Hipparcos and Gaia are listed in Table 5, which gives the physical and geometrical parameters of the individual components (effective temperatures, radii, gravities, luminosities, absolute and bolometric magnitudes, spectral types, and masses). The mass sums estimated using Al-Wardat's method are 2.95 M⊙ with the Hip 1997 parallax, 2.91 M⊙ with the Hip 2007 parallax, and 4.07 M⊙ with the Gaia DR2 parallax.
In order to calculate the dynamical mass sum, we used the latest modified orbit of the system, which gives an orbital period of P = 18.757 years and a semi-major axis of a = 0.136 arcsec (Mason et al., 2019), together with each of the three parallaxes, including that of Gaia DR2. The comparison shows that the discrepancy in the parallax does not significantly affect the masses estimated using Al-Wardat's method, while it has a clear impact on the dynamical mass sum.
So, the final result for the system is as follows: based on Fig. 25, Al-Wardat's method gives a mass sum of 2.95 ± 1.12 M⊙, which leads to a new dynamical parallax of 13.43 ± 1.37 mas; the closest catalogue value is that of Hip 1997, 14.00 ± 0.79 mas. This shows that there is a clear issue with the Gaia parallax measurement for this system, likely due mainly to interstellar extinction. [Table 4 fragments: 6.742 ± 0.004; average visual magnitude difference Δm_v = 0.39667 (ESA, 1997; Tokovinin et al., 2010, 2014, 2016; Tokovinin, 2017); evolutionary tracks from Girardi et al. (2000).]
The analysis shows that the system consists of two subgiant stars, as shown in Fig. 25, with a metallicity of 0.019 and an age of 2.75 ± 0.50 Gyr, as shown in Fig. 26. Fragmentation is the most probable formation scenario for such a system. The spectral types of the components are estimated as G6.5IV and G9.5IV for the primary and secondary components respectively, which are consistent with those proposed by Cutispoto et al. (2002) as G2IV/III and G4IV/III.
DISCUSSION
A comparison between the masses calculated using the trigonometric parallaxes of the three catalogues has shown consistency in the distributions.

[Table 5: The physical parameters of the individual components of the system HIP 68170 as estimated using Al-Wardat's method, based on the parallax measurements of Hip 1997, Hip 2007, and Gaia DR2. The adopted final results for the system are given in cols. 3 and 4, with a new dynamical parallax of 13.43 mas.]

Of course, this is an expected result, since the dynamical masses in the three cases are calculated using the same orbital solutions, and since the sample focused on systems with differences between the three parallaxes within 5% of the value. Systems with larger values and larger differences in parallax measurements between Gaia and Hipparcos (both catalogues) were eliminated from this study and will be studied again after the next release of Gaia data. The distribution of the masses shows a concentration of BSs at low mass sums, M ∼ (0 < M ≤ 4) M⊙, which is reasonable because stars in the Milky Way are mainly main-sequence stars with masses in the range M ∼ (0.08 < M ≤ 8) M⊙.
This tendency for binaries to have low masses could be explained by theories of BS formation, but these still need more observational data to differentiate one theory from another (Tokovinin, 2018a). In general, the currently most accepted theory of BS formation is the fragmentation of proto-stellar cores or circumstellar discs (Bate et al., 1995; Kroupa, 1995; Bate et al., 2002; Tohline, 2002; Kratter & Matzner, 2006; Clarke, 2009; Offner et al., 2010; Kratter & Lodato, 2016; Moe & Di Stefano, 2017; Moe et al., 2019; Tokovinin & Moe, 2020). This may result in systems of multiplicity higher than binary in the case of massive proto-stellar cores and discs, because of the gradually increasing likelihood that more massive stars will fragment (Kratter & Matzner, 2006). We expect a strong contribution from Gaia data to solving the mysteries of the formation of multiple stars, where recent stellar evolution theories concentrate on the study of massive stars (Aghakhanloo et al., 2020).
The accuracy of Al-Wardat's method, as demonstrated in this paper, makes it a useful consistency check for DR2 parallax measurements. For some systems, the parallaxes obtained by Hipparcos give masses more consistent with the photometric and dynamical system parameters than DR2 does. On the other hand, Al-Wardat's method estimated a parallax for the system HD 25811, which had no Hipparcos parallax, of 5.095 ± 0.095 mas in 2014 (Al-Wardat et al., 2014a). This value is very close to that of Gaia DR2, 4.953 ± 0.081 mas. Moreover, the method can deal with multiple stellar systems, which are sometimes ignored by dynamical methods and by the Malkov method, which assumes that the systems are binaries.
CONCLUSION
In 2018, the Gaia collaboration released the DR2 data, which gave precise parallax measurements for approximately 1.7 billion objects, in addition to other photometric and astrometric data. These precise parallax measurements have allowed many astronomical questions to be addressed. One in particular is the case of close visual binary stars, where it has been noted that Hipparcos parallax measurements of binary and multiple systems are, in some cases, distorted by the orbital motion of the components of such systems (Shatskii & Tokovinin, 1998). In this paper, we have looked at the precision of the parallax measurements from the two missions and considered how they affect the measurements of the physical parameters for a sample of 1700 close visual binaries taken from the Sixth Catalog of Orbits of Visual Binary Stars. First, we compared the parallax measurements between the three space-based astrometric catalogues: Hipparcos 1997, van Leeuwen 2007, and Gaia DR2 2018. The results showed that van Leeuwen's reduction of the Hipparcos data was indeed an improvement on Hipparcos 1997, and that those parallaxes are in better agreement with the Gaia DR2 release than those of Hipparcos 1997. Secondly, this work studied the mass sums of the selected binary systems: we calculated the dynamical mass sums using parallaxes from the three catalogues and then compared the results with masses estimated using other methods (340 systems with Malkov photometric masses and 17 systems analyzed with Al-Wardat's method). The results showed that the masses estimated using Al-Wardat's method for analyzing CVBSs, which is a computational spectrophotometric technique, were closer to the dynamical masses than the photometric mass sums given by Malkov, and closest to the dynamical masses calculated using van Leeuwen 2007 parallaxes. The latter point can be explained by noting that the works using Al-Wardat's method adopted mainly van Leeuwen 2007 parallax measurements. Finally, we discussed five specific BSs which showed discrepancies between their mass sums as calculated or estimated by different methods. The comparison showed that Al-Wardat's method is an effective method for analyzing close visual binary and multiple systems.
There are several future lines of study that have emerged from our work:
• Interstellar extinction should be taken into account during further analysis of Gaia parallax measurements. Special attention should be given to specific high-extinction regions in the galaxy.
• The effect of duplicity and multiplicity on the photo-centre, and hence on the resulting Gaia parallax measurements, should also be taken into account.
• There needs to be a detailed programme to reanalyze all previously studied binary and multiple systems using the new Gaia parallax measurements and applying Al-Wardat's method, for complete internal consistency in the measurement of the stellar physical parameters.
• New parallax measurements for the system Hip 12552 are needed, as well as new relative position measurements, to help resolve the parallax difference between Gaia and Hipparcos and to obtain the system's precise fundamental parameters. These may become available from the next Gaia data release.
• The system Hip 689 should be reanalyzed using a different method in order to determine whether it is a binary, a triple, or a quadruple system.
ACKNOWLEDGEMENTS
This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). It also has made use of SAO/NASA, the SIMBAD database, the Fourth Catalog of Interferometric Measurements of Binary Stars, IPAC data systems, and codes of Al-Wardat's method for analyzing close visual binary and multiple stars.
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. | 2021-01-18T14:07:27.248Z | 2021-01-18T00:00:00.000 | {
"year": 2021,
"sha1": "b657c8fd24aed412e5135470016ce9c19018d7a1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2111.05325",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5c5f15a4a157b6a2bf6b8c16175eba9e1c685919",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
105200024 | pes2o/s2orc | v3-fos-license | Antibacterial and antibiofilm activities of quercetin against clinical isolates of Staphylococcus aureus and Staphylococcus saprophyticus with resistance profile
The aim of this study was to determine the antibacterial and antibiofilm properties of quercetin against clinical isolates of Staphylococcus aureus and Staphylococcus saprophyticus with a resistance profile. The antibacterial activity of quercetin was assessed by determining the minimum inhibitory concentration (MIC) through the microdilution method, according to the Clinical and Laboratory Standards Institute (CLSI). The percentage of inhibition of Staphylococcus spp. biofilm after treatment with sub-inhibitory concentrations of quercetin (MIC/2 and MIC/4) was evaluated by the violet crystal assay. Quercetin showed antimicrobial activity against clinical isolates of methicillin-susceptible S. aureus (MSSA) (MIC = 250 µg/ml), methicillin-resistant S. aureus (MRSA) (MIC = 500 µg/ml), vancomycin-intermediate S. aureus (VISA) (MIC = 125 and 150 µg/ml), S. saprophyticus resistant to oxacillin (MIC = 62.5 to 125 µg/ml), and vancomycin-resistant S. aureus (VRSA) and S. saprophyticus resistant to oxacillin and vancomycin (MIC = 500 to 1000 µg/ml). At MIC/2 and MIC/4, quercetin inhibited 46.5 ± 2.7% and 39.4 ± 4.3% of the S. aureus biofilm, respectively, and 51.7 ± 5.5% and 46.9 ± 5.5% of the S. saprophyticus biofilm, respectively. According to these results, quercetin presented antibacterial activity against Staphylococcus spp. strains with a resistance profile and also inhibited bacterial biofilm production even at sub-inhibitory concentrations.
Staphylococcus saprophyticus is another species of the genus Staphylococcus with wide clinical importance. S. saprophyticus composes the normal microbiota of the skin and of the urinary and genital tracts; however, when there is an imbalance in the microbiota, urinary infections can begin [2,3]. Resistance to methicillin in S. saprophyticus strains has also reached a global distribution. Many studies argue that the main mechanism for the acquisition of methicillin resistance in S. saprophyticus is the transfer of resistance genes present in strains of MRSA or methicillin-resistant S. epidermidis [3,9]. The ability of some microorganisms to produce biofilm is another global public health concern. Biofilms are biological communities with a high degree of organization, in which microorganisms form structured, coordinated and functional communities. These communities are capable of producing polymeric matrices in which they are immersed while adhering to a biotic or abiotic surface [10,11]. Biofilm-producing microorganisms are responsible for most human bacterial infections, since their colonization has greater structural stability and longevity. The biofilm provides a protective barrier between the bacteria and the environment, acting as an important virulence and pathogenicity factor and making these bacteria highly resistant to antimicrobials and to host immunity [11,12]. It is therefore important to conduct studies that identify bacterial resistance phenotypes, in order to contribute to epidemiological surveillance, especially for the genus Staphylococcus, one of the leading causes of nosocomial infections. The dissemination, especially in hospital environments, of these antimicrobial-resistant, biofilm-producing pathogens represents a serious threat to public health, implying the therapeutic failure of many infectious diseases [13,14]. Despite the development of new antimicrobials by the pharmaceutical industry in the last three decades, infections caused by bacteria of the genus Staphylococcus are still an alarming health problem. It is therefore necessary to discover new therapeutic options with antimicrobial and antibiofilm activity [13-16]. Flavonoids, secondary metabolites of the polyphenol class, are found in vegetables, fruits, nuts, honey, stems and flowers. Quercetin, 3,5,7,3′,4′-pentahydroxyflavone, is the most abundant flavonoid in the human diet and represents about 95% of the total ingested flavonoids. This molecule is one of the most studied flavonoids due to its biological activities, such as antiviral, antimicrobial, antioxidant, antithrombotic and antitumoral effects. Some studies have described its antimicrobial activity against microorganisms such as Bacillus subtilis, Micrococcus luteus and Aspergillus flavus [17,18]. Despite the existence of studies reporting its antimicrobial activity, there is no research on its antimicrobial and antibiofilm activity against clinical isolates of Staphylococcus spp. resistant to vancomycin. Thus, the aim of this study was to evaluate the antimicrobial and antibiofilm activities of quercetin against Staphylococcus spp. clinical isolates with a resistance profile.
II. MATERIAL AND METHODS

2.1 Identification of clinical isolates

Staphylococcus spp. clinical isolates were provided by a university hospital of Pernambuco between January and March 2017. The isolates were seeded on nutrient agar (AN) for subsequent identification of the bacteria. The samples were then seeded on Baird-Parker agar (BPA) base supplemented with 2% egg-yolk tellurite emulsion (HiMedia) and incubated at 35 ± 2 °C for 48 h. Typical S. aureus colonies (shiny black with an opaque ring, surrounded by a clear halo) were submitted to Gram staining and to catalase, coagulase, mannitol salt agar and DNase assays for S. aureus identification. Colonies that did not present these typical aspects were submitted to Gram staining, the catalase assay and novobiocin sensitivity tests (5 µg) to identify S. saprophyticus (resistant to novobiocin) or S. epidermidis (sensitive to novobiocin) [19,20]. Methicillin-sensitive Staphylococcus aureus (MSSA) ATCC 29213 and MRSA ATCC 33591 were used as control strains.
2.2 Identification of the resistance profile of the clinical isolates
The identification of the resistance profile of the Staphylococcus spp. clinical isolates was conducted according to the Clinical and Laboratory Standards Institute [21]. For the identification of MRSA, vancomycin-intermediate Staphylococcus aureus (VISA), vancomycin-resistant Staphylococcus aureus (VRSA), and S. saprophyticus resistant to cefoxitin, oxacillin and vancomycin, the isolates were submitted to the disk diffusion method with cefoxitin, oxacillin and vancomycin; the microdilution method with oxacillin and vancomycin; and screening for oxacillin and vancomycin resistance [21]. For the disk diffusion method, inocula of the microorganisms were adjusted to 0.5 on the McFarland scale and seeded on Müeller Hinton agar (MHA). Then, cefoxitin, oxacillin and vancomycin disks were deposited on the plates, which were incubated at 35 ± 2 °C for 24 h. After incubation, the inhibition halos were measured and analyzed according to the CLSI breakpoints [21]. The minimum inhibitory concentration (MIC) was determined by the microdilution method according to the CLSI [21]. Initially, 95 µl of Müeller Hinton broth (MHB) was added to all plate wells. Then, oxacillin or vancomycin was added at concentrations ranging from 0.5 to 256 µg/ml or from 0.0625 to 32 µg/ml, respectively. Bacterial suspensions were adjusted to 0.5 on the McFarland scale, diluted and added to the wells to obtain a final concentration of 2-5 × 10^5 CFU/well. The plates were then incubated at 35 ± 2 °C for 24 h. The MIC was determined as the lowest concentration of the standard drug able to inhibit >90% of microbial growth, measured by spectrophotometry at 620 nm. The minimum bactericidal concentration (MBC) was determined from the MIC results: an aliquot from the wells with no microbial growth was inoculated on MHA and the plates were incubated at 35 ± 2 °C for 20-24 h. After this period, the MBC was determined as the lowest concentration with no microbial growth. The samples were analyzed according to the CLSI breakpoints [21]. In the screening test, plates with Müeller Hinton agar containing 4% NaCl and 6 µg/ml of oxacillin, and plates with Brain Heart Infusion agar (BHIA) containing 4% NaCl and 6 µg/ml of vancomycin, were prepared. Microorganism inocula were adjusted to 0.5 on the McFarland scale and seeded on the plates, which were incubated at 35 ± 2 °C for 24 h. The plates were carefully observed against the light, and any growth after 24 h was considered resistant to oxacillin and/or vancomycin [21].
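To make the growth-based MIC read-out concrete, the sketch below generates the two-fold oxacillin dilution series (0.5-256 µg/ml) and picks the lowest concentration whose OD falls at or below 10% of the growth control, mirroring the >90% inhibition criterion described above. The OD values, thresholding details and function names are illustrative assumptions, not the laboratory procedure itself.

```python
# Reading an MIC off a two-fold microdilution row: lowest concentration
# whose OD indicates >= 90% growth inhibition relative to the growth
# control. All readings below are invented for illustration.

def twofold_series(low, high):
    series, c = [], low
    while c <= high:
        series.append(c)
        c *= 2.0
    return series

def mic(concentrations, od_readings, od_growth_control, inhibition=0.90):
    for c, od in sorted(zip(concentrations, od_readings)):
        if od <= (1.0 - inhibition) * od_growth_control:
            return c
    return None   # no tested concentration was inhibitory

concs = twofold_series(0.5, 256.0)   # oxacillin range: 0.5, 1, ..., 256 ug/ml
ods = [0.52, 0.50, 0.49, 0.45, 0.30, 0.04, 0.03, 0.02, 0.02, 0.02]
print(mic(concs, ods, od_growth_control=0.55))   # -> 16.0 (illustrative)
```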
2.3 Phenotypic characterization of biofilm production

2.3.1 Congo Red agar test
The qualitative determination of biofilm production by the clinical isolates was carried out using the Congo Red agar method [22]. The isolates were adjusted to 0.5 on the McFarland scale (10^8 CFU/ml) in BHIA, incubated at 35 ± 2 ºC for 24 h and seeded on plates containing Congo Red agar. They were then incubated aerobically at 35 ± 2 ºC for 48 h. After this period, colonies with blackened coloration and a dry or rough consistency were considered biofilm producers, whereas red colonies with a mucous consistency were considered non-producers. The experiment was performed in triplicate on 3 different days.
2.3.2 Violet crystal staining
The quantitative determination of biofilm production was performed by the violet crystal staining method [23]. Initially, the bacterial isolates were seeded on AN and incubated at 35 ± 2 °C for 18-24 h. Inocula were incubated in Tryptone Soy Broth (TSB) with 1% glucose for 24 h. Every culture was adjusted to 0.5 on the McFarland scale (10^8 CFU/ml) in TSB with 1% glucose, and the adjusted bacterial suspension was added to a flat-bottom 96-well plate. The plates were incubated at 35 ± 2 °C for 48 h. The wells' contents were then aspirated and the wells washed with phosphate buffer (pH 7.4). Next, 200 µl of 99% methanol was added; after 15 minutes of incubation, the content was discarded. Subsequently, a 1% violet crystal stain solution was added to the wells and the plates were kept at room temperature for 30 minutes. The wells' contents were removed and washed with phosphate buffer. A 33% glacial acetic acid solution was added, and the optical density (OD) was measured by spectrophotometry at 570 nm (Multiskan FC microplate photometer, Thermo Scientific, Madrid, Spain). Wells containing only the culture medium were used as controls. The strains were classified into four categories based on the OD values of their biofilms compared with the ODc (the optical density of the control): non-adherent if OD ≤ ODc; weak biofilm producer if ODc < OD ≤ 2 × ODc; moderate biofilm producer if 2 × ODc < OD ≤ 4 × ODc; or strong biofilm producer if OD > 4 × ODc [23]. The experiment was performed in triplicate on 3 different days.
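The four-category rule above translates directly into code; the sketch below is a minimal implementation, with the OD values in the usage lines chosen purely for illustration.

```python
# The four-category biofilm classification above as a function.
# od: mean OD570 of a strain's wells; odc: OD of the medium-only control.

def classify_biofilm(od: float, odc: float) -> str:
    if od <= odc:
        return "non-adherent"
    if od <= 2 * odc:
        return "weak producer"
    if od <= 4 * odc:
        return "moderate producer"
    return "strong producer"

for od in (0.08, 0.15, 0.35, 0.60):                # illustrative readings
    print(od, "->", classify_biofilm(od, odc=0.10))
```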
2.4 Antimicrobial activity of quercetin
The antimicrobial activity of quercetin (Sigma-Aldrich®) was evaluated by the microdilution method described above, according to the CLSI [21]. The concentration range of quercetin used in this study was 2 to 1000 µg/ml. The experiment was performed in triplicate on 3 different days.
2.5 Biofilm formation-inhibition test
The antibiofilm activity of quercetin was assessed according to Das, Yang and Ma [24]. Initially, inocula were adjusted to 0.5 on the McFarland scale (10^8 CFU/ml) in TSB with 1% glucose and diluted to obtain a bacterial cell concentration of 10^5 CFU/ml. These inocula were distributed into flat-bottom 96-well plates and incubated at 37 ± 2 °C for 24 h. The wells' contents were then removed, and quercetin was added at MIC, MIC/2 and MIC/4. The plates were incubated at 35 ± 2 °C for 24 h. Then, the wells' contents were aspirated and the violet crystal staining method was performed as described in section 2.3.2. The experiment was performed in triplicate on 3 different days.
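The paper reports percentages of biofilm inhibition (e.g. 46.5 ± 2.7% at MIC/2) but does not spell out the formula; the sketch below assumes the standard calculation, 100 × (OD_control − OD_treated) / OD_control, averaged over replicate wells. All OD readings are invented for illustration.

```python
# Assumed percentage-of-inhibition calculation relative to untreated
# growth controls; the formula itself is our assumption.
import statistics

def percent_inhibition(od_treated, od_control):
    mean_c = statistics.mean(od_control)
    vals = [100.0 * (mean_c - od) / mean_c for od in od_treated]
    return statistics.mean(vals), statistics.stdev(vals)

mean_inh, sd_inh = percent_inhibition([0.31, 0.33, 0.30], [0.58, 0.60, 0.62])
print(f"{mean_inh:.1f} +/- {sd_inh:.1f} % inhibition")
```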
III. RESULTS AND DISCUSSION

3.1 Identification of species and phenotypic resistance profile
The identification of the prevalent microorganisms in a given region is essential for implementing measures to contain the infections caused by these bacteria. In addition to knowing the species that cause infection, identifying the resistance profile is of great importance for treating the infections caused by these microorganisms [14]. The prevalence of resistant bacteria of the genus Staphylococcus in hospital and community infections, especially in immunosuppressed individuals, makes these bacteria important subjects of research [3,6]. Bacteria of the genus Staphylococcus are recognized for their ability to develop drug resistance, prolonging patients' treatment time and causing high morbidity and mortality rates [3-6]. One of the main resistance profiles in the genus Staphylococcus is resistance to oxacillin [5,6], for which vancomycin is the usual therapeutic alternative; however, vancomycin is currently demonstrating inefficiency in some cases [26,27]. The emergence of clinical isolates with intermediate resistance or full resistance to vancomycin is one of the concerns of worldwide public health organizations, as well as an alert to health professionals [27]. Studies indicate that the appearance of the VISA resistance phenotype is related to hospitalization and persistent infection [26,27], and that it may arise when a single colony of bacterial cells, formed mostly by cells that are not resistant to vancomycin, also contains a subpopulation of resistant cells.
3.2 Phenotypic characterization of biofilm production
In the Congo Red agar test, all 22 Staphylococcus clinical isolates were characterized as biofilm producers (Fig. 1).
In the violet crystal method, all strains were characterized as biofilm producers: 1 was classified as a weak producer (4.5%), 10 as strong biofilm producers (45.5%) and 11 as moderate biofilm producers (50%) (Table 3). This agreement between the quantitative and qualitative methods for evaluating biofilm production by bacteria of the genus Staphylococcus has been described in other studies [32,33].
Fig. 1: Evaluation of biofilm production by the Congo Red agar test.

Quercetin showed antimicrobial activity against the clinical isolates (Table 5). In addition, the molecule was able to inhibit biofilm production by these bacteria, even when tested at sub-inhibitory concentrations (Tables 4 and 5). Quercetin showed MICs of 250 µg/ml, 500 µg/ml and 125 to 250 µg/ml against MSSA, MRSA and VISA, respectively. The best inhibitory activity of quercetin was against the S. saprophyticus strains resistant to oxacillin and cefoxitin (MIC = 62.5 to 125 µg/ml). The lowest inhibitory activity was observed against the VRSA strains and S. saprophyticus resistant to vancomycin, oxacillin and cefoxitin (MIC = 500 to 1000 µg/ml).
According to [36], a molecule shows good antibacterial activity when the MIC is < 100 µg/ml, moderate activity when the MIC is between 101 and 500 µg/ml, weak activity when the MIC is between 501 and 1000 µg/ml, and is considered inactive when the MIC is > 1000 µg/ml. Thus quercetin, in general, presented moderate antibacterial activity against the clinical isolates tested, except against VRSA and S. saprophyticus resistant to vancomycin, oxacillin and cefoxitin, where the molecule showed weak activity. Other studies have evaluated the antimicrobial activity of quercetin against bacterial strains using disk diffusion or agar diffusion methods. Rauha et al. [37] observed that quercetin presented antimicrobial activity at a concentration of 500 µg/ml against ATCC strains of Aspergillus niger, Bacillus subtilis, Candida albicans, Escherichia coli, Micrococcus luteus, Pseudomonas aeruginosa, Saccharomyces cerevisiae, Staphylococcus aureus and Staphylococcus epidermidis, as determined by the disk diffusion method. Gatto et al. [17] found no antibacterial activity for this flavonoid at a concentration of 100 µg/ml against any of the tested bacteria (Staphylococcus aureus, Bacillus subtilis, Listeria ivanovii, Listeria monocytogenes, Listeria seeligeri, Escherichia coli, Shigella flexneri, Shigella sonnei, Salmonella enteritidis and Salmonella typhimurium). Nitiema et al. [38] evaluated the antibacterial activity of quercetin at a concentration of 1000 µg by the agar diffusion method and did not observe any activity of this molecule against bacterial strains that cause gastroenteritis. Studies that use qualitative and less precise methods, such as disk diffusion and agar diffusion, can identify the antibacterial activity of quercetin but cannot determine the minimum inhibitory concentration. Quantitative methods are therefore important for future in vivo drug applications, because they help determine the dose to be used in treating infections in humans and animals [16].
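The cut-offs of [36] can be written as a small classifier; this is a sketch, with the MIC values in the usage lines taken from the results above.

```python
# The activity cut-offs of [36] as a classifier; verbal ranges
# ("between 101 and 500") are honoured literally.

def activity_category(mic_ug_ml: float) -> str:
    if mic_ug_ml < 100:
        return "good"
    if mic_ug_ml <= 500:
        return "moderate"
    if mic_ug_ml <= 1000:
        return "weak"
    return "inactive"

for strain, mic in [("MSSA", 250), ("MRSA", 500), ("VRSA", 1000)]:
    print(strain, "->", activity_category(mic))   # moderate, moderate, weak
```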
IV. CONCLUSION
In this study, we showed that S. aureus is the major cause of bacterial infection within the genus Staphylococcus, followed by a high incidence of S. saprophyticus. In addition, there is concern about the incidence of resistant bacterial strains among patients of this hospital in Pernambuco, evidenced by the occurrence of vancomycin-resistant strains and the high incidence of strongly biofilm-producing strains. We therefore emphasize the need to identify the resistance profile of clinical isolates, as well as their ability to produce biofilm, since these two factors are important for bacterial survival and could explain the failure of many treatments. According to our results on the antimicrobial and antibiofilm activities of quercetin, we can affirm that this molecule exhibited promising antibacterial and antibiofilm activity. Finally, further studies must be conducted in order to analyze the in vivo antibacterial activity of quercetin in infections caused by Staphylococcus species. | 2019-02-18T06:40:04.429Z | 2018-09-01T00:00:00.000 | {
"year": 2018,
"sha1": "1acaff2e9a55365b378f2062ced95080d8e9ed75",
"oa_license": "CCBYSA",
"oa_url": "https://ijeab.com/upload_document/issue_files/50-IJEAB-OCT-2018-32-Antibacterial.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b3efbfd34b387fcc138d171fa069517acbcd97a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
261605167 | pes2o/s2orc | v3-fos-license | Psychometric properties of Indonesian slum dwellers' place attachment
Social scientists have long considered place attachment to be an important factor in promoting environmentally sustainable behaviours among individuals. Raymond and colleagues have developed a five-factor place attachment measure, comprising place dependence, nature dependence, place attachment, family bonding, and friendship bonding, that encompasses most of the differentiations made and that has been amply tested for validity and reliability. However, the bulk of these confirmatory studies have been conducted in Western societies, neglecting people in the Global South and particularly people living in unstable, environmentally fragile regions such as slum areas. This study aims to fill this omission by testing the psychometric qualities of the five-factor place attachment measure in Indonesian slums using a dataset collected by the Resilient Indonesian Slums Envisioned (RISE) project. The dataset consists of a random sample of 700 respondents, living in slum areas of the cities of Bima, Manado, and Pontianak. We split the dataset into two and ran factor analyses in EFA (N = 325) and CFA (N = 375) modes. Most notably, our results suggest a four-factor scale, in which place and nature dependences are merged into a single dimension. This finding seems logical considering that those living in urban slums are likely to have their natural surroundings, such as a river and its banks, as part of their living space. Overall, our study extends the use of place attachment to disaster-prone slum contexts that are often overlooked and, thus, supports the line of research that promotes environmental sustainability among people especially vulnerable to ecological changes.
Introduction
Discussions on person-place bonds started in the 1950s and, since then, the topic has received substantial attention from various disciplines [see 1,2]. Historically, these discussions revolved around the relation between human behaviour and environmental issues. Today, research in this field has extended to focus on cognitive functions, knowledge, and beliefs about aspects of the environment, emphasising individual subjective experiences and emotional attachment to the environment [1,2].
Therefore, the notion of place attachment is often used to refer to the connection of humans with a place that is meaningful to them. Put simply, place attachment is an emotional tie between individuals and their place of living, characterized by their identification with the place and their dependence on it [3,4]. Despite its establishment in scholarly discussion, there seem to be varying conceptualizations, and thus operationalizations, of place attachment. Some scholars refer to a tripartite theoretical framework for getting to grips with place attachment [5]. This framework emphasises that people develop their attachment to a specific place through a personal dimension (such as religious and historical factors), through the place itself (such as social and physical factors), and through a psychological process (such as cognitive and affective evaluations). Along this line, Gustafson [6] offers another tripartite framework, in which he delineates the connections between the self (e.g., activity, self-identification), others (e.g., social activities with others and community), and the environment (e.g., physical distinctiveness).
Based on these conceptualizations, many have pursued various means of explaining place attachment. Some scholars lean towards a unidimensional construct [7], for instance measuring it through neighbourhood attachment [8], while others measure it using different dimensions involving people, processes, and place [5]. Both unidimensional and multidimensional approaches, nevertheless, consider place as a living space where individuals are able to develop personal and social relationships and attain their means of living. Therefore, we can conclude that the notion of place attachment relies on two essential elements: (1) whether the place is psychologically distinctive and (2) whether the place allows individuals to realize their goals. In the words of Vaske and Cobrin [3], place attachment centres around place identity and place dependence. This, too, is reflected in the unidimensional approach, in which scholars claim that the components of identity, dependence, and attachment blend into one attitude [2]. Place identity is seen as an individual's feeling about, or symbolic meaning of, the place that gives their life meaning and purpose, whereas place dependence is a functional connection to a place, reflecting how the physical setting supports specific goals or desired uses [9].
In the light of Vaske and Cobrin's work, Raymond et al. [10] argued that a more comprehensive framework is required to look at individuals' interactions with places in natural and social contexts, and at how these interactions shape individuals' self-identity. They conceptualised place attachment as an individual's emotional or affective bond to a particular place and considered three contexts: first, the personal context, which includes place identity and place dependence; second, the community context, which includes social bonding with family and peers; and finally, the natural environment context, which covers nature bonding, affiliation, and connectedness with nature.
Based on the framework above, Raymond et al. [10] constructed a multidimensional scale to measure place attachment that included four dimensions: place identity, place dependence, social bonding, and nature bonding.The place identity and place dependence dimensions were based on a measure developed by Williams et al. [11].The nature bonding dimension was developed based on descriptions of connectedness with nature from Kals et al. [12], and an additional item was added to this dimension based on the findings from semi-structured interviews with 30 local people in a southern Australian region [10].Finally, the social bonding dimension was developed following measures proposed by Kyle et al. [13] and results from a thematic analysis of interviews conducted in the same region as the nature bonding measure discussed above.Their study reveals five dimensions of place attachment, in which the original social bonding dimension was split into friendship and family bonding [10].
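As a sketch of how such a multidimensional scale is typically scored, the snippet below averages the Likert items belonging to each dimension per respondent. The item-to-dimension mapping and the column names are invented for illustration; the actual item wording is documented in Raymond et al. [10] and in the RISE survey documentation.

```python
# Scoring the five place attachment dimensions as per-respondent item
# means. Column names and the item mapping are hypothetical.
import pandas as pd

DIMENSIONS = {
    "place_identity":     ["pi_1", "pi_2", "pi_3"],
    "place_dependence":   ["pd_1", "pd_2", "pd_3"],
    "nature_bonding":     ["nb_1", "nb_2", "nb_3"],
    "family_bonding":     ["fam_1", "fam_2"],
    "friendship_bonding": ["fri_1", "fri_2"],
}

def score_place_attachment(items: pd.DataFrame) -> pd.DataFrame:
    """Mean of each dimension's Likert items, one row per respondent."""
    return pd.DataFrame({dim: items[cols].mean(axis=1)
                         for dim, cols in DIMENSIONS.items()})
```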
In the extant literature, place attachment has been studied in relation to pro-environmental behaviour [14,15], environmentally responsible behaviour [16], and pro-tourism behavioural intentions, such as the intention to revisit, positive word-of-mouth, and the intention to recommend [17-20]. In addition, previous studies have found that emotional bonds to one's place of residence encourage people to engage in pro-environmental behaviour, such as protecting the place and assuming greater responsibility for its sustainability [21]. Other studies have extended the scope by focusing on how place attachment can even predict individuals' responses in coping with disasters [8] and how individuals behave in recreational parks [22]. Although these findings seem promising in times of rapid environmental change [23,24], Adewale et al. [25] argued that place attachment is too often studied under healthy environmental conditions, such as the availability of green open spaces, easy access to public services, low crime rates, and good job opportunities. These conditions are assumed to meet individuals' functional and emotional needs and hence lead to a high level of place attachment.
In reality, however, many people do not enjoy such conditions. Slum areas, for instance, mostly lack these basic environmental and socioeconomic facilities. Kuffer et al. [26] note that an urban slum area may lack one or more of the following characteristics: 1) durable permanent housing that protects against extreme weather, 2) sufficient living space, 3) access to adequate clean water at an affordable cost, 4) access to adequate sanitation, and 5) security against forced evictions. This is especially relevant in many developing countries with high rates of urbanization, such as India, China, Nigeria and Indonesia, where urbanization has widened the gap between the supply of and demand for decent settlements [25,27]. Consequently, we would expect such living conditions to hinder the development of people-place bonds [28].
Nevertheless, some studies have found that individuals living in slums can still feel attached to their neighbourhoods. For example, Li et al. [29] show that place attachment can enable people to survive and not want to move, even when an area is considered to be of poor physical and environmental quality. Other studies show that people who live in earthquake-prone or potential disaster areas are reluctant to evacuate [30], and even if they did, they would return to the area [8]. According to Li et al. [29], these individuals experience a dilemma between their high dependence on their social environment and their desire for a better life. This finding sheds new light on the importance of studying place attachment and its role in eliciting pro-environmental behaviours among people living in less fortunate areas.
It follows that in this study we argue that such an investigation requires a valid measure that is relevant to, and has been validated among, individuals living in environmentally disaster-prone and socially and economically challenging areas, such as slums. The reason is two-fold. First, slums often do not offer the luxury of good and clean environmental surroundings [31,32]. Second, people living in these areas are often forced to stay due to the unintended consequences of urbanization, as mentioned in the literature above [25,27]. In line with this, Indonesia provides an appropriate setting for the study of slum areas. Considered the most disaster-prone country in the world [33], Indonesia still has a large number of slum dwellers spread across the country [34]. This combination puts slum dwellers at a much greater risk from ecological change than many others. In order to respond, it is vital to test the validity of place attachment among people living in Indonesian slums, to enable politicians and policymakers to react in appropriate ways. Nevertheless, such studies are scarce in the current literature.
In order to fill this gap, we test the psychometric qualities of Raymond et al.'s [10] place attachment measure in Indonesian slums. The scale consists of five dimensions, and we opted for this version as it distinguishes between family and friendship bonding, a distinction that seemed relevant to the areas under study. The scale is tested using a dataset from the "Resilient Indonesian Slums Envisioned (RISE)" project, collected in 2021, which focused on the water management and relational wellbeing of people living in slums in the three Indonesian cities of Bima, Manado, and Pontianak. These cities reflect most of the social and water challenges that Indonesian cities face nowadays and were selected after careful consideration of the latest report on cities with high disaster risks in a study by the World Bank [27,35].
Method
This study forms part of the larger RISE project that was launched in the autumn of 2021. The project aims to map social-ecological interactions with the goal of making Indonesian cities more resilient to water-related disasters, while also considering other relevant factors such as place attachment. The survey documentation is publicly available through an online archiving system. In this paper we briefly outline the sampling method and, for a fuller explanation, readers are directed towards the survey documentation.
Selection of locations
For this study, we purposively selected slum areas as our research locations. There are a substantial number of slum areas across Indonesia, and we based our selection of locations on two criteria. First, since this study concerns ecological change that may impoverish the living conditions of people living in slums, we focused on areas with an increased risk of flooding, as documented by the latest World Bank report [27]. We further narrowed down the selection to cities that have experienced recent major floods while often being overlooked. According to the World Bank report, of the many Indonesian cities, Bima, Manado, and Pontianak face heightened flood risks [36]. This circumstance has in part been exacerbated by increased economic activity that has pushed settlements in these cities to expand to riskier areas along the rivers and coasts, often accompanied by poor flood mitigation standards. Second, to ensure that the current study is relevant to the national and local governments, the focal slum areas in each city should be in line with the categorisation of slums (kawasan kumuh in Indonesian) governed by Law No. 1 of 2011 on Housing and Residential Areas of Indonesia. As a result, the selection of slum areas also corresponds to the mayoral decrees in the corresponding cities (e.g., Mayor's decree of Pontianak Number 1063). More specifically, the selected slum locations are the following: 1) the subdistricts of Paruga and Sarae in Bima, 2) the subdistricts of North Titiwungen and Wawonasa in Manado, and 3) the subdistricts of Tambelan Sampit, Sungai Jawi Luar, and Tengah in Pontianak. Based on rapid appraisals, these areas have at least two of the following slum criteria: 1) lack of access to clean water and a proper sanitation system, 2) frequent fluvial and/or pluvial floods either in the immediate area or in the surrounding main streets, and 3) a poor sewage system.
Random selection of households and respondents
The data collection process in the three cities started in November 2021 and lasted until February 2022. Permits had been granted by The Directorate General of Politics and General Administration of The Ministry of Home Affairs of the Republic of Indonesia (470.02/7428/Polpum) and by the Research Ethical Committee of Universitas Kristen Maranatha (No. 134/KEP/X/2022). The study aimed to gather a random sample from the general population aged 18 and above in each location. To this end, the study employed a random walk method with a sampling criterion of having lived in the area for at least three years. The random walks began from a starting point, a house near the local government office in each sub-district. Subsequently, the survey used two-house intervals to move to the next household until it achieved the targeted sample size. The study aimed to acquire 300 respondents from each city and hence a total of 900 participants from the three cities.
Within each household where there was more than one adult eligible to participate, the surveyor picked the adult whose birthday was closest to the day of the survey. Next, prior to their participation, the surveyor informed the identified respondent about the study and asked for their informed consent to voluntarily participate, while making clear they could refuse. Those who agreed were asked for their written consent. After the study, each participant received Rp. 50.000,00 (approximately €3) as a token of appreciation.
The survey successfully collected information from 700 of the 920 respondents approached. In total, there were 262 males and 438 females spread across the three cities. In more detail, there were 300 respondents from Pontianak (150 males and 150 females), 200 respondents from Bima (46 males and 154 females), and 200 respondents from Manado (66 males and 134 females). Their average age was 43 (SD = 11.79).
Measures
There are several measures of place attachment, and we considered the five-factor scale by Raymond et al. [10] to be the most appropriate given our research context, for two reasons. First, place attachment should be considered as part of an individual's identity. Similar to the notion of individual attachment that develops over time, attachment to a particular place to some extent reflects the centrality of that place to the individual [37]. Childhood experience plays a major role in determining whether individuals develop a strong attachment to a place. Logically, the longer individuals stay in a specific place, the stronger the attachment to that place becomes. A place can be seen as the field where all individual experiences occur, socially, personally, as well as environmentally, which fits well with the tripartite concept in Raymond et al.'s measurement model. Second, the measurement accommodates two distinct categorisations of social connection, namely family and friend bonding. This, we felt, reflected the reality of our research context, where family connections are a strong determinant in making people stay and not want to move away. Further, slums generally expand due to economic migration, and some people move in without having a family connection but instead through the persuasion of their friends [38]. Therefore, by distinguishing the two social categories, the measure enables us to investigate the type of connection that individuals have.
The original scale consists of 19 items spread across 5 dimensions. For the purpose of this research, we added several further items: two items for place identity, one item for place dependence, and one item each for the family and friend bonding dimensions. In total, the measure therefore consists of 24 items spread across the five dimensions (see Appendix 1 for a full list). The place identity dimension asks respondents to rate themselves on statements such as "I am very attached to (name of place)". The nature bonding dimension is represented by statements such as "I feel one with the natural environment when I spend my time in the natural environment at (name of place)". Meanwhile, the place dependence dimension asks respondents to rate themselves on statements such as "I learn a lot about myself when I spend my time in nature at (name of place)". Finally, the family and friend bonding dimensions ask respondents to rate themselves on statements such as "Without my family in (name of place) I might move out" and "The friendships formed through sports activities in (name of place) are very important to me", respectively. All the statements were rated on six-point Likert scales, ranging from 1 (strongly disagree) to 6 (strongly agree).
The original English scale was translated into Indonesian using a back-translation method [39]. First, the initial translation involved five Indonesian scholars from various disciplines, such as psychology, economics, anthropology and development studies. This panel of experts translated the original scale into Indonesian. Second, the research team of the RISE project back-translated the items into the source language. Finally, along with the panel and two additional people from the target population, the research team held several rounds of verbal discussion to validate the translation. The discussions centred on the comparability of language and the similarity of interpretation between the two versions. By combining a panel of experts and members of the target population, the translation validation process aimed to strike a fine balance between a formal translation of the scale and a high level of readability.
Analysis strategy
In order to rigorously test the psychometric qualities of the measure, we employed exploratory factor analysis (EFA) followed by confirmatory factor analysis (CFA). EFA is suitable for researchers who want to find out whether one or more latent variables are related to the manifest variables [40]. This is done by partitioning each variable's shared variance from its unique and error variances. CFA, on the other hand, is suitable for confirming an existing factor-variable (or factor-item) configuration [41]. In doing so, we adopted the following strategy. First, we randomly split the dataset into two in SPSS to enable us to run an exploratory factor analysis (EFA) followed by a confirmatory factor analysis (CFA) of the measures.
To do this, we used the original dataset containing all 700 cases, went to the "Select Cases" option in SPSS, and specified approximately 50% of the cases. We chose the option "Filter out unselected cases" to create a filter variable (SPSS automatically codes 1 = selected cases and 0 = unselected cases). After this, we created two new dataset files that had been randomly split and coded by the filter variable (the full syntax is shown in Appendix 4). As a result, we acquired N = 325 for the EFA and N = 375 for the CFA. As a rule of thumb in determining an adequate sample size, we followed Costello and Osborne's study [40], in which they found that 62.9% of 303 PsycINFO studies (surveyed in 2005) used a subject-to-item ratio of 10:1. As for the CFA, Kyriazos [41] concluded that a sample size of N ≥ 200 is adequate. Table 1 provides more details of the individual characteristics of respondents based on the analysis division.
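For readers working outside SPSS, the same random split can be reproduced in a few lines of Python. This is a minimal sketch, assuming the item responses sit in a CSV file; the file name and random seed are our assumptions, not part of the original study:

```python
# Reproduce the random 50/50 case split (SPSS "Select Cases") with pandas.
import pandas as pd

df = pd.read_csv("place_attachment_items.csv")  # hypothetical file name

# Sample roughly half of the 700 cases for the EFA subsample;
# the remaining cases form the CFA subsample.
efa_df = df.sample(frac=0.5, random_state=42)
cfa_df = df.drop(efa_df.index)

print(len(efa_df), len(cfa_df))  # ~350/350; the paper's split yielded 325/375
```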
Before we lay out the reasons why it was necessary to perform consecutive EFA and CFA techniques in the scale validation, it is worth reiterating the main objective of this study: although place attachment has been shown to relate to pro-environmental behaviours in many different settings (e.g., rural areas, tourism settings), the extant literature lacks studies on areas with a lack of physical infrastructure, such as slum settlements. Due to a physical setting completely different from previous studies, we expected distinct ways of perceiving place of residence among our samples. Thus, both EFA and CFA were required to serve the aim of the study. Specifically, the reasons are as follows. One, EFA was useful to loosely test whether the measure would indeed result in five factors, as theory claims, while CFA confirms the identified factor-item configurations [42]. Two, EFA was necessary to identify potential measurement problems in the dataset, such as low factor loadings, whereas CFA was used to test and, if necessary, modify the identified measurement model. Finally, EFA was useful to lay out a common factor model, while CFA was necessary to test the goodness-of-fit of the identified model [43,44].
For the EFA, we employed the following parameters. First, we used the common factor model estimated by iterative principal axis factoring (PAF) in SPSS 27 [42]. This is mainly because PAF considers measurement error, unlike principal component analysis (PCA), and it does not require multivariate normality. Employing the PAF method therefore allows us to loosely map out the factor-item configuration while maintaining appropriate values of variance accounted for by the identified factors. Second, we opted for oblique rotation, specifically Oblimin rotation, because the factors proposed by Raymond et al. [10] are theoretically thought to be correlated. In analysing the results, it is possible to switch to a different type of rotation, such as Promax rotation, depending on the initial results. Third, the number of factors retained would initially follow the theoretical claim, that is, five factors. However, we deemed a factor stable only if its eigenvalue was at least 1 [40]. Therefore, although the established scale and its theory provide a strong theoretical base, we would still make the changes necessary to follow the statistical findings during the validation process. Finally, to ensure that the variables (items) were strongly correlated with their corresponding factors, we set a minimum value for communality at 0.4 [45]. Item configurations can be modified, such as by removing certain items, if there is a low factor loading or a double loading. Although many set the minimum acceptable factor loading at 0.3, we followed Peterson [46] in setting our threshold at a minimum of 0.4.
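The same EFA configuration can be approximated outside SPSS. Below is a minimal sketch using the Python factor_analyzer package; the item column names are hypothetical, `efa_df` is carried over from the split sketch above, and the package's principal-factor method is used as a stand-in for SPSS's iterative PAF:

```python
# EFA sketch: principal-axis-style factoring with Oblimin rotation, five factors.
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

item_columns = [f"item_{i}" for i in range(1, 25)]  # hypothetical item names
X = efa_df[item_columns]  # efa_df: the EFA subsample from the split sketch

# Sampling adequacy: KMO should be high, Bartlett's test significant
chi2, p = calculate_bartlett_sphericity(X)
_, kmo_total = calculate_kmo(X)

fa = FactorAnalyzer(n_factors=5, method="principal", rotation="oblimin")
fa.fit(X)

eigenvalues, _ = fa.get_eigenvalues()   # retain factors with eigenvalue >= 1
communalities = fa.get_communalities()  # flag items below the 0.4 threshold
loadings = fa.loadings_                 # flag loadings < 0.4 or double loadings
```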
Next, the CFA, by comparing its results with the EFA model, was used to confirm whether the data fitted the theory. We used maximum likelihood (ML) as the estimation method. Using ML, we were able to confirm the relationships between factors, the configuration of the indicators (items) being measured, and how they relate to the factor loadings [42]. To this end, we used the lavaan package in R [47]. By default, the loading of the first item of a factor is set to a value of 1, thus determining the scale of the factor. Additionally, residual variances are included automatically. Lastly, all the factors considered as independent variables are assumed to be correlated. In setting criteria, we first complied with the good-fit guidelines proposed by Lance et al. [48] and Hooper et al. [44]. In other words, the comparative fit index (CFI) should be greater than 0.90 [48], the root mean square error of approximation (RMSEA) should be less than 0.07 [49], and the standardised root mean squared residual (SRMR) should be less than 0.08.
Second, the average variance extracted (AVE) and composite reliability (CR) were used to assess the factors' convergent validity [50]. The AVE value should be at least 0.5 and the CR a minimum of 0.6. However, Fornell and Larcker [51] argue that even if the AVE is below 0.5, provided that the CR is above 0.6, the factor can still be considered valid. Moreover, the AVE value should be larger than the variance shared with other constructs, determined by the factor correlations, to ensure the factors' discriminant validity [52]. Finally, the reliability level of each factor should be at least 0.6 [53]. We applied all these criteria in determining the psychometric properties of a modified place attachment measure, based on Raymond et al. [10], when applied to people living in slums in Indonesia.
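Both AVE and CR follow directly from the standardised loadings, so they are easy to compute once a CFA solution is available. A small sketch with hypothetical loadings, using the standard Fornell-Larcker formulas:

```python
# AVE and CR from the standardised loadings of one factor.
import numpy as np

def ave_and_cr(loadings):
    lam = np.asarray(loadings, dtype=float)
    ave = np.mean(lam ** 2)            # desired: AVE >= 0.5
    theta = 1.0 - lam ** 2             # residual (error) variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())  # desired: CR >= 0.6
    return ave, cr

print(ave_and_cr([0.71, 0.68, 0.80, 0.62]))  # hypothetical loadings
```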
Results
We start by providing the resulting factor structures from the EFA and CFA. Following this, we discuss the item-factor configuration and the internal consistency of all the factors. Finally, we conclude by discussing the convergent and discriminant validity of the factors.
Factor structure
Initially, we ran the EFA using principal axis factoring (PAF) given our expectation of non-orthogonal factors. The anticipation of non-orthogonal factors is mainly due to the tripartite model proposed by the measurement theory [10,13]. Considering a place to be a field of individual experiences allows the notion of a place to also become a source of connection to social life and nature [10]. Thus, all the factors included in the measure should be considered as having shared variance. Here, we started with Oblimin rotation and set the eigenvalue threshold to 1. After the initial results, we switched to a Promax rotation following the removal of several items and the merging of the place and nature dependence factors.
From the results, we focused on the sampling adequacy through the Kaiser-Meyer-Olkin (KMO) statistic and Bartlett's sphericity test, the factor configuration, and the correlations between factors (see Appendix 2 for more details). KMO measures the degree of sampling adequacy of the dataset, and Bartlett's sphericity test checks that the correlation matrix is not an identity matrix [54,55]. The sampling adequacy of the model is considered high when the value of KMO is at least 0.90 and the significance value of Bartlett's sphericity test is less than 0.05 [54]; all our models meet these thresholds. This suggests that our sample is sufficient and that our correlation matrix is not an identity matrix. As such, the data appeared to be acceptable for further analysis.
Our next finding is that, rather than the assumed five-factor model, the analysis produces stable four-factor models, indicated by eigenvalues above 1. The difference between the tested models lies in the number of items included. In the first model (Model 1 in Appendix 2), although the factor structure appears acceptable, the pattern matrix reveals some items (items 8, 16, 17, and 18) that load onto multiple factors. After removing these items one by one, we arrive at the final model shown in Table 2 (Model 3 in Appendix 2), which shows a sufficient level of sampling adequacy, eigenvalues, and Cronbach's alphas.
Next, the CFA was run according to the factor-item configurations identified in the final EFA model (see Appendix 3 for full detail). Table 2 shows the goodness of fit of the final CFA model. The first two CFA models that we ran did not fit the data. We used modification indices (MIs) to evaluate the statistical significance of various unspecified relationships between items [56]. MIs greater than 3.84 are considered statistically significant (p < .05), and freeing such a parameter would significantly improve model fit. However, we applied MIs starting with the largest unspecified relationships. The MIs calculated using the lavaan package show that there are correlated items that could potentially improve the model fit, e.g., item 11 is correlated with item 10 [47]. We repeated this procedure until we reached the final model shown in Table 2 (Model 3 in Appendix 3), which is a better fit to the data (for a full description of the MIs, please refer to the syntax in Appendix 4). Although the chi-square test shows a significant discrepancy between 'the sample and fitted correlation matrix' [52, p.2], many scholars argue that this test is highly sensitive to sample size [e.g., 46]. As such, it is likely that a model based on a larger sample size would be significant. In the final model, the values of CFI (0.91) and SRMR (0.05) are within the acceptable range, but the RMSEA is outside the suggested range (≤0.10) [48,49]. However, Hu and Bentler [52, p.27] suggest that a value of CFI close to 0.95 in combination with a value of SRMR close to 0.09 is sufficient to conclude there is a good model fit. Therefore, we conclude that the final model is an appropriately specified model for our data.
Item-factor configuration (factor loadings)
Having concluded this, we turn our attention to the factor loadings of the scale. Tables 3 and 4 show the factor loadings of all the items in the final model. From the EFA results (see Table 3), we can conclude that the communalities of all the items are in the medium to high range [40]. This means that a good proportion of the variance is explained by each item. The two factors, place dependence and nature dependence, that are viewed as distinct by Raymond et al. [10], are shown to converge into a single factor. Looking at each item, we observe a well-configured item positioned in its corresponding factor. For each factor, the loadings are in the medium to high range. This judgement is supported by the high CR and good AVE of the CFA model (see Table 4). Although some factors are found to have an AVE below the suggested value [50], the CRs of those factors are high. On this basis, the factors are considered to demonstrate an acceptable level of convergent validity [see 53]. In addition, the AVE values of all the factors exceed the correlation coefficients between each factor and all the other factors. As such, each factor shows a good level of discriminant validity [57]. Finally, each factor is also found to be highly reliable based on the high Cronbach's alphas [53].
Overall, this evidence of robust psychometric properties holds for alternative models using the full dataset, N = 700. That is, this claim is supported by both the EFA and CFA statistical techniques.
Discussion and conclusions
The purpose of this study has been to examine the psychometric qualities of the place attachment measure proposed by Raymond et al. [10] when applied in Indonesian slums. Our findings show that, after some modification, the measure is psychometrically valid for a specific sample of people living in slums. We start the discussion by explaining the findings related to the latent constructs of the measure and continue with their indicators (items).
First and foremost, our findings support the consensus notion of place attachment, defined as an emotional tie between an individual and a particular place, along with the people in it [37,58]. This definition fits with sociological and psychological perspectives, in which an attachment to place cannot be understood separately from an individual's attachment to the people living in that place. Here, our psychometric findings demonstrate that, indeed, a place attachment measure should, at the very least, consist of place dependence and place identity [3]. Furthermore, compared to the original five-factor scale by Raymond et al. [10], we find that our data fit better with a four-factor scale. The four factors are place and nature dependence, place identity, family bonding, and friend bonding. This finding, which combines the place dependence and nature dependence of the earlier model, is not surprising given the research context. That is, place and nature dependence should be viewed as a single latent construct because, for those living in slums or other 'less-accessible' areas, their area of residence is likely to include natural surroundings such as mountains and rivers and, when it does, this nature merges into their living space. This finding echoes claims made in previous studies by Fedele et al. [23] and by Williams and Vaske [9], in which place dependence significantly overlaps with nature attachment. In effect, nature, as a place, provides individuals with basic needs such as water, construction material, and even recreational opportunities [23]. This is very apparent for our population groups, who have limited living space and where nature and home coincide in one dimension of place. Therefore, we conclude that we have a valid measure of place attachment that is relevant within the context of people living in less-fortunate areas.

Table 4: The CFA factor loadings of the final model.

Second, given the merging of the place dependence and nature-bonding factors, we should look at the item-factor configuration. Our findings show that the items in the original factors fit nicely in the new combined factor. Items such as "I feel very attached to the natural environment at (name of place)", which originally represented the nature-bonding factor, and "I feel more satisfied living in (name of place) than any other place", as an item of place dependence, fit well in the new factor of place and nature dependence. Theoretically, this fits with the claim by Vaske and Kobrin [3] that place dependence is a functional attachment through which people feel dependent upon their place due to the amenities it provides, such as hiking opportunities and feeling at one with nature. As such, those who are attached to the natural environment of their place of residence are likely to be satisfied with, and dependent on, that place. Methodologically, the items are shown to share substantial variance, indicating a strong association between items.
Third, for the other factors, i.e., place identity, family bonding, and friend bonding, we show that the items provided by Raymond et al. [10] plus our additional items (e.g., "Without my old friends in (name of place), I might move out") provide a valid and reliable measure of their corresponding factors. The factors are shown to be stable after multiple runs of the model and display good reliability. Moreover, there are marked differences in the levels of importance (i.e., total variance explained) of the four factors when compared to the scale of Raymond et al. [10]. They found that the most important factor is place identity, followed by nature bonding. In our four-factor model, the most influential factor is place and nature dependence, with place identity in second place. There are various possible explanations for this difference, but we believe the reason is as follows. Due to the dissimilar type of respondents in our study, one should be open to the possibility that the factor configuration is different. Given that our respondents live in slum areas with little or no option to improve their living space, it is not surprising that they perceive a high dependence on their place and the natural environment that comes with it [59]. In addition, having merged the place dependence and nature bonding factors, this more-encompassing factor is likely to have greater importance and explain more of the variance.
Main implications
Overall, the main implications of this research are as follows. First, the concept of place attachment is also applicable to slum areas. Although slum dwellers do not live in good or healthy environmental conditions, they can still develop an emotional tie to their place of residence. This result confirms that place attachment is largely a subjective evaluation of an individual's place of residence, which involves a sense of emotional attachment not only to their living space, but also to their community and their identity related to the place. More importantly, our findings confirm that place attachment hinges on whether the living space is able to fulfil individuals' needs, e.g., basic needs, and to develop their social identity. This research supports previous findings in China [29] that even though people may live in a poor-quality environment, they do not want to leave their locations.
Second, place attachment in slum areas has a unique factor that combines dependence on place and nature. We argue that this is highly relevant considering the characteristics of most slum areas. In the cities we studied, all the slum areas are surrounded by natural environments, like rivers or lakes. Due to dense populations, these resources are often used for daily living, for example as water sources, while riverbanks are used for settlements. This mostly stems from characteristic slum conditions: a lack of public facilities and an absence of capital and labour. Therefore, nature dependency cannot be viewed solely as natural recreation, as in previous measures. Slum dwellers are, rather, highly dependent on natural resources and ecosystem services to fulfil their daily lives [60]. Consequently, further research can take advantage of this notion by using community knowledge of local land, forests, and water resources to prevent environmental damage.
Third, our findings indicate that place and nature dependency is the strongest factor in relation to place attachment in slum areas. This finding is of importance when scholars aim for a quick snapshot of place attachment by focusing on this sole factor. Of course, in order to deliver a high-quality study, scholars are encouraged to conduct a principal component analysis (PCA) prior to testing.
Limitations
We recognise that our study has some limitations. First, although we have successfully engaged with a sizable sample from three different slum areas in Indonesia, one should not take the generalisability of the measure for granted. For instance, the findings might be different if we repeated our study in Jakarta, where the slums are notorious for being controlled by slum landlords [31]. This could result in a different picture of place attachment because the individuals' presence there may be largely driven by economic motives. Second, our study does not include other relevant scales that could be correlated with the place attachment measure in an attempt to show the predictive validity of the measure. Previous studies have argued that place attachment is strongly linked with environmentally-responsible behaviour and risk perception [3,61,62]. Thus, a follow-up study could usefully take other measures into account in an attempt to explain the predictive function of place attachment.
In conclusion, we show that our modified place attachment measure, inspired by Raymond et al. [10], is a valid measure for investigating place attachment among people living in slums, and probably in other areas where access to basic services is scarce. Our findings show that the measure has good psychometric properties and consists of four factors: place and nature dependence, place identity, family bonding, and friend bonding. The modification of the five-factor model does not amount to major differences from the previously established psychometric properties of existing measures [10,3,13]. The two main factors of place dependence and place identity in the earlier measures are still present in our model. While there is some overlap in terms, such as between social bonding in other measures and our family and friend bonding, our measure still captures the conceptualisation of place attachment. Our findings also reflect the lives of many Indonesians living in urban areas, where the conception of nature is embedded in their living space due to their limited access to open natural fields or natural parks [63]. Despite the limitations outlined above, our findings are useful in providing psychometric evidence of a place attachment measure that is relevant for areas similar to our research context. As such, our study is useful in extending the use of place attachment to disaster-prone slum contexts that have often been overlooked.
Table 1
Respondent characteristics based on the different types of factor analyses.
Note: Bold indicates significance at the p < .05 level.
Table 2
Goodness of fit for EFA and CFA.
Table 3
The EFA factor loadings of the final model.
Tery Setiawan; Missiliana Riasnugrahani; Edwin de Jong: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Figures in bold indicate significance at the p < .05 level. F1: Place & nature dependence; F2: Place identity; F3: Family bonding; F4: Friend bonding. | 2023-09-08T15:18:55.742Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "8050a053524302250d53d3d753e03c1bfcee14fd",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6dd487ca272b943c0b4143fc24fb874001ad2142",
"s2fieldsofstudy": [
"Environmental Science",
"Sociology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258187292 | pes2o/s2orc | v3-fos-license | Multimodal Group Activity Dataset for Classroom Engagement Level Prediction
We collected a new dataset that includes approximately eight hours of audiovisual recordings of groups of students and their self-evaluation scores for classroom engagement. The dataset and data analysis scripts are available on our open-source repository. We developed baseline face-based and group-activity-based image and video recognition models. Our image models yield 45-85% test accuracy with face-area inputs on the person-based classification task. Our video models achieved up to 71% test accuracy on group-level prediction using group activity video inputs. In this technical report, we share the details of our end-to-end human-centered engagement analysis pipeline, from data collection to model development.
Notes for Practitioners
• We shared our dataset, preprocessing steps, and baseline analysis code open-source at https://github.com/asabuncuoglu13/classroom-engagement-dataset. Practitioners can download our dataset using the script provided in this repository. Use the links in Table 2 to quickly explore a sample session.
• The dataset contains 26540 frames with self-engagement evaluation scores. We utilized recent state-of-the-art deep learning models to extract features from this audiovisual data. We also released the resulting OpenFace and OpenPose vectors for other researchers, to decrease the carbon footprint of the replication process (see Table 2).
• Our MobileNet-MaxOut face-based prediction image model achieved up to 85% test accuracy, and the MoViNet-A4-powered video prediction model yielded 68% test accuracy. We released the Jupyter notebooks in our Github repo.
• We created an interactive dashboard to present the model results in a student-centric format. We explained the research and development process in detail in [1].
• Overall, this research can accelerate the adaptation of recent deep learning models in exploring interpretable features to improve the current state of 21st-century learning environments.
What is Classroom Engagement?
Classroom engagement is a multi-component term that denotes active involvement in learning, discussion, and reflection with peers, teachers, and materials in a classroom environment, with three dimensions: affective engagement, behavioral engagement, and cognitive engagement [2]. Evaluating engagement is a challenging task, considering this multi-component definition of the term. Two methods, independent observers and self-evaluation, have been used in previous studies to measure engagement levels. Observational methods can be performed in the classroom in real time or via watching recorded videos. All these methods have their advantages and disadvantages. For example, observational methods require human effort to train observers and check annotation reliability. On the other hand, self-evaluation can produce unreliable results due to social desirability bias and memory recall limitations [3,4]. In our research, we followed five student-centric steps to create our AI-powered engagement prediction system, as summarized in Figure 1: data collection, exploratory analysis, model development and fine-tuning, ethics considerations, and an interactive application for end-result communication. Each part of the development process presents unique challenges from a human-centered perspective. This paper presents our development process and open-source materials to accelerate student-centric development in classroom engagement prediction research. To our knowledge, our end-to-end, video-to-interpretable-feature extraction pipeline presents the most comprehensive open-source learning analytics data pipeline that utilizes the most recent state-of-the-art deep learning techniques. Our research outcomes can accelerate the human-centered design of ML-based learning analytics.
Data Collection
Our dataset consists of group activities of undergraduate and graduate students in which they learn new creative coding tools. In the activity design, we aimed to reflect an active classroom setup that follows 21st-century learning goals. UNESCO defines this 21st-century learning environment as a classroom where students become active and engaged learners, thinkers, and creators [10]. In this environment, students learn new things, experience hands-on interaction, and reflect on their ideas via constructive discussion. We aimed to curate a dataset that is applicable to 21st-century classrooms in both K-12 and higher education settings.
The activity design changed slightly through an iterative process. But, following a common theme, all groups completed creative coding activities following Youtube tutorials. Then, they also completed some tangible, hands-on activities related to the online learning content. Throughout the activities, they also answered questions that automatically popped up on their tablets, allowing us to follow their cognitive engagement. They also explored how creative coding applications can be used in research by experiencing some real-life examples.
Participant Selection
In the participant selection, we set three requirements to collect life-like data from an active learning environment:

1. Knowing each other beforehand: We set this requirement to create a workflow similar to a classroom environment. In most classroom learning environments, groups of students already know each other and have things to share in unrelated dialog during group activities.

2. Having no or little knowledge of the topic: If a participant already knew the topic, the learning process would be boring and disengaging for that participant.

3. Having teaching experience: Part of our data analysis involves self-evaluation, which can be a challenging task the first time. Previous teaching experience increased participants' ability to self-evaluate.
Additionally, collecting data from adult participants allowed us to publish the data publicly. Publishing learning environment data from a K-12 setting requires extra attention, and we could only publish it to researchers with valid ethics committee permission. As all our participants are older than twenty-two years old, we could publish the dataset in an open-source format. The collection complies with GDPR regulations, which means each participant holds every right to their own data, including removing the parts that expose their identity.
Engagement Level Annotation
Each participant evaluated their self-engagement on a scale from -100 to 100 using the CARMA annotation software [11]. We prepared an Engagement Analysis Checklist to help participants evaluate themselves more accurately. Our analysis checklist is a subset of observable inventory items from Wang et al.'s Classroom Engagement Inventory [12]. Table 1 lists the selected checklist items and their engagement categories based on Fredricks et al.'s definition. We included fourteen items from this inventory that can be physically observed by an independent evaluator. We reminded participants to regularly consult our checklist to keep in mind the definition of engagement and how classroom engagement is assessed. We also asked participants to complete a survey about their research interests and their motivation to learn the topic.
Dataset Format and Pre-Processing
Raw videos. We recorded the videos using three Samsung A8 (2022 Model) tablets at 1280x720 resolution and 30 FPS. One tablet recorded the group from an upper corner. The other two tablets directly faced the students, who sat side by side. The organization of the cameras can be seen in Figure 2. The videos were synchronized using a clap sound at the beginning. After the clapping moment, we sliced each video based on its respective active materials/topics/discussions. We sliced the videos into roughly 5-10-minute segments to ease the annotation process. Slicing allowed annotators to manage their time efficiently and allowed us to start processing the annotated videos more quickly.
Face Areas. Then, we cropped and resized the facial areas to obtain single-face videos from these two cameras. We used MediaPipe's face detection module to detect the center of the face area. Then, we used FFmpeg to crop a 320px by 320px window, a commonly used resolution for facial action unit recognition models.

Table 1: Engagement Checklist items and their corresponding categories. We handed out this checklist to our participants during the self-evaluation process to help them internalize the engagement analysis process.

In the audio extraction, we applied a noise filter to reduce the background noise during the wav-conversion process. In the transcription process, we first ran OpenAI's Whisper model [13] to speed up the transcription. Then, we corrected the transcript text. Figure 3 summarizes these processing steps.
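A minimal sketch of this face-cropping step is shown below, using MediaPipe's (legacy) face detection solution and an FFmpeg crop filter. The file names and the single-reference-frame simplification are our assumptions, not the exact pipeline code:

```python
# Locate the face centre in a reference frame, then crop the video to 320x320.
import subprocess
import cv2
import mediapipe as mp

frame = cv2.imread("reference_frame.jpg")  # hypothetical reference frame
h, w = frame.shape[:2]

with mp.solutions.face_detection.FaceDetection(min_detection_confidence=0.5) as fd:
    results = fd.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

# Assumes at least one face was detected in the reference frame
box = results.detections[0].location_data.relative_bounding_box
cx = int((box.xmin + box.width / 2) * w)   # face centre, in pixels
cy = int((box.ymin + box.height / 2) * h)

# 320x320 window centred on the face, clamped to the frame borders
x = max(0, min(cx - 160, w - 320))
y = max(0, min(cy - 160, h - 320))
subprocess.run(["ffmpeg", "-i", "student.mp4",
                "-vf", f"crop=320:320:{x}:{y}", "face_320.mp4"], check=True)
```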
Engagement Level Labels. Each participant evaluated their self-engagement on a scale from -100 to 100. In addition to individual engagement levels, we determined the overall group engagement levels using different pooling techniques. When a frame has relatively similar scores, such as [0, 10, 5, 10], we applied average pooling, which results in the mean score; for this example, 6.25. But if the scores are relatively different, such as [−60, 0, 60, 100], we treated it as a chaotic moment, in other words, a disengaging moment. So, we applied minimum pooling for such frames, which results in -60 for this example. While determining the similarity between scores, we tested the quadratic weighted kappa (κ).
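The pooling rule can be expressed in a few lines. Below is a sketch; the exact spread threshold that separates "similar" from "chaotic" frames is an assumption on our part, since the paper only reports testing a quadratic weighted kappa for similarity:

```python
# Group-level pooling: average when scores agree, minimum when they diverge.
import numpy as np

def pool_group_score(scores, spread_threshold=50):  # threshold is an assumption
    scores = np.asarray(scores, dtype=float)
    if scores.max() - scores.min() <= spread_threshold:
        return scores.mean()   # similar scores -> average pooling
    return scores.min()        # divergent ("chaotic") scores -> minimum pooling

print(pool_group_score([0, 10, 5, 10]))     # -> 6.25 (average pooling)
print(pool_group_score([-60, 0, 60, 100]))  # -> -60.0 (minimum pooling)
```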
Exploratory Analysis of First Batch
Our analysis also progressed in an iterative fashion. After the first two data collection studies, we conducted an early analysis to update the data collection process and resolve issues as soon as possible. The main issue with the first batch was the imbalance of the engagement levels: 70% of the frames were labelled as highly engaged in the self-evaluation, which resulted in a highly imbalanced dataset. Following the exploratory analysis of the first two sessions, we updated our learning setup and activity flow to reduce this imbalance. The new activity flow yielded more balanced data with a normal-distribution-like engagement-level category distribution. In this section, we present our analysis of the first session, in which the three participants are anonymized with the initials B, C, and Y.
Methodology
Figure 3: The illustration of our data pre-processing and exploratory analysis pipeline with feature extraction steps.

We used OpenFace [14] to extract facial action units and OpenPose [15] to obtain the joint positions of students. OpenFace's Face Feature Extractor yields 1562 features per vector. We analyzed five continuous feature sets: AU Intensities, 3D Eye Landmarks, 3D Face Landmarks, Gaze Directions, and Head Pose, and one discrete AU Classes feature set. We extracted a feature vector for each frame (FPS = 1). The resulting features are interpretable in terms of learning analytics, as the Feature Extractor yields features like eye gaze, head position, action units, etc. Before feeding the feature set to machine learning models, we cleaned the data by thresholding the confidence score of the face features. We removed all feature vectors with a confidence score below 0.5. We also removed a vector if its row contained only zeros. Out of 6969 feature vectors, we trained our models with 5351. Using these features, we conducted a three-step analysis strategy in this exploratory analysis:
1. For each feature set, we applied PCA (Principal Component Analysis) to find possible linearly separable features that could improve the overall engagement level classification. PCA aims to reduce the dimensionality of a set of data consisting of correlated variables while keeping the variation in the dataset [16]. The variation in the principal components decreases while moving from the first component to the last one, and we can measure the variation explained by PCA's dimensionality reduction via the cumulative explained variation. As OpenFace's Action Units result in discrete observations, we utilized MCA (Multiple Correspondence Analysis) instead of PCA for that feature set.

2. We trained classifiers on the individual and combined feature sets (see the section "Classification using Individual and Combined Features" below).

3. Finally, we conducted explainability experiments to find the impact of individual features and to test the combination of these important features on the same classifiers. We utilized InterpretML [17], an open-source Python library, to explain the behavior of existing systems using LIME. A drawback of the current implementation of InterpretML is its limited availability for multi-class classification, so we produced interpretable rules on binary predictions, running these algorithms on a binarized version of the data.

Table 2: Media and feature set for each session of the dataset. This table contains links for an example session, which we used in the exploratory analysis (Session 2). All dataset parts and features can be downloaded via our Download Script.

Feature | Explanation
Video Recordings | Three hours of video recordings of the first two learning sessions (Link)
Image Frames | Image frames of the video recordings (FPS = 1) (Link)
Self-Evaluation Scores | Scores between -100 and 100; each participant scored their classroom engagement following the Classroom Engagement Inventory (Link)
Dialog Transcripts | Automatically transcribed using OpenAI's Whisper, then manually checked by humans (Link)
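As a sketch of the cleaning and PCA steps in item 1 above, the snippet below assumes OpenFace CSV output, which includes a per-frame confidence column and AU intensity columns such as AU01_r; the file name is a placeholder:

```python
# Clean OpenFace feature vectors, then inspect PCA explained variance.
import pandas as pd
from sklearn.decomposition import PCA

feats = pd.read_csv("openface_features.csv")  # hypothetical file name

# Drop low-confidence detections (confidence < 0.5)
feats = feats[feats["confidence"] >= 0.5]

# Keep AU intensity columns and drop all-zero rows
au_cols = [c for c in feats.columns if c.startswith("AU") and c.endswith("_r")]
X = feats[au_cols]
X = X[(X != 0).any(axis=1)]

# Cumulative explained variance across principal components
pca = PCA().fit(X)
print(pca.explained_variance_ratio_.cumsum())
```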
Our open-source repository contains code for conducting this exploratory analysis step by step. Researchers can easily run the analysis on their own custom dataset or customize our code to conduct different experiments on our dataset. In addition to PCA, t-distributed stochastic neighbor embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP) are non-linear dimensionality reduction techniques that focus on preserving the local structure of the data [18,19]. They are popular methods for visualizing high-dimensional spaces, but their non-linear nature can produce misleading output. We believe an effective way to explore this high-dimensional space is to allow readers to explore the data with their own custom parameters, with which they can find a balance between local and global structure. So, we published the feature sets in an Embedding Projector-friendly format (you can try it using the sample features from Session 2: Embeddings - Google Drive). Tensorflow's Embedding Projector [20] is a web application for interactively visualizing high-dimensional data using PCA, t-SNE, or UMAP.
Classification using Individual and Combined Features
In the classifier performance evaluation, we calculated the weighted F1-score on our 80-20 split dataset. The F1 score is the harmonic mean of precision and recall. The weighted F1-score calculates the score for each label and takes their support-weighted average, which provides a more informed decision when the data is imbalanced.
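For reference, this is the metric scikit-learn computes with average="weighted"; the label arrays here are placeholders:

```python
# Weighted F1: per-label F1 scores averaged by label support.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 2, 2, 2]   # placeholder labels
y_pred = [0, 1, 1, 2, 2, 1]
print(f1_score(y_true, y_pred, average="weighted"))
```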
Classification using OpenFace Features
InterpretML Results
We ran EBM and LIME on SVM-RBF for the five feature sets for each participant, similar to the classifier experiments. LIME and EBM can only run on binarized datasets, so we created an additional engaged/disengaged label for a binary classification task. In these experiments, we observed that action unit scores and head position information have the highest impact on the model at the individual level. A sample analysis of explainability scores looks like the rules in Table 5.

Researchers can access our full interpretability reports and generate their own reports via our Github Repo.
Final Statistics
After completing the exploratory analysis and updating the activity flow, we continued our user studies with five additional sessions. The released dataset contains 26540 frames (FPS = 1, 7.5 hours) of audiovisual recordings with self-evaluation scores for each frame. For each frame, individual faces are centered and cropped to 320x320 images. For each frame, with two-second paddings, we also sliced ten-second-long video clips to feed video action recognition networks. We automatically extracted dialog transcripts using Whisper [22] and manually checked these transcripts, thanks to volunteers. Table 2 shows the explanations and open-source links for the available features of this dataset. We released all parts of the dataset for which the participants allowed the use of their visuals and audio. Four out of thirty-four students did not allow sharing the video recordings, so we cannot publicly share three of the seven videos. However, researchers holding Ethics Committee Approval can request the private links. In the end, the frame counts for each engagement level in a five-level classification task include:

• Highly engaged: 5914 (22% of all data)

At the end of the data collection process, we started developing baseline architectures that use the image frames and video clips.
Baseline Architectures for Engagement Prediction
Following the data collection and pre-processing steps, we developed baseline deep learning architectures to classify engagement levels. Previous research demonstrated that using mid-level features from pre-trained networks is an effective method in video understanding [23]. Following this insight, we tested different mid-level representations and feature sets in our baseline architectures. In these experiments, we tested sequential learning architectures (GRUs and LSTMs) with different feature representations (VGGFace, MobileNet, MoViNet). In this paper, we present classification architectures with two base data-feeding models. The first set of architectures takes 320x320px image inputs that contain cropped and centered face areas, with individual engagement levels as labels. The second set of architectures takes 1280x720px, 10-second video clip inputs with aggregate group engagement levels as labels.

For all experiments, we split the data into 80% training (of which 20% is validation) and 20% test data.
Models for Centered Face Input
These models make predictions using 320x320px images, which contain cropped face areas in the center of the image. We present two baseline architectures that test different pre-trained weight initializations and activation functions: (1) The first architecture tests two pre-trained weight initializations, VGG19-Face and MobileNet. (2) The second architecture uses the MobileNet-based model with MaxOut [24] and ReLU activation functions.

Similarly, Morris sensitivity values [21] only yielded meaningful values for the gaze direction and head pose features. In the head pose, we also observed the impact of y-axis values. The sensitivity values for pose features did not yield meaningful values (could not converge), which indicates that the individual impact of these features is very low.

Table 5: A sample analysis of LIME feature values. A different model can yield different important components, so for different models at different inference times these components might change.
Experiment 1: VGG19-Face vs. MobileNet for pre-trained weight initialization
On top of the pre-trained networks, we trained a densely connected NN with ReLU activation. The network uses sparse categorical cross-entropy with the Adam optimizer (lr = 0.0001). We trained the network for 50 epochs. Then, we also tested fine-tuning the pre-trained weights to further improve the model performance. In the fine-tuning step, we used the RMSProp optimizer to restrict the oscillations that might occur in gradient descent. We trained the network with the fine-tuning step for another 50 epochs. Using MobileNet for pre-trained weight initialization demonstrated better performance and accuracy for each participant's engagement classification. Thus, we tested the second model with only MobileNet weights.
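A minimal Keras sketch of this setup is shown below; the head width and the fine-tuning learning rate are assumptions, as the paper does not report them:

```python
# MobileNet backbone + dense ReLU head, two-stage training as described above.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet",
    input_shape=(320, 320, 3), pooling="avg")
base.trainable = False  # frozen during the first 50 epochs

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),   # head width: assumption
    tf.keras.layers.Dense(5, activation="softmax"),  # five engagement levels
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=50)

# Fine-tuning stage: unfreeze the backbone and switch to RMSProp
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=50)
```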
Experiment 2: MaxOut Dense Layer vs. ReLU Dense Layer for Prediction Head
As we elaborated in Section 4, the self-evaluation labels can be unreliable and noisy. So, we also included a MaxOut activation, which showed promising results for noisy labels in previous face recognition tasks [24]. For the MobileNet model, we tested ReLU activation and MaxOut activation in the prediction heads, with an implementation similar to the LightCNN network. The MaxOut implementation, as expected, demonstrated better accuracy.
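For illustration, a Max-Feature-Map-style MaxOut dense layer in the spirit of LightCNN can be sketched as follows; this is a minimal sketch, and the paper's exact head may differ:

```python
# MaxOut dense layer: each output unit is the max of two linear pieces.
import tensorflow as tf

class MaxOutDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units
        self.dense = tf.keras.layers.Dense(units * 2)  # two pieces per unit

    def call(self, x):
        z = self.dense(x)                       # (batch, 2 * units)
        z = tf.reshape(z, (-1, self.units, 2))  # pair the linear pieces
        return tf.reduce_max(z, axis=-1)        # elementwise max -> (batch, units)

# Usage: swap the ReLU dense layer in the prediction head for MaxOutDense(256).
```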
Results
VGG19-Face is specifically trained to recognize faces, so we expected better weight initialization when starting our model training with VGG19-Face. However, MobileNet demonstrated significantly better validation and test accuracies. We can therefore deduce that MobileNet is a better feature extractor than VGG19-Face, even when the image area mainly contains faces. Thus, we continued with MobileNet as the feature extractor while experimenting with activation layers. In this experiment, the MaxOut implementation demonstrated better accuracy, as it reduced overfitting and coped better with unreliable labels. Table 6 reports the validation accuracy before and after fine-tuning and the test accuracy of the final model for the MobileNet feature extractor with MaxOut activation, which performed best among all experiments with centered-face input.
Models with Video Input
These models take 10-second-long video clips at 1280x720 resolution. In the data preparation step, we had different options to experiment with. For example, one option was to slice videos based on consecutive engagement levels. Following this option, we obtained 1521 video slices. The shortest video clip was one second, and the longest was 146 seconds. This method of slicing did not yield a balanced dataset in terms of quantity, label distribution, or duration. So, we prepared 10-second slices with 2-second paddings. Intuitively, if we select a random frame label, the surrounding 10-second interval (5 seconds before and 5 seconds after) has the same or a close label. So, we created 10-second intervals that carry the labels of their center frames. This way, we obtained 25K video clips with 2 seconds of padding.
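A sketch of this slicing scheme with FFmpeg is shown below; the paths, the label source, and the stream-copy cutting are our assumptions:

```python
# Cut a 10-second clip centred on a labelled frame; the clip inherits
# that centre frame's engagement label.
import subprocess

def slice_clip(video_path, center_sec, out_path, length=10):
    start = max(0.0, center_sec - length / 2)  # 5 s before, 5 s after
    subprocess.run(["ffmpeg", "-ss", str(start), "-i", video_path,
                    "-t", str(length), "-c", "copy", out_path], check=True)

# e.g., a clip for the frame at t = 120 s, labelled with that frame's score
slice_clip("session.mp4", 120, "clip_120.mp4")
```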
Experiment 1: RNN with CNN Feature Extractors
Our first model feeds frame features from the CNN-based models (VGG19Face and MobileNet) that we previously utilized in the face-area networks into an LSTM-based network, as seen in Figure 5. The experiment ran for twenty epochs with a batch size of 60 (60 video clips). The network uses sparse categorical cross-entropy with the Adam optimizer (lr = 0.0001).
Experiment 2: MoViNet A0 and A4
Although CNN-RNN models have been dominant in video prediction, a 2D-frame-based classifier lacks a representation of temporal context. Tran et al. demonstrated that 3D convolutions perform better in capturing spatiotemporal information compared to previous models [25]. In the second model, we used a descendant of 3D-convolution-based architectures, MoViNet. We used the MoViNet A0 and A4 base models, which showed promising results on the Kinetics-600 classification task [26]. On top of the MoViNet models, we trained a GRU-based classification head. The experiment ran for twenty epochs with a batch size of 20 (20 video clips). The network uses sparse categorical cross-entropy with the Adam optimizer (lr = 0.0001).
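A sketch of such a GRU classification head is shown below. The MoViNet backbone is abstracted away here: we assume per-clip feature sequences of shape (timesteps, feature_dim) have already been extracted, and the feature dimension is an assumption:

```python
# GRU classification head over precomputed backbone features per clip.
import tensorflow as tf

feature_dim = 600  # assumption: backbone output size per timestep
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, feature_dim)),  # variable-length clips
    tf.keras.layers.GRU(128),
    tf.keras.layers.Dense(5, activation="softmax"),    # five engagement levels
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(clip_feature_ds, epochs=20, batch_size=20)
```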
Results
The CNN-LSTM architecture yielded poor accuracy compared to the MoViNet-based models. Table 7 shows the validation and test accuracy results of these architectures. In the CNN-LSTM architecture, VGG19Face and MobileNet showed similar performance in feature representation. Overall, the 2D-frame-based representation could not yield any meaningful results. However, fine-tuning the MoViNets yielded promising results. The MoViNet-A4-based classifier achieved 0.68 accuracy on the test set.
Discussion
From data collection to dashboard development, we followed a human-centered approach in all steps of our machine learning pipeline for predicting student engagement levels. In this section, we discuss the main findings from our data collection, analysis, and model development steps from a student-centric view.
Common Patterns in Engagement Levels
Engaging patterns: Our activity flow consisted of two parts, where participants first learned to use creative coding applications by following Youtube tutorials, and then completed a tangible activity following an online tutorial. While trying to complete the hands-on task, the participants used tangible materials and a tablet computer. During this interaction, the disengagement/engagement ratio was low compared to the video-watching activity. This increase in engagement also resonates with previous findings in tangible interaction research. We advocate adding more learning activity types, such as peer-to-peer learning and flipped classrooms, to increase the variety in the dataset and help educators interpret the patterns in different learning tasks.
Disengaging patterns: We can list two main patterns for disengagement: (1) When the instructor in the video started to talk about future work, participants tended to decrease their engagement scores. This situation might also arise because such conversations generally happen at the end of the lecture. (2) When participants faced a technical failure, they did not report feeling disengaged but set their engagement to a moderate level. Listing these repeating patterns in engagement annotation can help educators plan their lectures accordingly.
Outliers: Through the evaluations, we also encountered unpredictable annotations. For example, when students were laughing and enjoying themselves, one might expect this to be labeled as an engaging moment. However, our engagement checklist also considers cognitive and behavioral engagement; thus, these enjoyable moments should relate to the course topic. We know that such moments help students maintain their engagement level in the long term, so labeling and recognizing them is challenging both for ML models and for teachers. Most of the false positive examples in our dataset consist of these moments.
Table 7: Validation (Val-Acc) and test (Test-Acc) accuracies of the different video action classification model architectures.
Analyzing Unreliability in Self-Evaluation Process
Through the learning activities in the data collection process, we asked participants questions about the video content to check whether their engagement levels matched their answers. The goal was to see whether their self-evaluated engagement scores were consistent with their answer correctness, as a reliability check. After collecting the engagement level annotations, we examined the engagement levels one minute before and after the timestamp of each question.
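A minimal sketch of this windowed check, assuming a per-participant annotation table with `t_sec` (seconds) and `engagement` columns — these names are hypothetical, not the dataset's actual schema:

```python
import pandas as pd

def engagement_around(annotations: pd.DataFrame, question_t: float,
                      window: float = 60.0) -> float:
    """Mean self-reported engagement within +/- `window` seconds of a question."""
    mask = annotations["t_sec"].between(question_t - window, question_t + window)
    return annotations.loc[mask, "engagement"].mean()

def flag_mismatch(annotations: pd.DataFrame, question_t: float,
                  answered_correctly: bool) -> bool:
    """Flag participants whose window-averaged engagement disagrees in sign
    with answer correctness (engaged but wrong, or disengaged but right)."""
    score = engagement_around(annotations, question_t)
    return (score > 0) != answered_correctly
```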
The first question was, "What is p5.js?". Two participants answered incorrectly, and these students also labeled their engagement scores as negative. We also observed that our group activity model predicted their engagement level correctly. The second question was trickier. We asked, "Which platform can you use to code p5.js?" Only five participants answered correctly; they selected the platform they had used in the activities, but the correct answer required choosing multiple platforms. In the evaluation part, most of them nevertheless labeled themselves as 'engaged,' which indicates that most of our participants tend to label their engagement based on their overall behavior rather than on careful attention to the lecture.
Although this test was only an exploratory and limited reliability experiment, future researchers can follow a similar process to understand the reliability of their own data collection.
Interpreting Observable Features of First Batch
Our early experiments involving OpenFace and OpenPose features revealed that 3D face landmarks, eye landmarks, and head-pose features significantly improved accuracy compared to other facial, pose, and voice features. Explainability experiments also demonstrated that the combination of these features had the highest F1 score compared to other individual and combined features. Although gaze direction can reveal insights about shared attention between students, the least successful models were those trained with gaze directions. Previous research demonstrated that mutually shared attention can reveal insights about classroom engagement; still, we observed that gaze direction can also falsely guide the models when students are merely in a mind-wandering state.
Through the analysis, we also checked whether our data features resonate with the literature. We expected to achieve the best results by either using AU intensities or combining AU intensities with other features, as previous research could effectively predict affective states using action units. For example, the activation of AU4, AU7, and AU12 can indicate "boredom," and AU1, AU4, AU7, and AU12 can reveal "confusion" [5]. However, when we checked whether students annotated "disengaged" when the "boredom" action units were active, we could not find any significant relationship. To check OpenFace's reliability in predicting action units, we also conducted a random frame check, in which we picked random frames and manually verified whether OpenFace produced sound AU intensity values. We concluded that OpenFace produces reasonable AU intensities, but that predicting an engagement score from a single observation is a challenging task. For example, AU intensities produced the same scores when participants were eating snacks while watching the video, or looking serious while solving a task on a computer, as when they were disengaged.
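The AU sanity check can be sketched as follows. The AU column names follow OpenFace's usual `AUxx_r` intensity convention, but the activation threshold and the `label` column are assumptions for illustration:

```python
import pandas as pd

def boredom_vs_labels(df: pd.DataFrame, threshold: float = 1.0) -> float:
    """Fraction of 'boredom-AU' frames that were actually labeled disengaged."""
    # Frames where the boredom-associated AUs (AU4, AU7, AU12) are jointly active.
    boredom_active = (
        (df["AU04_r"] > threshold)
        & (df["AU07_r"] > threshold)
        & (df["AU12_r"] > threshold)
    )
    if boredom_active.sum() == 0:
        return float("nan")
    # `label` is assumed to be -1 (disengaged), 0 (neutral), or 1 (engaged).
    return (df.loc[boredom_active, "label"] < 0).mean()
```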
Head pose, to our surprise, had a significant impact on all classification experiments. When we analyzed the feature importance reports, we observed that the y-axis values were the main components. While watching the videos, we also observed that our participants (especially participant B) tended to nod while listening carefully. We can generalize this outcome intuitively: nodding and similar head-pose behaviors can be key components of engagement analysis.
Before our analysis, we expected the major contribution of OpenPose features to come from changes in hand movements, head positions, and the distance between the participants. At the beginning of the analysis, we planned to remove the lower-body keypoints, as most of these points are not visible but are predicted by the model. We decided to keep them, however, because participants stood up, talked to each other, or left the class, which reflects changes in engagement levels. In the explainability analysis, we saw that these keypoints had a major impact on the EBM's decisions. Future work should therefore include either better keypoint prediction for occluded leg movements or an increased camera field of view to capture the whole body.
Effective Data Analysis Practices
Combining features based on feature importance: Combining features from different knowledge sources can enhance classification accuracy significantly, but it can also cause overfitting. In the individual classification tasks, we observed that 3D face landmarks and head position information are the main indicators of successful engagement level classification, and combining them resulted in the highest scores in both the classification and explainability experiments. Although combining features can improve classifier performance significantly, simply adding all features together does not have the expected impact, as the classification algorithms struggle to find a converging path. So, while combining features, we need to select relatively small feature sets that show high orthogonality with each other, as in the sketch below.
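A minimal sketch of this selection-then-combination strategy, with illustrative array names and an assumed correlation cutoff standing in for a proper orthogonality analysis:

```python
import numpy as np

def combine_blocks(blocks: dict, keep: list) -> np.ndarray:
    """Concatenate the selected (n_samples, n_features) blocks column-wise."""
    return np.hstack([blocks[name] for name in keep])

def drop_redundant(X: np.ndarray, max_abs_corr: float = 0.95) -> np.ndarray:
    """Greedily drop columns highly correlated with an already-kept column."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep_idx = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < max_abs_corr for k in keep_idx):
            keep_idx.append(j)
    return X[:, keep_idx]

# Hypothetical usage: keep only the high-importance blocks, prune
# near-duplicates, then train any classifier (e.g., a random forest or EBM):
# blocks = {"face3d": ..., "head_pose": ..., "gaze": ...}
# X = drop_redundant(combine_blocks(blocks, keep=["face3d", "head_pose"]))
```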
Crafting new features: We also tested creating additional features that might be relevant to real-life situations. For example, we hypothesized that students moving closer together in a classroom could be an indicator of engagement. So, we calculated the Euclidean distance between the participants' body keypoints and trained the classifiers with this closeness feature. However, the experiments did not yield the expected classification accuracy.
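A hypothetical reconstruction of that closeness feature, assuming OpenPose's usual per-person layout of (joint, [x, y, confidence]); the confidence cutoff and the joint averaging are our own illustrative choices:

```python
import numpy as np

def closeness(kp_a: np.ndarray, kp_b: np.ndarray) -> float:
    """Distance between the mean (x, y) of each participant's detected joints."""
    def center(kp: np.ndarray) -> np.ndarray:
        visible = kp[kp[:, 2] > 0.1]        # keep joints with some confidence
        if len(visible) == 0:
            return np.full(2, np.nan)       # no joints detected in this frame
        return visible[:, :2].mean(axis=0)  # mean over detected joints
    return float(np.linalg.norm(center(kp_a) - center(kp_b)))
```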
Adding more and more features: Increasing the feature count with a limited dataset size results in weak accuracy scores, as converging to an optimal solution becomes a challenge. Selecting a subset of features is much more useful than feeding all features to the networks, and it also reduces training and inference time. In our experiments, a 400-length feature vector took approximately 10 times more training time than a 200-length feature vector.
Testing Deep Learning Architectures
Our experiments demonstrated that carefully selecting pre-trained feature extractors in a transfer learning approach is the most important factor in achieving good accuracy. Currently, the most effective and high-accuracy models are run on top of MobileNet (for face-area-fed models) and MoViNet (group-activity-video-fed models) architectures. We encourage researchers to test different feature extractors and mid-level representations that could increase model accuracy significantly.
In addition, the dataset is limited in terms of generalizing the capabilities of the models. As we elaborated in more detail in Section 6.8, the dataset is collected in a university setting where participants are from the same department. This limits the affective, cultural, behavioral, cognitive, and pedagogical representation of the model. So, the adopters and researchers should investigate the ethical considerations related to representability and bias.
Ethical Considerations
At the beginning of our development process, our vision for the classroom scenario was to support students and teachers in observing engagement patterns and interpreting their engagement levels. Yet, one could be concerned about using the overall system as a classroom surveillance tool that processes personally identifiable data to classify behavior, attitudes, and preferences. Privacy concerns have been a prominent challenge in the learning analytics field [27,28,29,30,31,32]. Williamson et al. list four emerging challenges in developing LADs [33]: (1) protecting participants' privacy while also including enough demographic information, (2) surveillance concerns, (3) neglecting pedagogies that fall outside the dominant narrative, and (4) making LADs maintainable in terms of software development. For each of these emerging issues, we summarize our approach to help researchers and practitioners who utilize our prediction pipeline.
1. Protecting participant identities: Our pipeline does not require collecting any demographic information. If a third party adapts our pipeline, they can run the models anonymously without requiring any identifiers. Throughout the use of our system, teachers have access to all data, but students do not have access to other students' engagement levels. Currently, we identified two main users: teachers and students. Other stakeholders, such as policy-makers, might need demographic information to make nationwide decisions. At this point, the engagement data should be aggregated in a privacy-preserving way to protect personal identifiers [34].
2. Addressing surveillance concerns: Our system gives only teachers access to personally identifiable data of their students. A student can only see video data from other members of their own study group. Our deep learning architectures do not submit any information to third-party software. For each step of our system, we included a step-by-step explanation of data usage in an end-user-readable way, and we also suggest this approach to adapters of our system.
3. Considering implicit pedagogies: Using our overall system, students and teachers can explore learning patterns that fall outside the dominant narrative. The final dashboard aims to help students interpret their engagement levels. If other researchers and adapters of the system intend to give suggestions based on their pedagogic approach, they should carefully support their arguments by showing direct links to ML model features.
4. Maintaining the software: Making our dataset, pipeline, models, and dashboard open-source and presenting our dashboard on Observable improves software maintainability significantly. Using Observable, researchers and developers can fork our interactive notebook and create custom dashboards based on their needs. They can also access our data pre-processing scripts and DL model training code through the project's GitHub repo (https://github.com/asabuncuoglu13/classroom-engagement-dataset), which is currently active and open-source.
Lastly, we shared these resources under a share-alike license, so adapters should also make their code open-source, which increases maintainability and sustains these ethical considerations.
Considerations for Adapting this System into Classrooms
In a classroom environment, not every moment can be engaging; the goal is generally to minimize long disengaged stretches to keep students' motivation high. It is therefore vital to help teachers analyze ML-based reports by combining multiple data sources. Based on our experience, we advise teachers to use ML-based engagement analysis when they already have some insight into students' characteristics, such as habits, likes, and dislikes. The ML models can only point out engaged and disengaged moments based on facial action unit movements, body positions, and voice duration. This kind of guidance can be useful when students work together in a crowded classroom where the teacher cannot watch every group's behavior. Even then, the teacher still needs to take action, which is only possible by re-evaluating the group dynamics and students' long-term behaviors.
Limitations
Self-Evaluation of Engagement: Self-evaluating one's engagement level is challenging, as defining engagement in all its aspects is still difficult even for researchers. Before the self-evaluation process, all participants completed a fifteen-minute training session that introduced the definition and dimensions of classroom engagement. Unlike existing research, our system particularly focuses on capturing the multimodal aspect of classroom engagement. The labels of our dataset are inherently ambiguous, as each label represents a multi-dimensional, complex construct: 'engagement.' We collected data from group activities in which students followed YouTube tutorials, held group discussions, and completed hands-on, tangible activities. In this sense, our dataset is the only dataset collected in a controlled environment that represents the dynamic nature of learning.
Representation Bias: The collected data is from a small group, representing a very limited analysis of a small community. Considering these limitations, this data is limited for model training purposes but presents a unique dataset that is useful for exploratory analysis and testing.
Conclusion
Determining students' classroom engagement levels by attending to the right details is challenging, even for expert teachers. The multi-dimensional definition of engagement requires assessing students' affective, behavioral, and cognitive states. Developing a machine learning-based solution is challenging because the students' self-evaluation considers multiple aspects, while the recorded data presents only a few observable features. This paper presented a new dataset, its exploratory analysis, and baseline deep learning architectures to accelerate research on this challenging task. The dataset contains 26,540 frames of audiovisual recordings with self-evaluation scores for each frame. Image frames, video slices, OpenFace and OpenPose feature vectors, audio formats, exploratory analysis scripts, and deep learning model architectures are open-sourced to increase the accessibility of this dataset for both the ML and education fields. Our image and video classification networks achieved up to 68% test accuracy on group-level engagement. We believe our comprehensive, human-centered, open-source machine learning pipeline can accelerate research in classroom engagement prediction.
Temperature Modulates Plant Defense Responses through NB-LRR Proteins
An elevated growth temperature often inhibits plant defense responses and renders plants more susceptible to pathogens. However, the molecular mechanisms underlying this modulation are unknown. To genetically dissect this regulation, we isolated mutants that retain disease resistance at a higher growth temperature in Arabidopsis. One such heat-stable mutant results from a point mutation in SNC1, an NB-LRR-encoding gene similar to disease resistance (R) genes. Similar mutations introduced into a tobacco R gene, N, confer defense responses at elevated temperature. Thus R genes or R-like genes involved in recognition of pathogen effectors are likely the causal temperature-sensitive component in defense responses. This is further supported by snc1 intragenic suppressors that regained temperature sensitivity in defense responses. In addition, the SNC1 and N proteins showed reduced nuclear accumulation at elevated temperature, which likely contributes to the inhibition of defense responses. These findings identify a plant temperature-sensitive component in disease resistance and provide a potential means to generate plants adapted to a broader temperature range.
Introduction
Temperature is a major environmental factor that regulates plant growth and development as well as plant interactions with other organisms [1]. Plants respond to small temperature changes, and yet temperature signaling is largely unknown in plants [2]. Temperature is known to influence disease resistance to bacteria, fungi, viruses, and insects, and different host-pathogen interactions respond differently to different temperature ranges [3]. A high temperature very often inhibits disease resistance or plant immunity [4], although low temperature also leads to reduced plant defense in some cases [5]. Despite the fact that temperature sensitivity poses a challenge to agriculture in the current global climate change scenario, the molecular basis for the high-temperature inhibition of plant immunity is unknown.
Plant immunity occurs at multiple levels and can be largely divided into two branches. One is a general resistance responding to common features of pathogens named microbial- or pathogen-associated molecular patterns (MAMPs or PAMPs). The second immunity branch responds to pathogen virulence factors or effectors. This cultivar-specific resistance, or effector-triggered immunity (ETI), is induced upon specific recognition of the pathogen race-specific avirulence (Avr) gene product by a disease resistance (R) gene of the host plant. This 'gene-for-gene' interaction leads to rapid and efficient defense responses, including a form of programmed cell death named the hypersensitive response (HR), that restrict the growth of pathogens. R proteins of the largest class have nucleotide-binding (NB) and leucine-rich repeat (LRR) domains. The amino-termini of these proteins are either of the Toll and interleukin-1 receptor (TIR) type or the coiled-coil (CC) type. The multiple layers of plant immunity reflect a co-evolution of host plants and pathogens.
Heat sensitivity of disease resistance has been observed in both basal defense responses and R gene-mediated defense responses. For instance, Arabidopsis plants are more susceptible to virulent Pseudomonas syringae pv. tomato (Pst) DC3000 at 28°C than at 22°C [6]. Resistance to tobacco mosaic virus (TMV) conferred by the N gene is effective at 22°C, but is abolished at 30°C [7]. Resistance to root-knot nematodes conferred by the Mi-1 gene in tomato is inactive above 28°C [8]. HR induced by the Arabidopsis RPW8 gene against powdery mildew is suppressed by temperatures above 30°C [9]. Arabidopsis resistance to avirulent Pst DC3000 strains with AvrRpt2, AvrRps4, or AvrRpm1 effectors exhibited at 22°C is inhibited at 28°C [6]. Resistance against the fungal pathogen Cladosporium fulvum is conferred by Cf4 and Cf9 in tomato, and HR mediated by these two genes can be suppressed at 33°C [10]. A number of mutants with upregulated defense responses are also found to be temperature sensitive. The bon1 mutant exhibits a dwarf phenotype at 22°C but not at 28°C due to a suppression of defense responses mediated by SNC1 at elevated temperature [11]. SNC1 is an NB-LRR type of R-like gene closely related to the R genes RPP4 and RPP5 [12], and the gain-of-function mutant snc1-1 exhibits a temperature-sensitive growth and defense phenotype [11]. Similarly, autoimmune responses mediated by R-like genes in F1 hybrids between Arabidopsis accessions could be attenuated by a moderate increase in growth temperature [13].
The temperature effect on defense signaling is sometimes thought to be an indirect physiological change caused by global alterations in metabolism and membrane properties, among others. However, it is possible that a common mechanism for temperature sensitivity exists across different disease resistance systems, because many of them share similar signaling molecules and use similar signaling cascades. Some of the signaling components are themselves modulated by temperature. For instance, EDS1 and PAD4, two regulators of both basal and R-mediated disease resistance, have a higher steady-state expression level at 22°C than at 28°C [11]. Salicylic acid (SA), a signal for systemic acquired resistance, is regulated by temperature [14,15]. However, an initial attempt to alter temperature sensitivity by up-regulating EDS1 and PAD4 was not successful [6], and no systematic study had been carried out to investigate this temperature modulation of disease resistance at the molecular level.
Here we report a genetic screen for mutants with enhanced disease resistance at an elevated temperature. We show that the R-like gene SNC1 and the R gene N are the temperature-sensitive components responsible for temperature sensitivity in defense responses they each induce. Alterations in R proteins are sufficient to change temperature sensitivity of plant immune response and confer defense responses at elevated temperature. Furthermore, a high temperature reduces nuclear localization of SNC1 and N proteins, which likely contributes to the repression of defense responses. Therefore, temperature sensitivity of R proteins is an important mechanism underlying temperature modulation of plant immunity.
Results
Isolation of an int102 mutant that retains disease resistance at a high growth temperature
Wild-type Arabidopsis plants turn off defense responses in the absence of pathogens, as these responses usually compromise plant growth and sometimes cause cell death. The Arabidopsis snc1-1 mutant shows constitutive defense responses and a dwarf phenotype in a temperature-dependent manner; i.e., the mutant phenotypes are expressed at 22°C but not at 28°C [11] (Fig. 1A). The growth regulation by temperature in snc1-1 therefore serves as a model for investigating temperature modulation of defense responses. We carried out a screen for mutants that are defective in high-temperature inhibition of disease resistance in the snc1-1 background by EMS mutagenesis. Mutants retaining a dwarf phenotype at 28°C together with snc1-1 were isolated. One such mutant, int102-1 (insensitive to temperature), had an almost identical dwarf phenotype at both 22°C and 28°C, and it had a small size and curly leaf shape similar to the snc1-1 mutant grown at 22°C (Fig. 1A).
Further analysis revealed that the int102-1 snc1-1 mutant retained enhanced defense responses at 28°C and was resistant to the bacterial pathogen Pst DC3000 at both 22°C and 28°C. At 22°C, the wild-type plants supported a 56-fold increase in bacterial growth after a 3-day inoculation, while snc1-1 and int102-1 snc1-1 supported only a four- to five-fold increase in bacterial growth (Fig. 1B), indicating that snc1-1 and int102-1 snc1-1 exhibit enhanced resistance at a similar level at normal growth temperature. At 28°C, snc1-1 was as susceptible as the wild type and supported a 127-fold increase in bacterial growth. In contrast, there was little growth (a two-fold increase) of bacteria on int102-1 snc1-1 at 28°C (Fig. 1B). Therefore, the int102-1 snc1-1 mutant indeed has elevated defense responses at both temperatures, and the int102-1 mutation confers a temperature-insensitive, or heat-stable, immune response.
We found that the elevated defense response in this mutant is mediated by salicylic acid (SA) and PAD4. The expression of the defense response marker gene PR1 [16] was higher in int102-1 snc1-1 than in the wild type or snc1-1 at 28°C (Fig. 1C), suggesting an activation of the SA pathway. The nahG transgene, encoding an SA-degrading enzyme [17], was therefore introduced into int102-1 snc1-1, and this transgene indeed suppressed both the growth and the defense gene expression phenotypes of int102-1 snc1-1. The int102-1 snc1-1 nahG plants had wild-type morphology and no elevated PR1 expression at either 22°C or 28°C (Fig. 1D and data not shown). Similarly, the pad4 mutation, which impairs defense responses [18], suppressed the phenotypes of int102-1 snc1-1 (Fig. 1C and D), further demonstrating an upregulation of defense responses in the int102-1 snc1-1 mutant at 28°C.
Mutation in the R-like gene SNC1 is responsible for enhanced disease resistance at high temperatures
We cloned the INT102 gene based on its tight linkage to SNC1 in an F2 population of int102-1 snc1-1 (in the Col-0 accession) crossed to the wild-type Ws-2 accession. Analysis of 240 int-looking plants in this population revealed no recombination between INT102 and SNC1, indicating a tight linkage of the two genes. Sequencing the SNC1 gene in int102-1 snc1-1 revealed a G-to-A point mutation causing a change of glutamic acid to lysine (named snc1-3) at amino acid residue 640 in the second LRR motif of the LRR domain of SNC1 (Fig. 2A, S1). The int102-1 snc1-1 mutant is therefore named snc1-4, and it contains both the snc1-1 and snc1-3 mutations.
The snc1-4 mutation is likely gain-of-function, although it is recessive to the wild type and snc1-1. The same recessive property was observed for the gain-of-function mutation snc1-1 [12]. This recessive nature is likely due to haploid deficiency rather than loss-of-function, and it was demonstrated that the mutant SNC1-1 transgene induced the snc1-1 phenotype in transgenic plants [12]. To determine if the snc1-3 mutation is the causal mutation of the int phenotype, we generated transgenic Arabidopsis lines containing different forms of SNC1. The SNC1 protein was tagged by the green fluorescent protein (GFP) at the carboxyl-terminus and expressed under the control of its native promoter (named pSNC1::SNC1:GFP). Four forms of SNC1 fusions were created: the wild type (SNC1WT), SNC1-1, SNC1-3, and SNC1-4; and they were transformed into wild-type Col-0 plants, respectively. Primary transformants were grown first at 22°C for three weeks before being transferred to 28°C until seed setting, and their growth phenotypes were scored both at 22°C and after two weeks at 28°C (Fig. 2B). Transgenic Arabidopsis plants with the same transgene exhibited varying phenotypes, most likely due to varying expression levels of the transgene. Among the 10 pSNC1::SNC1WT:GFP lines generated, all but one exhibited morphological defects at 22°C, indicating an activation of defense responses. This is consistent with the previous finding that the wild-type SNC1 genomic fragment could induce autoactivation due to a higher SNC1 expression level than the endogenous one [19]. Among these, seven lines showed a dwarf phenotype at 22°C, but new tissues grown at 28°C did not exhibit visible growth defects, and we termed these rescued at 28°C. The remaining two lines showed a dwarf phenotype at 22°C that was partially rescued at 28°C. Among the nine lines of pSNC1::SNC1-1:GFP, all exhibited a dwarf phenotype. At 28°C, four lines were rescued, two lines were partially rescued, and three lines were not rescued. Among the eight lines of pSNC1::SNC1-3:GFP, one did not exhibit a morphological defect, one had a 28°C rescued dwarf phenotype, one had a partially rescued phenotype, and five had a non-rescued phenotype. Among the 12 lines with pSNC1::SNC1-4:GFP, one did not exhibit a growth defect, three had a 28°C rescued dwarf phenotype, one was partially rescued, and six had a 28°C non-rescued phenotype. In sum, the SNC1-3 and SNC1-4 constructs induced a significantly higher percentage of transgenic lines with a dwarf phenotype at both 22°C and 28°C, indicating that the snc1-3 mutation is the causal mutation of int102 and that the snc1-3 mutation alone (without snc1-1) is sufficient to induce defense responses at a higher temperature. Therefore, temperature sensitivity of disease resistance induced by SNC1 is controlled by the R-like gene itself rather than by other regulatory components, and a mutation in the R-like gene is sufficient to confer disease resistance at a high temperature.

Author Summary
It has been known that temperature modulates plant immune responses, but the molecular mechanisms underlying this modulation are unknown. Our study describes a novel finding that the NB-LRR type of R or R-like protein is the temperature-sensitive component of plant defense responses. R or R-like proteins have 'receptor'-like functions involved in specific recognition of pathogens. Through genetic screens and targeted mutagenesis, we found that alterations in the R-like gene SNC1 and the R gene N can change the temperature sensitivity of defense responses. Further, an elevated temperature reduced the nuclear accumulation of SNC1, which likely contributes to the inhibition of disease resistance at high temperatures. Our study indicates that NB-LRR proteins mediate temperature sensitivity in plant immune responses.
The SNC1-4 mutant gene induces HR-like cell death at elevated temperature in Nicotiana benthamiana
We explored the possibility of using a transient assay to analyze SNC1 activity and its protein localization, as we could not detect SNC1:GFP signals by microscopy in transgenic Arabidopsis. Because many R genes induce HR-like cell death in N. benthamiana (Nb) when co-expressed with their elicitors, and some activated forms of R genes induce cell death in the absence of their elicitors, we tested the cell death-inducing activities of different forms of SNC1 expressed under the strong 35S promoter by Agrobacterium-mediated infiltration (agro-infiltration) in Nb. In agro-infiltration experiments of more than six replicates, we saw a correlation of cell death-inducing activities in Nb with defense response (dwarf)-inducing activities in Arabidopsis for the different forms of the SNC1 gene (Fig. 2C). No visible cell death was observed for p35S::SNC1WT:GFP at 22°C or 28°C. On the other hand, the p35S::SNC1-1:GFP fusion induced visible cell death at 22°C but not at 28°C. In fully infiltrated leaves, small areas (more than 2 mm in diameter) underwent cell death manifested by collapsed leaf cells. In contrast, both p35S::SNC1-3:GFP and p35S::SNC1-4:GFP induced visible cell death at both temperatures. These results further support the conclusion that the snc1-3 mutation is the causal mutation for heat-stable resistance.
Figure 1. (A) The dwarf phenotype of snc1-1 at 22°C is suppressed by a higher temperature of 28°C, while the int102 snc1-1 (snc1-4) mutant has the same dwarf phenotype at both 22°C and 28°C. Shown are plants 4 weeks old at 22°C and 3 weeks old at 28°C. (B) The int102 mutation enables snc1-1 to retain resistance to a virulent pathogen at 28°C. Shown is the growth of Pseudomonas syringae pv. tomato (Pst) DC3000 in the wild-type, snc1-1, and int102-1 snc1-1 (snc1-4) plants at 22°C and 28°C. The t-test shows a significant difference in growth at 3 days post-inoculation (3 dpi) between the wild type and int102 snc1-1 at both 22°C and 28°C (P = 0.007 and 0.001, respectively). It also shows a significant difference at 3 dpi between the wild type and snc1-1 at 22°C (P = 0.012) but not at 28°C (P = 0.415). Error bars represent standard deviations of three biological repeats. The experiments were repeated at least three times and similar results were obtained. (C) Expression of defense genes is upregulated in int102-1 snc1-1 (snc1-4) at both 22°C and 28°C, and this upregulation is dependent on PAD4. Shown are PR1 and SNC1 expression in three-week-old plants analyzed by RNA blot. rRNAs were used as loading controls. (D) The dwarf phenotype of int102-1 snc1-1 (snc1-4) is suppressed by pad4 and nahG. Shown are the wild type, int102-1 snc1-1 (denoted as +), int102-1 snc1-1 pad4, and int102-1 snc1-1 nahG grown at 22°C and 28°C before bolting. doi:10.1371/journal.ppat.1000844.g001
The rit1 mutation suppresses defense responses in snc1-4 only at elevated temperature
To further investigate the mechanism underlying temperature sensitivity of defense responses, we carried out a suppressor screen in the heat-stable snc1-4 mutant background. Mutants that regained high-temperature inhibition of defense responses were isolated and named rit (revertant of int). One such mutant, rit1 snc1-4, had a snc1-1-like phenotype: dwarf at 22°C but wild-type-like at 28°C, suggesting a regaining of temperature sensitivity (Fig. 3A). Correlated with its growth defect, the rit1 snc1-4 mutant had enhanced disease resistance to the virulent pathogen Pst DC3000 at 22°C, and this elevated defense was suppressed at 28°C (Fig. 3B). At 22°C, Pst DC3000 had a similar growth reduction in rit1 snc1-4 as in snc1-1 and snc1-4 compared to the wild-type Col (P = 0.24 and 0.11, respectively). At 28°C, snc1-4 exhibited an inhibition of bacterial growth to a similar extent as at 22°C (P = 0.065). In contrast, rit1 snc1-4 lost the inhibition of bacterial growth at 28°C conferred by snc1-4 (P = 0.0001) and supported bacterial growth to a similar extent as snc1-1 (P = 0.064). Therefore, rit1 indeed reverses the heat-stable resistance phenotype to the heat-sensitive phenotype.
The rit1 mutant has a missense mutation in SNC1
We found that rit1 is an intragenic suppressor of snc1-4. There was no phenotypic segregation in the F2 progeny of a cross of rit1 snc1-4 with wild-type Col-0 or Ler grown at 28°C, indicating that the rit1 mutation is very closely linked to the SNC1 gene. Sequencing the entire SNC1 genomic fragment in the rit1 snc1-4 mutant identified a G-to-A point mutation resulting in a serine substitution of glycine at amino acid residue 380 (Fig. 3C, S1). We named this G380S mutation snc1-5, and the new allele carrying the snc1-1, snc1-3, and snc1-5 mutations snc1-6 (Fig. 3C). This glycine residue resides immediately after the putative GxP or GLPL motif in the NB-ARC domain. This motif was previously identified as important for nucleotide binding, and mutations in residues close to the motif might compromise activation of NB-LRR proteins [20].
A second intragenic suppressor, named rit4, was identified from the same rit screen. This mutant is independent of rit1, as it was isolated from a different mutagenesis pool and had an additional phenotype unrelated to defense. Interestingly, we found the same G-to-A alteration resulting in a G380S mutation in rit4. That two independent but identical mutations result in the same rit phenotype confirms that snc1-5 is indeed the mutation responsible for reverting the temperature insensitivity of snc1-4. This conclusion is further supported by the SNC1-6 activity in the Nb transient expression system. The snc1-5 mutation was introduced into the p35S::SNC1-4:GFP construct to create p35S::SNC1-6:GFP. While SNC1-4 induced cell death in Nb at both 22°C and 28°C, SNC1-6 induced cell death only at 22°C but not at 28°C (Fig. 3D). Thus, the snc1-5 mutation appears to suppress the heat-stable SNC1-4 activity specifically at 28°C and does not significantly suppress SNC1-4 activity at 22°C. This notion is further supported by the failure of the snc1-5 mutation to inhibit the 22°C activity of SNC1-1. When the snc1-5 mutation was introduced into SNC1-1, p35S::SNC1-1,5:GFP was able to induce cell death at 22°C but not at 28°C, similarly to p35S::SNC1-1:GFP (Fig. 3E).
With the identification of different forms of SNC1 conferring defense responses of different temperature sensitivity, we conclude that the NB-LRR gene SNC1 is the temperature-sensitive component causing the temperature sensitivity of the entire defense response it induces. An elevated temperature thus likely inhibits plant immunity through a very early component of the signaling pathway, the NB-LRR genes.
Temperature affects subcellular accumulation of the SNC1 proteins
We further investigated the molecular basis underlying the temperature sensitivity of SNC1. The SNC1 transcript level was higher in snc1-1 than in the wild type at 22°C but not at 28°C, while it was higher in snc1-4 at both temperatures (Fig. 1C). Consistent with previous findings [11,19], regulation of SNC1 at the transcript level is largely due to a feedback amplification through PAD4, as the snc1-4 pad4 double mutant had the same amount of SNC1 transcript as snc1-1 pad4 (Fig. 1C). Therefore, upregulation of the SNC1 transcript by the snc1-3 mutation at a high temperature is unlikely to be the primary cause of the heat-stable resistance.
Because an extremely high temperature of 37°C was shown to greatly inhibit the accumulation of the NB-LRR type R protein MLA [21], we investigated whether temperature sensitivity is due to differential accumulation of SNC1 proteins at 22°C and 28°C. In transgenic Arabidopsis with pSNC1::SNC1:GFP, we were able to detect weak GFP signals by Western blot, although no GFP signals could be visualized by microscopy. Three independent lines with each of the four wild-type and mutant SNC1 transgenes were analyzed by protein blots for GFP expression at 22°C and 28°C (Fig. S2, Text S1). No dramatic differences in expression level were observed among the different SNC1 versions or between 28°C rescued and 28°C non-rescued lines. There was a slight reduction of SNC1:GFP at 28°C compared to 22°C in the 28°C rescued lines but not much in the non-rescued lines. Whether this reflects a biologically significant reduction or a dilution of signals by non-transgenic plants from the segregating population is yet to be determined.
Figure 3. (A) The rit1 mutant regained a temperature-sensitive growth phenotype. The snc1-4 mutant had a dwarf phenotype at both 22°C and 28°C compared to the wild-type Col, while rit1 snc1-4 was wild-type-looking at 28°C but dwarf at 22°C. (B) The rit1 mutant has enhanced disease resistance at 22°C but not at 28°C. Shown is the growth of Pst DC3000 in the wild-type, snc1-1, snc1-4, and snc1-4 rit1 plants at 22°C and 28°C. The snc1-4 mutant is more resistant than the wild type at both 22°C and 28°C, while the snc1-4 rit1 mutant is more susceptible than snc1-4 at 28°C but as resistant as snc1-4 at 22°C. The t-test on growth at 3 dpi at 22°C shows a significant difference between the wild type and the other three mutants, snc1-1, snc1-4, and snc1-4 rit1 (P = 0.0007, 0.0001, and 0.0012, respectively), and no significant difference between snc1-4 and snc1-4 rit1 (P = 0.13). The t-test on growth at 3 dpi at 28°C shows a significant difference between snc1-4 and the wild type or snc1-1 (P = 0.003 and 0.001, respectively) and between snc1-4 and snc1-4 rit1 (P = 0.004), but no significant difference between snc1-1 and snc1-4 rit1 (P = 0.126). Error bars represent standard deviations of three biological repeats. The experiments were repeated two times and similar results were obtained. (C) Diagram of mutation sites in snc1-6. The snc1-5 mutation is in the NB-ARC domain. (D) Activity assay of the SNC1-6 mutant gene in Nb. Shown are Nb leaves three days after agro-infiltration with SNC1-4:GFP and SNC1-6:GFP. While SNC1-4 induces cell death at 22°C and 28°C, SNC1-6 induces cell death only at 22°C. (E) Effect of the snc1-5 mutation on SNC1-1 activity. Shown are Nb leaves three days after agro-infiltration with SNC1-1:GFP and SNC1-1,5:GFP. Both SNC1 forms induced cell death at 22°C but not 28°C. doi:10.1371/journal.ppat.1000844.g003
A significant correlation was observed between the amount of nuclear accumulation of the SNC1 protein and its activity. Because we could not detect GFP signals in transgenic plants, we transiently expressed the different forms of p35S::SNC1:GFP in Arabidopsis protoplasts. Despite a low expression efficiency compared to p35S::GFP, three expression patterns were observed for the SNC1:GFP proteins: nucleus-only, ubiquitous (cytosol, plasma membrane, and nucleus), and no nucleus (cytosol, or cytosol and plasma membrane) (Fig. 4A, B). As it was difficult to distinguish the 'no nucleus' signal from background, we scored the expression patterns of the first two categories in 200 to 300 protoplasts for each transformation. For protoplasts transformed with SNC1WT at 22°C, 1.8% showed nucleus-only signal and 2.7% showed ubiquitous signal. In contrast, SNC1-1, SNC1-3, and SNC1-4 exhibited nucleus-only signal but no ubiquitous signal (Fig. 4B). At 28°C, neither SNC1WT nor SNC1-1 exhibited the nucleus-only signal, while the majority of SNC1-3- and SNC1-4-expressing cells exhibited the nucleus-only signal (Fig. 4B). Therefore, there is a general correlation of high SNC1 activity with more nuclear signal. At 22°C, all three mutant forms have higher nuclear accumulation than the wild-type form. At 28°C, nuclear accumulation of SNC1WT and SNC1-1, but not that of SNC1-3 or SNC1-4, was reduced. Thus, an elevated temperature reduces the nuclear accumulation of the SNC1 protein, and certain mutations such as snc1-3 can resist this inhibition and induce defense at high temperatures.
Nuclear accumulation of SNC1 at elevated temperature might be critical for its activity
A similar inhibition of nuclear accumulation by temperature was also observed for SNC1 expressed in Nb (Fig. 4C). A very weak GFP signal was detected for the SNC1WT:GFP fusion protein, and the signal was mostly in the cytosol and plasma membrane. On the other hand, the mutant fusion proteins SNC1-1, SNC1-3, and SNC1-4 had stronger fluorescent signals, and the signals were mainly localized to the nucleus at 22°C. At 28°C, SNC1-1:GFP was found in the cytosol and plasma membrane, and no nuclear localization was observed. In contrast, SNC1-3:GFP and SNC1-4:GFP both accumulated mostly in the nucleus, with SNC1-4 showing a stronger signal. We further tested the effect of the snc1-5 mutation on the localization of the SNC1-4 protein. Correlated with its cell death-inducing activity at 22°C but not 28°C, SNC1-6:GFP was localized in the nucleus at 22°C but not at 28°C when expressed in Nb (Fig. 4F). Thus, an elevated temperature could reduce the nuclear accumulation of the heat-sensitive SNC1-1 and SNC1-6 but not of the heat-stable SNC1-3 and SNC1-4, indicating that nuclear localization of SNC1 might be critical for its activity.
The nuclear accumulation of SNC1 appears to be required for the enhanced defense responses at elevated temperatures. A nuclear export signal (NES) [22] was added to the SNC1-4:GFP fusion; it abolished the nuclear localization of SNC1-4:GFP at 28°C and also resulted in weaker expression of the protein (Fig. 4E). The resulting SNC1-4:NES:GFP could no longer induce cell death at either 22°C or 28°C, in contrast to SNC1-4:GFP (Fig. 4F). A similar suppression of activity by the NES was also observed for the SNC1-3:GFP fusion (Fig. 4F). Thus, nuclear localization at high temperatures is likely critical for the mutant SNC1 proteins to induce heat-stable defense responses. It remains to be determined, however, whether the total amount of SNC1-4:NES:GFP expression is reduced compared to SNC1-4:GFP and, if so, whether the SNC1-4 protein becomes less stable outside the nucleus.
Nuclear accumulation of the SNC1 protein is an early event in activating defense responses at elevated temperature
To assess the relative position of the nuclear accumulation of SNC1 in the signaling events of disease resistance at high temperature, we analyzed SNC1-4 localization in a few mutants that suppress the snc1-4 mutant phenotype. In addition to PAD4 and SA, we found that MOS3 and MOS6, two genes required for snc1-1 activity [23,24], are also required for disease resistance in snc1-4 at high temperature. Both the snc1-4 mos3 and the snc1-4 mos6 double mutants exhibited a largely wild-type growth phenotype at 22°C and 28°C (Fig. 5A).
We found that the nuclear accumulation of SNC1-3 and SNC1-4 at 28°C was inhibited by the mos3 and mos6 mutations. No protoplasts showed nucleus-only signal in mos3 or mos6 for SNC1-4 (Fig. 5B, 4B). In contrast, the loss-of-function mutation of PAD4 did not alter the nuclear accumulation of the SNC1 mutant proteins (Fig. 5B), although it suppressed the snc1-4 mutant phenotype (Fig. 1C, D). These data suggest that MOS3 and MOS6 mediate SNC1-induced defense responses via an early event influencing the localization of the R-like protein. MOS3 encodes a putative nucleoporin, Nup96, which could be responsible for RNA transport through the nuclear envelope; how it influences SNC1 localization remains to be determined. MOS6 encodes a putative importin α3, which may affect R protein shuttling directly. That pad4 affects disease resistance but not SNC1 localization in protoplasts further indicates that nuclear localization of the SNC1 protein at high temperature is a critical early event in heat-stable defense responses.
Mutations in the R gene N confer defense responses at elevated temperature
To determine if our observation of temperature-sensitive induction of defense responses by the Arabidopsis R-like gene SNC1 is a general phenomenon for the NB-LRR type of R genes, we analyzed the defense response induced by the R gene N, which confers resistance to tobacco mosaic virus (TMV) only at temperatures below 28°C [7,25]. A mutation of Y646K corresponding to snc1-3 (E640K) was introduced into a genomic fragment of the N gene described previously [26] (Fig. 6A). Because the EK640LD motif in SNC1-3 forms a potential sumoylation site, we also made a mutation in the N gene to introduce an N648D change corresponding to D642 of SNC1, so that the double mutant Y646K N648D could potentially provide a sumoylation site in the N protein as in the Arabidopsis SNC1-3 protein (Fig. 6A). When co-expressed with its elicitor p50 in Nicotiana tabacum (Nt) by agro-infiltration, the wild-type N gene triggered HR-like cell death at 22°C but not at 30°C (Fig. 6B). In contrast, mutant N genes containing the Y646K or Y646K N648D mutations, when co-expressed with p50, induced cell death in Nt at both 22°C and 30°C (Fig. 6B). To our surprise, the N648D mutant N gene also induced cell death at 30°C (Fig. 6B). These mutations do not appear to confer constitutive auto-activity, because they did not cause cell death in the absence of p50. Thus, the N gene is responsible for the temperature sensitivity of HR associated with TMV resistance, indicating that other NB-LRR type R genes might also function as temperature-sensitive components in defense responses and that temperature sensitivity can be altered by specific mutations in R proteins to confer heat-stable disease resistance.
We determined whether temperature sensitivity of the N-mediated defense response correlates with N protein localization, as it does for the SNC1 protein. A fusion protein of N and citrine was shown previously to localize to the nucleus when expressed together with p50 in N. benthamiana [27]. We found that, in contrast to the nuclear localization of N:citrine at 22°C when co-expressed with p50 (Fig. 6C), no signal could be detected in the nucleus when infiltrated plants were incubated at 30°C (Fig. 6C). This indicates that nuclear localization of an activated wild-type R protein is subject to temperature modulation, similar to the active form of the mutant R protein SNC1-1.
Discussion
Despite the fact that temperature regulates many different growth and developmental processes, the temperature-sensitive components that control this sensitivity are largely unknown in plants. Temperature sensitivity in plant disease resistance is a phenomenon reported as early as 1969 and observed in various plant-pathogen interactions. Through a genetic screen for Arabidopsis heat-stable mutants that retain defense responses normally inhibited at elevated temperatures, we identified the NB-LRR type R-like gene SNC1 as a key component responsible for temperature sensitivity. A point mutation in the SNC1 gene is sufficient to induce defense responses at an elevated temperature (Fig. 1). Through a second genetic screen, we identified a SNC1 mutation that appears to inhibit SNC1 activity specifically at high temperature (Fig. 5). This finding reinforces the notion that the NB-LRR-encoding gene SNC1 is temperature sensitive. Furthermore, a mutation similar to the heat-stable mutation identified in SNC1 was created in the N gene, and the mutant N gene was capable of inducing HR at a higher temperature (Fig. 6), indicating that the R gene N is responsible for the temperature sensitivity of N-mediated defense responses. Thus, we uncovered a mechanism for high-temperature inhibition of plant immune responses.
Figure 4-6 legends (recovered; panels as labeled in the original): (A) Subcellular localization of SNC1:GFP in protoplasts transformed as described in Fig. 2C; the protoplasts were incubated at 22°C and 28°C for 12 hours before the signals were analyzed with a ZEISS AXIO scope. Two representative pictures of each transformation are shown. (B) Quantification of SNC1:GFP localization in the wild-type (wt) and mutant (mos3 and mos6) protoplasts. Shown on the left are representative images of 'nucleus only' signal (top) and 'ubiquitous' signal (bottom). Shown on the right are percentages of cells with nucleus-only expression versus cells with ubiquitous localization. Data for cells in the 'no nucleus' category are not shown. (C) Subcellular localization of the SNC1:GFP proteins in Nb at 22°C and 28°C. Infiltrated leaves as described in Fig. 2C were analyzed by confocal microscopy one day before the onset of cell death. Shown are the GFP signals (green) and the overlay of signals from GFP and the membrane stain FM4-64 (red). Images were taken with a Leica TCS SP5 confocal microscope. Nuclear localization was observed for SNC1-1, SNC1-3, and SNC1-4 at 22°C, as well as for SNC1-3 and SNC1-4 at 28°C. The bar represents 20 µm. (D) Subcellular localization of SNC1-6:GFP in Nb at 22°C and 28°C. The snc1-5 mutation abolishes nuclear localization of SNC1-4 at 28°C but not at 22°C. (E) A nuclear export signal (NES) reduces the nuclear localization of SNC1-4. SNC1-4:GFP and SNC1-4:NES:GFP were agro-infiltrated in Nb. Shown are GFP signals at 22°C and 28°C taken with a ZEISS AXIO 2 plus scope. SNC1-4:NES:GFP had a weaker signal than SNC1-4:GFP, and a longer exposure time was used for its imaging. Figure 6: (B) The WT and mutant N genes were agro-infiltrated together with the elicitor p50 in Nt. The WT N gene induced cell death at 22°C but not 30°C, while the N mutant genes with Y646K, N648D, or Y646K N648D induced cell death at both temperatures. (C) Subcellular localization of the N protein in Nb at 22°C and 30°C when co-expressed with p50. The N:citrine chimeric gene was agro-infiltrated in Nb together with its elicitor p50, and the citrine signal was monitored up to three days after infiltration. Nuclear localization of the N:citrine protein was observed at 22°C but not at 30°C. doi:10.1371/journal.ppat.1000844.g006
It is very likely that the NB-LRR type of R genes, rather than other signaling components, are responsible for temperature sensitivity in many other cases of R-mediated disease resistance. This mechanism may also account for temperature sensitivity in some lesion-mimic mutants and in hybrid necrosis. In those mutants or hybrids, upregulated defense responses and lesions could have arisen at least in part from R gene activation, similar to the upregulation of R-like genes in the bon1 or bon1 bon3 mutants [28,29]. Inhibition of R or R-like activity by a higher growth temperature could suppress the cell death and defense responses induced by those R or R-like genes.
It is not obvious whether there is any selective advantage to having a temperature-sensitive immune system. Structural constraints may have prevented the evolution of heat-stable R genes that are also properly regulated. R genes with heat-stable activity could be associated with a fitness cost that may not manifest in the laboratory. Nevertheless, heat-stable resistance does occur in nature. For instance, the tomato Mi-9 gene confers heat-stable resistance to root-knot nematodes. Though the gene has not yet been cloned, it has been shown to be a homolog of the heat-sensitive Mi-1 gene [30]. It will be interesting to see whether any such genes arose from changes in the R genes similar to snc1-3.
We propose that temperature sensitivity in defense responses is largely mediated through the NB-LRR-coding genes. Plant immunity is triggered when the total R or R-like activity, the product of protein amount and specific activity, is above a threshold (Fig. 7A). For the R or R-like protein, activity is intrinsically temperature sensitive. Consequently, the total R activity would fall below the threshold at elevated temperature, resulting in no defense. In contrast, mutant R or R-like proteins such as SNC1-3 have reduced temperature sensitivity and therefore can induce heat-stable defense responses. The temperature sensitivity could be intrinsic to the protein itself or could be mediated by R-interacting chaperones whose homeostasis is affected by temperature. An elevated temperature might also reduce the R protein amount and thus the total R activity [21,28]. An apparent reduction of the wild-type SNC1 protein at elevated temperature was observed in the transient Nb expression system and in transgenic Arabidopsis (Fig. 4C, S2). To what extent this reduction in protein level contributes to the reduced defense responses at elevated temperature remains to be determined.
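This threshold model can be stated compactly. The notation below is ours, introduced only for illustration, with the per-protein activity a(T) assumed to decrease with temperature:

```latex
% Threshold model for R-protein-triggered immunity (illustrative notation):
% N_R  = abundance of the R or R-like protein
% a(T) = per-protein activity at temperature T (assumed decreasing in T)
\[
  A_{\text{total}}(T) = N_R \cdot a(T), \qquad
  \text{defense is triggered} \iff A_{\text{total}}(T) \ge \theta .
\]
% Heat-stable variants such as SNC1-3 correspond to a flatter a(T), so that
% A_total at 28 degrees C still exceeds the threshold theta, whereas for the
% wild-type protein it falls below theta.
```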
It is not well understood how heat-stable mutations such as snc1-3 affect the activities of NB-LRR proteins, or how snc1-5 reverts the heat-stable activities. Although the snc1-3 mutation appears to induce autoactivation, as suggested by the SNC1-3 transgenic plants (Fig. 2), the Y646K and N648D mutations in N did not induce cell death in the absence of elicitors and are therefore probably not constitutively autoactive. The E640K (snc1-3) mutation in SNC1 and Y646K in N probably do not induce a local post-translational protein modification, because the N648D mutation in the N protein causes heat-stable activity just as Y646K does. These mutations also do not appear to generate a local nuclear localization signal, as shorter versions of the mutant SNC1:GFP proteins did not confer nuclear localization.
It has been hypothesized that activation of the NB-LRR R proteins opens the NB domain and possibly allows interaction of the amino-terminal domain with downstream signaling molecules [31,32]. We hypothesize that R proteins assume at least one transitional conformation (named T) between the OFF state and the ON state (Fig. 7B). The glycine close to the GLPL motif is required for the change from the T state to the ON state, and high temperature inhibits this transition. Elicitors and mutations like snc1-1 promote the change from the OFF state to the T state, while mutations like snc1-3 might enhance the transition from the T state to the ON state.
Nuclear localization is probably an immediate subsequent event after the R protein assumes the ON form. Regulated nucleocytoplasmic partitioning of key components is essential in hormone signaling, light signaling, temperature signaling, and plant-pathogen interactions [33,34,35,36]. Temperature has been shown to influence the localization of regulators of temperature signaling. For instance, HSFA1, a heat shock transcription factor, is equally distributed between cytoplasm and nucleus at a normal growth temperature and is predominantly nuclear after a heat shock [37]. HOS1, a negative regulator of cold response, is cytoplasmic at a normal temperature but is nuclear after exposure to a low temperature [38]. How and to what extent temperature influences nuclear localization of proteins will be an interesting subject to explore further.
Other mechanisms for temperature sensitivity must exist in plant immunity. For instance, basal resistance is inhibited by a higher growth temperature [6], indicating that regulatory components other than R genes are modulated by temperature as well. That the NB-LRR R proteins can mediate temperature sensitivity in disease resistance suggests that plants utilize different temperature sensors for different growth, developmental, and stress response processes. Future studies should further reveal the molecular basis of the temperature sensitivity of R genes and, more generally, of temperature modulation of gene expression, protein activities, and protein localization. The current climate change is causing an increased range and severity of plant diseases [39]. The knowledge gained from this study will potentially provide tools to engineer crop plants with heat-stable disease resistance and better adaptation to climate change.
Materials and Methods
Plant material and growth conditions
The Arabidopsis thaliana plants were grown in soil at 22°C or 28°C under constant light (approximately 100 µmol m⁻² s⁻¹) with a relative humidity between 40% and 60% for morphological phenotyping and gene expression analysis. Plants used for pathogen tests were grown under a photoperiod of 12 hr light/12 hr dark. Arabidopsis seedlings used for protoplast transformation were grown on solid medium with 1/2 MS salts, 2% sucrose, and 0.8% agar under a photoperiod of 8 hr light/16 hr dark. Nicotiana tabacum (Nt) and Nicotiana benthamiana (Nb) plants were grown in the greenhouse for three to four weeks and then acclimated at 22°C on a lab bench for at least a day before being used for cell death assays.
Mutant screen and map-based cloning
The snc1-1 seeds were treated with 0.25% EMS (ethyl methanesulfonate) for 12 hours. Approximately 40,000 M2 plants (derived from 4,000 M1 plants) were screened at 28°C for int mutants with the snc1-1-like dwarf phenotype normally seen at 22°C.
The snc1-4 seeds were mutagenized similarly. The M2 plants were screened for rit (revertant of int, i.e., wild-type-looking) mutants at 28°C.
The F2 populations for mapping the int or rit mutations were derived from genetic crosses between the mutants (in the Col-0 background) and wild-type plants in the Ws-2 background. Bulked segregant analysis was performed on pools of 40 plants with SSLP, CAPS, and dCAPS markers polymorphic between Col-0 and Ws-2 [40].
Generation of constructs
For the pSNC1::SNC1:GFP construct, a StuI site was added to the genomic fragment of SNC1 before the stop codon via polymerase chain reaction (PCR). An EcoRI- and StuI-digested fragment of this product was ligated in frame with GFP to generate a 3' SNC1:GFP construct. The PstI- and EcoRV-digested fragment of the 5' region of SNC1 from the BAC clone F5D3 (from the Arabidopsis Biological Resource Center) and the EcoRV- and PstI-digested fragment of the 3' SNC1:GFP construct were ligated into the PstI site of pCAMBIA1300 to generate the pSNC1::SNC1:GFP construct. The SNC1-1, SNC1-3, and SNC1-4 mutations were introduced into pSNC1::SNC1:GFP through site-directed mutagenesis with the ''QuikChange'' kit according to the manufacturer's instructions (Stratagene).
The wild-type N gene used for mutagenesis was described previously [26]. It is an HA-tagged genomic fragment of the N gene under the control of a 35S promoter. This N gene was subjected to site-directed mutagenesis with the ''QuikChange'' kit as described above.
All primer sequences will be made available upon request.
Transgenic plants generation
Agrobacterium tumefaciens strain GV3101 (Koncz and Schell, 1986) carrying the various SNC1 constructs was used to transform wild-type Col-0 plants via the standard floral dip method [42]. Primary transformants were selected on solid medium containing hygromycin.
Protoplast transformation
Protoplast isolation and transformation were carried out as previously described [43]. In brief, protoplasts were generated from wild-type and mutant seedlings grown on plates. After transformation, protoplasts were incubated at specific temperatures and the GFP signals were observed from 12 hours to 48 hours.
Transient expression in Nb and Nt
The binary vectors were transformed into Agrobacterium tumefaciens strain C58C1 [20] for transient expression. Agrobacterial cultures were grown overnight to an OD600 of 1.0 in liquid LB. Cells were then collected by centrifugation and resuspended in induction medium (10 mM MES, pH 5.7, 10 mM MgCl2, 200 µM acetosyringone) to an OD600 of 0.8. After incubation at room temperature for 3 hrs, the agrobacterial cells were infiltrated into the abaxial surface of Nb or Nt leaves using 1 ml needleless syringes. On average, four spots were used to infiltrate one whole Nb leaf. Infiltrated plants were subsequently incubated at 22°C or 28°C before the infiltrated leaves were examined for GFP signals under a microscope (Zeiss AXIO 2 plus or Leica TCS SP5) within a 48 hr period after inoculation.
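As a small arithmetic aid for the resuspension step above, the hypothetical Python helper below computes the volume of induction medium that brings a pelleted culture to the target OD600; the function name and example values are ours, not part of the published protocol.

```python
def resuspension_volume(od_culture, vol_culture_ml, od_target=0.8):
    """Volume of induction medium (ml) in which to resuspend pelleted
    Agrobacterium so that the final density equals od_target.

    Conservation of cells: od_culture * vol_culture = od_target * vol_out.
    """
    if od_target <= 0:
        raise ValueError("target OD600 must be positive")
    return od_culture * vol_culture_ml / od_target

# Example: 10 ml of overnight culture at OD600 = 1.0, diluted to OD600 = 0.8
print(resuspension_volume(1.0, 10.0))  # -> 12.5 ml
```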
RNA analysis
Total RNAs were extracted using Tri Reagent (Molecular Research, Cincinnati, OH) from leaves of 3-week-old plants. Twenty micrograms of total RNAs per sample were used for RNA gel blot analysis according to standard procedure [44].
Pathogen resistance assay
P. syringae pv. tomato DC3000 was grown for 2 to 3 days on KB medium and resuspended at 10^5 cfu (colony-forming units) per ml in a solution of 10 mM MgCl2 and 0.02% Silwet L-77. Two-week-old seedlings were dip-inoculated with the bacteria and kept covered for 1 h. The amount of bacteria in plants was analyzed at 1 h after dipping (day 0) and 3 days after dipping (day 3). Bacterial growth was determined as described previously [45].
Figure S1. Amino acid sequences of the SNC1 proteins. The TIR, NB-ARC, and LRR domains are colored. Mutated residues of snc1-1, snc1-3, and snc1-5 are underlined. Found at: doi:10.1371/journal.ppat.1000844.s001 (0.02 MB DOC)
Figure S2. Expression levels of the SNC1 proteins do not correlate with their protein activity at high temperature. Shown is Western blot analysis of SNC1:GFP expression in Arabidopsis transgenic plants with pSNC1::SNC1:GFP constructs by anti-GFP antibody. Lines with a 28°C rescued phenotype are indicated in red and those with a non-rescued phenotype in black. A cross-hybridizing band was used as loading control. Abbreviations: pWT: pSNC1::SNC1:GFP; p-1: pSNC1::SNC1-1:GFP; p-3: pSNC1::SNC1-3:GFP; p-4: pSNC1::SNC1-4:GFP.
| 2016-05-04T20:20:58.661Z | 2010-04-01T00:00:00.000 | {
"year": 2010,
"sha1": "1fa5fae70f206da81c0017a1d3d4ccc1864dd253",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1000844&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1fa5fae70f206da81c0017a1d3d4ccc1864dd253",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
18657553 | pes2o/s2orc | v3-fos-license | Small Intestinal Intraepithelial TCRγδ+ T Lymphocytes Are Present in the Premature Intestine but Selectively Reduced in Surgical Necrotizing Enterocolitis
Background Gastrointestinal barrier immaturity predisposes preterm infants to necrotizing enterocolitis (NEC). Intraepithelial lymphocytes (IEL) bearing the unconventional T cell receptor (TCR) γδ (γδ IEL) maintain intestinal integrity and prevent bacterial translocation in part through production of interleukin (IL) 17. Objective We sought to study the development of γδ IEL in the ileum of human infants and examine their role in NEC pathogenesis. We defined the ontogeny of γδ IEL proportions in murine and human intestine and subjected tcrδ−/− mice to experimental gut injury. In addition, we used polychromatic flow cytometry to calculate percentages of viable IEL (defined as CD3+ CD8+ CD103+ lymphocytes) and the fraction of γδ IEL in surgically resected tissue from infants with NEC and gestational age matched non-NEC surgical controls. Results In human preterm infants, the proportion of IEL was reduced by 66% in 11 NEC ileum resections compared to 30 non-NEC controls (p<0.001). While γδ IEL dominated over conventional αβ IEL early in gestation in mice and in humans, γδ IEL were preferentially decreased in the ileum of surgical NEC patients compared to non-NEC controls (50% reduction, p<0.05). Loss of IEL in human NEC was associated with downregulation of the Th17 transcription factor retinoic acid-related orphan nuclear hormone receptor C (RORC, p<0.001). TCRδ-deficient mice showed increased severity of experimental gut injury (p<0.05) with higher TNFα expression but downregulation of IL17A. Conclusion Complementary mouse and human data suggest a role of γδ IEL in IL17 production and intestinal barrier protection early in life. Specific loss of the γδ IEL fraction may contribute to NEC pathogenesis. Nutritional or pharmacological interventions to support γδ IEL maintenance in the developing small intestine could serve as novel strategies for NEC prevention.
Introduction
A critical, yet understudied, area in neonatology is the development of intestinal immune regulation in preterm infants, who are prone to exaggerated inflammatory host responses to bacterial antigens [1]. One example is necrotizing enterocolitis (NEC), a common, potentially lethal disease, primarily affecting preterm infants. Epidemiologic studies indicate that NEC incidence peaks at 32 weeks postmenstrual age, suggesting that there is a developmental window of susceptibility [2,3]. NEC is characterized by uncontrolled intestinal inflammation that can culminate in bowel necrosis [4][5][6]. Approximately 9,000 infants develop NEC in the United States each year, with reported mortality rates of 10-50% [7,8].
Intraepithelial lymphocytes (IEL) bearing the T cell receptor (TCR) γδ (γδ IEL) are the first type of T cell to colonize the epithelium during embryogenesis, providing important immunoprotective and immunoregulatory activities in the perinatal period when conventional TCRαβ T cell responses are not yet fully mature [9]. While the precise role of γδ IEL is not yet clearly defined, they appear to be critical for the maintenance of epithelial integrity through antibacterial defense, tight junction preservation, recognition of epithelial stress, regulation of inflammatory responses, and epithelial growth factor production [10][11][12][13][14][15]. The postnatal development of γδ IEL in the human preterm intestine is unknown. Given the immaturity of the intestinal epithelial barrier and its postulated role in NEC [16][17][18][19][20], we hypothesized that the developmental regulation of γδ IEL may relate to the window of NEC susceptibility in preterm infants and could represent a new target for disease prevention.
Here we report that γδ IEL are developmentally the prominent IEL subtype in the immature murine and human gut. However, we observed a specific reduction of γδ IEL proportions in the preterm ileum of NEC patients compared to gestational age matched preterm intestine resected for other indications. Loss of γδ IEL resulted in more severe experimental gut injury and inhibited gene expression of IL17 in mice, while IEL reduction in human samples correlated with downregulation of the IL17 transcription factor RORC. This first report on γδ IEL in the preterm gut suggests a novel target for prevention of severe intestinal complications of prematurity.
Ethics Statement
Fresh ileum tissue specimens from infants with NEC or non-NEC diagnoses were provided from the Vanderbilt Children's Hospital pathologist under a protocol approved by the Vanderbilt University Institutional Review Board. Informed consent was waived because all samples were de-identified and only demographic data pertinent to the study design (diagnosis and indication for tissue resection, age at time of tissue resection, gestational age, and sex) were collected from patient records prior to tissue release.
C57BL/6J and TCRδ-deficient (tcrδ−/−) mice (originally obtained from Jackson Laboratories) were bred at an animal facility at Emory University, and all studies were approved by the Emory University Institutional Animal Care and Use Committee (IACUC).
Isolation of human intraepithelial lymphocytes
Patient demographics and surgical indications for the non-NEC control tissues are shown in Table 1 and Table 2, respectively. All samples (NEC and controls) were from the ileum, and patients were matched for gestational age. We isolated IEL from surgical ileum specimens as previously described [21]. Briefly, the dissected mucosa was washed in HBSS medium without Ca2+ and Mg2+ [containing antibiotics, 5 mM EDTA (Sigma-Aldrich, St. Louis, MO), and 5% heat-inactivated fetal bovine serum (Atlanta Biologicals, Lawrenceville, GA)] for 20 min on a gentle rocker at room temperature. Cell suspensions were pelleted from the supernatant and washed twice in complete HBSS prior to counting using trypan blue exclusion. Cells were resuspended in freezing medium containing 50% Dulbecco's Modified Eagle's medium (DMEM), 40% heat-inactivated fetal bovine serum, and 10% dimethyl sulfoxide (DMSO) (Merck KGaA, Darmstadt, Germany). Cells were frozen in liquid nitrogen for storage until analysis at a concentration of approximately 1×10^6 cells/ml.
Flow cytometric analysis and sorting of IEL
We performed 7-color flow cytometric analysis of IEL using an LSRII flow cytometer (BD). IEL were thawed, washed in PBS, and counted prior to staining with a PE-TexasRed-conjugated amine viability dye (Invitrogen, Grand Island, NY) for 20 min at room temperature in the dark. Cells were then washed with FACS buffer [PBS containing 1% bovine serum albumin (Sigma-Aldrich) and 0.1% sodium azide (Sigma-Aldrich)] and stained with titrated amounts of PE-TexasRed-conjugated anti-CD14 and anti-CD19 (''dump channel'') (Invitrogen), PerCp-Cy5.5-conjugated anti-CD3 (BD), PE-Cy7-conjugated anti-CD8 (BD), PE-Cy5 (Tricolor)-conjugated anti-CD103 (Invitrogen), FITC-conjugated anti-TCRαβ (BD), PE-conjugated anti-TCRγδ (BD), and APC-conjugated anti-RORC (eBioscience, clone AFKJS-9). Flow data were analyzed with FlowJo software version 9.3 (Tree Star, Ashland, OR). IEL were identified as CD3+, CD103+, CD8+ cells and characterized as γδ IEL if cells were also TCRγδ+ and TCRαβ−. To confirm the purity of the IEL populations, we performed flow cytometry analysis on the remaining tissue following IEL preparation (lamina propria cells) and did not detect any CD103+ TCRγδ+ cells. We only analyzed viable surgical margins with adequate numbers of viable lymphocytes (''dump channel'' negative). All flow cytometric gating/analysis was confirmed by an immunologist (MTR) who was blinded to the sample origin. Fluorescence Minus One (FMO) controls were used to control for nonspecific signal.
Human RORC and occludin gene expression
Total RNA was extracted from 25 mg of fresh NEC and non-NEC ileum using the RNeasy Mini Kit or from six 10-micron sections of formalin-fixed, paraffin-embedded tissue pieces using the RNeasy FFPE Kit (Qiagen, Valencia, CA). Total RNA was reverse transcribed using the RT2 First Strand Kit (Qiagen) per the manufacturer's instructions. The cDNA-containing reaction mixture was added to each well of a 96-well-plate PCR array for quantitative real-time (RT) PCR (RORC: Th17 for Autoimmunity and Inflammation PCR Array; occludin: cat. no. PPH02571B, RT2 Profiler PCR Array; Qiagen). PCR cycles were performed according to the manufacturer's instructions. Expression levels of cytokine genes were quantified using quantitative RT-PCR analysis based on intercalation of SYBR Green on an ABI 7300 Real-Time PCR system (Life Technologies, Carlsbad, CA). The relative level of mRNA expression for each gene in each sample was normalized to the expression level of the reference gene GAPDH, and the data were analyzed using the ΔΔCt method [22].
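The normalization just described follows the comparative 2^−ΔΔCt method; a minimal, runnable Python sketch of that calculation is given below. The function name and all Ct values are illustrative placeholders, not data from this study.

```python
def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt  = Ct(target gene) - Ct(reference gene), computed per sample
    ΔΔCt = ΔCt(test sample, e.g. NEC) - ΔCt(control sample, e.g. non-NEC)
    fold change = 2^(-ΔΔCt)
    """
    d_ct_sample = ct_target - ct_ref              # e.g., RORC vs. GAPDH, NEC
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # same pair, non-NEC control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example with invented Ct values: the "NEC" sample shows ~10-fold lower RORC.
print(ddct_fold_change(ct_target=28.3, ct_ref=18.0,
                       ct_target_ctrl=25.0, ct_ref_ctrl=18.0))  # ~0.10
```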
Human immunohistochemistry
Immunohistochemistry of IEL in formalin-fixed, paraffin-embedded tissue sections was performed as recently described [23]. Briefly, 5 µm paraffin-embedded sections were cut and placed on charged slides. After epitope retrieval and protein blocking, slides were incubated for 20 minutes with anti-human CD3 (1:125) (DakoCytomation). A streptavidin-biotin detection system was used, followed by application of DAB. The murine Envision+ System, DAB/Peroxidase (DakoCytomation) was employed to produce localized, visible staining. The slides were counterstained with hematoxylin, dehydrated, and cover-slipped.
Intestinal injury model
To induce intestinal injury, we injected 2-week-old C57BL/6J or TCRδ-deficient mice intraperitoneally with 100 µg/kg platelet activating factor (PAF, Sigma-Aldrich, St. Louis, MO) and 1 mg/kg E. coli O128:B12 lipopolysaccharide (LPS, Sigma-Aldrich) as previously reported [24][25][26]. Control animals were injected with PBS vehicle control. Pups were sacrificed two hours later and the distal small intestine was isolated. A portion of the distal small intestine was fixed in 10% formalin (Fisher Scientific, Pittsburgh, PA) for paraffin embedding, sectioning, and hematoxylin and eosin (H&E) staining for intestinal injury severity scoring (see below). The remainder was collected in Trizol (Invitrogen, Grand Island, NY) for RNA isolation and analysis of cytokine gene expression (see below).
Murine IEL isolation and analysis
To examine the ontogeny of γδ IEL, small intestines were harvested from 1-week-old, 2-week-old, 3-week-old, and adult (6-8 weeks old) mice. To examine frequencies of γδ IEL in mice subjected to experimental intestinal injury, small intestines were harvested from 2-week-old mice treated as described above. Intestines were cut longitudinally, rinsed of luminal contents, subsequently cut into 1 cm pieces, and shaken at 250 rpm for 20 min at 37°C in HBSS (Ca/Mg-free) with 5% fetal bovine serum and 2 mM EDTA. The cell suspensions were passed through a 100 µm cell strainer, then through glass wool columns, and centrifuged at 1500 rpm. The cell pellets were resuspended in 45% isotonic Percoll, underlain with 70% Percoll, and centrifuged at 2000 rpm for 25 min. The IEL at the interface of the 45% and 70% Percoll layers were collected and washed for flow cytometric analysis. This technique for IEL isolation has been shown to be valid for both neonatal and adult murine intestines [27,28].
To examine relative frequencies of γδ IEL in wild-type mice subjected to intestinal injury, IEL were isolated from dam-fed wild-type 2-week-old mice or mice subjected to intestinal injury as described above. IEL were subsequently isolated, stained, and flow cytometric analysis was conducted on a BD LSR II (BD Biosciences, Franklin Lakes, NJ). IEL were defined as CD103+, CD3ε+ cells and characterized as γδ IEL if cells were also TCRγδ+ and TCRβ−.
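To make the gating definition concrete, the following hedged Python sketch computes a γδ IEL percentage from per-event boolean marker gates. The event table is randomly generated (the markers are treated as independent, which real data are not), so every value is hypothetical.

```python
import numpy as np

# Hypothetical per-cell event table: one boolean column per marker gate.
rng = np.random.default_rng(0)
n = 10_000
events = {
    "CD103": rng.random(n) < 0.3,
    "CD3":   rng.random(n) < 0.5,
    "TCRgd": rng.random(n) < 0.2,
    "TCRb":  rng.random(n) < 0.6,
}

iel = events["CD103"] & events["CD3"]              # IEL gate: CD103+, CD3+
gd_iel = iel & events["TCRgd"] & ~events["TCRb"]   # γδ IEL: TCRγδ+, TCRβ−
pct_gd = 100.0 * gd_iel.sum() / iel.sum()          # % of IEL that are γδ
print(f"{pct_gd:.1f}% of IEL are TCRγδ+ TCRβ−")
```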
Murine mRNA isolation and cytokine gene expression
Distal small intestinal samples were homogenized, and total RNA was isolated and reverse transcribed from random hexamer primers using the QuantiTect Reverse Transcription Kit (Qiagen, Carol Stream, IL). The resulting cDNA products were analyzed by real-time quantitative RT-PCR (iQ SYBR Green Supermix on a MyiQ real-time PCR detection system, Biorad, Hercules, CA) for IL17A, TNFα, and GAPDH mRNA. The relative level of mRNA expression for each gene in each sample was normalized to the expression level of the reference gene GAPDH, and the data were analyzed using the ΔΔCt method [22].
Statistical analysis
Human studies (Vanderbilt). Gene expression and flow cytometry cell type data followed skewed distributions and underwent logarithmic transformation. Data were compared between independent groups using Student's t test. Lamina propria lymphocyte (LPL) and IEL RORC gene expression from the same set of subjects were compared using the paired t test. Associations between TCRγδ IEL and RORC mRNA expression and age parameters were explored using Pearson's correlation coefficient after logarithmic transformation of the skewed variables. The relationship between the proportion of TCRγδ+ IEL and gestational age in non-NEC surgical control samples followed a non-linear distribution. Thus, a model was fitted to a second-order polynomial equation using non-linear regression and plotted with 95% confidence bands. Goodness of fit was evaluated by the R² parameter. The runs test was performed to determine whether the curve deviated systematically from the data.
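As a sketch of the second-order polynomial fit and R² evaluation described above (the runs test and confidence bands are omitted), assuming invented gestational-age and log-transformed γδ IEL values:

```python
import numpy as np

# Hypothetical data: gestational age (weeks) and log10(% γδ IEL).
ga = np.array([24, 26, 28, 30, 32, 34, 36, 38, 40], dtype=float)
log_gd = np.array([1.5, 1.3, 1.1, 1.0, 0.9, 1.0, 1.1, 1.3, 1.5])

coeffs = np.polyfit(ga, log_gd, deg=2)       # second-order polynomial fit
fitted = np.polyval(coeffs, ga)

ss_res = np.sum((log_gd - fitted) ** 2)      # residual sum of squares
ss_tot = np.sum((log_gd - log_gd.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                   # goodness of fit (R²)
print(coeffs, r2)                            # U-shape: positive leading term
```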
Animal studies (Emory). Data are reported as mean ± standard error of the mean (SEM). Statistical differences were determined by one-way analysis of variance (ANOVA) or Student's t test as appropriate. A p<0.05 was considered significant.
Surgical ileal mucosa from NEC patients was marked by decreased proportions of total IEL and TCRγδ IEL compared to non-NEC surgical controls
To determine whether γδ IEL may play a protective role against intestinal injury in the premature human intestine, we studied the development, phenotype, and distribution of these cells in relationship to total viable CD3+ CD8+ T cells in surgical ileum samples. We prospectively isolated IEL from fresh tissue obtained through medically indicated surgical resection from 11 NEC and 30 non-NEC patients. All tissue sections were ileum and were from infants of comparable gestational age (GA) (p = 0.330), age (p = 0.487), postmenstrual age (PMA) (p = 0.065), and sex distribution (p = 0.484) (Table 1). Non-NEC cases included resections for reanastomoses for various surgical indications (16), congenital intestinal obstruction (7), spontaneous (focal) intestinal perforation (5), and tissue from stricture removal after medical NEC (2) (Table 2). Median mucosa weights for NEC and non-NEC tissues were similar (310 mg and 370 mg, respectively, p = 0.478).
We compared the proportions of total IEL and γδ IEL as demonstrated in Figure 1A. Using flow cytometry, we defined IEL as live CD3+ CD8+ CD103+ lymphocytes and characterized them as γδ IEL if cells were also TCRαβ− and TCRγδ+. Compared to non-NEC surgical controls, NEC samples exhibited significantly lower numbers of total IEL (mean 124 versus 2,342 cells per tissue section, p<0.01). Because NEC is associated with necrosis and loss of intestinal epithelium, likely explaining the reduction in total IEL, we calculated percentages of IEL based on total CD3+ CD8+ cells isolated in tissue epithelium preparations. The mean fraction of IEL within epithelial CD3+ CD8+ cells in non-NEC surgical controls was 64% compared to 23% in NEC (Figure 1B, p<0.001). Within the IEL compartment of the control group, a sizable proportion of cells were γδ IEL (mean 27%), which was significantly decreased in NEC patients (mean 15%) (Figure 1C, p = 0.02). Therefore, surgical NEC was characterized by a preferential reduction in γδ IEL over αβ IEL.
We considered the possibility of sample contamination from conventional lymphocytes in the lamina propria. We performed flow cytometry analysis on the remaining lamina propria tissue (LPL) following IEL preparation and did not detect any CD103+ TCRγδ+ cells, supporting the purity of the IEL and LPL preparations. In addition, the mean total number of viable CD3+ cells isolated from the epithelium of NEC samples was 50% of the number identified in non-NEC samples (5,128 vs. 10,228 cells, p = 0.189), suggesting that the reduced IEL fraction in NEC is not explainable by a significant influx of CD3+ cells from other compartments.
γδ IEL are the predominant IEL subtype in the immature murine and human small intestine
Since NEC predominantly affects preterm infants, we examined whether γδ IEL are developmentally regulated in the preterm intestine. We examined the relationship between γδ IEL proportions and gestational age, postmenstrual age, and age. We did not observe a clear association between γδ IEL proportions and postmenstrual age or postnatal age, suggesting that even the most premature infants contain significant fractions of natural γδ IEL at birth [33] (Figure 2). Interestingly, the relationship between γδ IEL proportions and gestational age in non-NEC surgical control samples followed a U-shaped distribution as determined by non-linear regression. This model accounted for 37% of the variance of the data (R² = 0.37). The observed data did not deviate significantly from the model curve as determined by the runs test (p = 0.31). This distribution suggests a possible window of vulnerability for NEC across gestation (Figure 3).
Young mice are frequently used for NEC-like injury models, and correlating the maturity of the mucosal immune system between neonatal mice and humans is complex [33]. In addition, the human data on postnatal development may have been skewed, as neonatal intestinal tissue samples cannot be obtained from healthy neonates. Therefore, we isolated epithelial-associated immune cells from the small intestines of wild-type neonatal mice, ages 1 week to adult (Figure 4A). γδ IEL were the predominant IEL subtype in younger mice (73% in 1-week-old mice versus 59% in adult mice, p<0.05), with their frequency approaching adult levels by 3 weeks of life (60%, p<0.05 vs. 1-week-old) (Figure 4B).
Intestinal injury in wild-type mice is not associated with a selective reduction in γδ IEL
For ethical reasons, it is not possible to determine definitively whether the selective reduction of γδ IEL in human NEC occurred prior to or as a result of intestinal injury. Therefore, we sought to determine whether experimental intestinal injury in a murine model causes a selective reduction in γδ IEL. To induce intestinal injury, we injected 2-week-old C57BL/6J or tcrδ−/− mice intraperitoneally with 100 µg/kg PAF and 1 mg/kg E. coli O128:B12 LPS or PBS vehicle control as described above. Pups were sacrificed two hours later, and small intestinal epithelial-associated immune cells were isolated as stated above. We detected no differences in percentages of γδ IEL between control mice and those subjected to experimental intestinal injury (Figure 5). These data suggest that the selective reduction in γδ IEL associated with human NEC is not a secondary finding following injury but may indicate a specific risk factor.
Significant reduction in RORC expression in NEC tissue correlates with reduction of IEL
TCRγδ cells have been attributed an important role in innate mucosal immune responses, partially mediated through the production of IL17 [35,36]. TCRγδ IEL have been specifically shown to produce IL17 under inflammatory conditions [37,38].
Figure 1. Gating strategy and IEL proportions. (A) Gates were set on ''live'', CD14−, CD19− (''dump''-negative) and CD3+ cells before applying sub-population gates. Next we identified CD3+ CD8+ T cells, followed by differentiating conventional CD3+ CD103+ TCRαβ IEL from TCRγδ IEL (γδ IEL). The patient with NEC showed a significant reduction in γδ IEL with a correspondingly greater proportion of αE integrin (CD103)-negative, conventional T cells. Dot plots of total IEL (B) and γδ IEL (C) proportions, which were statistically significantly reduced in NEC tissue compared to non-NEC controls, p<0.001 and p = 0.02, respectively. (doi:10.1371/journal.pone.0099042.g001)
Figure 2. Developmental regulation of γδ IEL subsets in humans. Logarithmically transformed percentages of γδ IEL were plotted against gestational age (GA), postmenstrual age (PMA = gestational age plus chronological age), and age. Using Pearson's correlation coefficient, we did not detect any association of γδ IEL proportions with GA, PMA, or age in either NEC or non-NEC control patients. (doi:10.1371/journal.pone.0099042.g002)
To determine whether a similar mechanism may play a role in the human neonatal gut, we measured the gene expression of retinoic acid-related orphan nuclear hormone receptor C (RORC) in the small intestinal mucosa of 15 NEC patients compared to 7 surgical controls. Human RORC is an analogue of the murine retinoid orphan receptor γt (RORγt), which drives expression of IL17 in γδ IEL [36]. Since expression of IL17 is dependent on cell stimulation and IEL numbers were too low to isolate sufficient cells for stimulation assays, we used RORC gene expression as a correlate for IL17 production [39]. By quantitative RT-PCR, RORC gene expression in NEC samples was reduced by a median of 10-fold (p<0.001, Figure 6A). Next, we sought to determine if the reduction of RORC expression in NEC could be explained by loss of γδ IEL. We measured RORC gene expression in LPL and IEL isolated from identical tissue sections from non-NEC controls. RORC gene expression was significantly higher in IEL compared to LPL (p = 0.01, Figure 6B). In addition, we found a statistically significant positive correlation between total TCRγδ+ IEL proportions and RORC gene expression (Pearson R² = 0.41, p = 0.02; Figure 6C). Cumulatively, these data suggest that loss of γδ IEL in NEC may limit intestinal barrier defense through decreased production of IL17.
Intestinal injury in TCRδ-deficient mice is associated with increased TNFα but decreased IL17A gene expression
To investigate the role of γδ IEL in mucosal homeostasis and the cytokine response, we measured mRNA expression of intestinal TNFα and IL17A in mice lacking γδ IEL and exposed to experimental gut injury as described above. At baseline, there was no difference in the histologic appearance of control dam-fed wild-type or tcrδ−/− mice (Figure 7A). When subjected to experimental gut injury, tcrδ−/− mice were found to have significantly worse disease scores compared to wild-type mice (2.1±0.1 versus 2.5±0.1, p<0.05) (Figure 7B). tcrδ−/− mice also exhibited an increased incidence of injury (defined as severity scores >2) when compared to wild-type mice (59% vs. 29%). Intestinal TNFα and IL17A mRNA expression was low in the steady state. In response to PAF-induced epithelial injury, intestinal mRNA expression of both TNFα and IL17A increased in wild-type mice. Interestingly, TCRδ-deficient mice demonstrated significantly reduced expression of IL17A (7-fold versus 22-fold induction in IL17A expression, p<0.05) (Figure 8). These data suggest that epithelial injury may induce TCRγδ T cells to express IL-17 in order to protect the intestinal barrier.
Occludin gene expression is decreased in NEC tissue
Occludin forms rings at sites of γδ IEL/epithelial contact and promotes γδ IEL migration into epithelial monolayers [40]. Enterocytes internalized occludin in experimental NEC, but expression in human NEC was unchanged in the small intestine by immunohistochemistry [16]. We sought to determine occludin expression in human NEC tissue to test the possibility that reduced expression may inhibit migration of γδ IEL into the intraepithelial compartment. We found a statistically significant reduction in occludin gene expression by quantitative RT-PCR in 16 NEC tissue sections compared to 13 controls (p<0.0001, Figure 9).
Figure 5. Intestinal injury in wild-type mice is not associated with a selective reduction in γδ IEL. A) Flow cytometry of small intestinal intraepithelial cells isolated from 2-week-old dam-fed wild-type (WT control, n = 2) or wild-type mice subjected to experimental gut injury as described (WT + PAF/LPS, n = 2). C57BL/6J mice were stained for CD103, CD3, CD8a, TCRγδ, and TCRβ as described above. Intraepithelial cells were pre-gated on CD103+, CD3+ to depict IEL and then further gated on TCRγδ and TCRβ as shown. B) Percent (mean ±SE) γδ IEL (defined as percent of total IEL that were TCRγδ+, TCRβ− IEL). Data are representative of 3 independent experiments (NS indicates no statistical difference between groups). (doi:10.1371/journal.pone.0099042.g005)
Discussion
Although the exact biological function of γδ IEL remains elusive, these cells reportedly play an important role in innate mucosal immune responses by preventing invasion of pathogenic bacteria [41], partially mediated through the production of IL17 [35,36]. In addition, γδ IEL maintain epithelial barrier function through production of keratinocyte growth factor in mice [15,42] and protect from dextran sodium sulfate (DSS)-induced colitis [11,43]. Furthermore, γδ IEL appear to be critical for immune homeostasis [44,45]. Since epithelial barrier disruption, invasion of pathogenic bacteria, and exaggerated inflammation are key contributors to the development of NEC in the preterm infant [46], we sought to determine the developmental regulation of γδ IEL in the small intestinal mucosa of preterm infants and their possible role in NEC pathogenesis. We demonstrate here for the first time an abundance of γδ IEL in the preterm gut but also a statistically significant reduction in acute NEC. Different subtypes of γδ IEL exist [34]; however, we focused on CD8+ γδ IEL because of their dominance in the small intestine [47]. The loss of CD8+ γδ IEL in NEC could represent a disproportionate lack of immunoregulatory IEL, which may be critical in a phase of precipitously increasing antigen exposure [10].
We do not know the reason for the reduced IEL proportions in NEC. We considered the possibility that the reduction of IEL may be due to loss of epithelium through tissue necrosis. However, as shown in Figure S1, the analyzed NEC tissue contained epithelium and IEL, although in lower numbers compared to non-NEC controls. We controlled for NEC-associated epithelium loss by calculating the fraction of IEL within the total number of epithelial CD3+ CD8+ cells. In addition, the preferential reduction of γδ IEL compared to αβ IEL cannot be explained by an absence of enterocytes.
We contemplated the possibility of contamination from conventional lymphocytes in the lamina propria. We think this is unlikely since our protocol effectively separates IEL and LPL cells, as previously published and shown in Figure 5B [21]. To further confirm the purity of the IEL populations, we performed flow cytometry analysis on the remaining tissue (LPL) following IEL preparation and did not detect any CD103+ TCRγδ+ cells. We have previously described an increase in non-regulatory T cells in NEC lamina propria [21], and therefore it is possible that the reduction in IEL proportions in NEC is due to additional T cells entering the epithelium. However, as described above, non-NEC samples contained twice as many epithelial T cells as NEC samples, making skewing of the data by contaminating cells unlikely. In addition, an influx of CD3+ cells in NEC would not explain the specific reduction in the γδ IEL fraction.
Figure 7. γδ T cells reduce experimental gut injury. A) Representative H&E staining of distal small intestines isolated from dam-fed wild-type (1) or tcrδ−/− (3) mice with normal histologic appearance; or wild-type (2) or tcrδ−/− (4) mice subjected to experimental gut injury (PAF/LPS) as described (scale marker = 100 µm). Note shortened villi and epithelial sloughing with inflammatory infiltrate in wild-type PAF/LPS mice (2) and submucosal edema with severe villous sloughing in tcrδ−/− PAF/LPS mice (4). B) Histologic severity score (mean ±SE) of distal small intestinal sections obtained from dam-fed wild-type (WT control) or tcrδ−/− (tcrδ−/− control) mice; or wild-type (WT PAF/LPS) or tcrδ−/− (tcrδ−/− PAF/LPS) mice subjected to experimental gut injury as described. Data are representative of 4 independent experiments with at least 3 mice per condition per experiment (*p<0.05). (doi:10.1371/journal.pone.0099042.g007)
We wondered if the immature mucosal immune system contributed to the reduced γδ IEL proportions in the small intestine of patients with NEC. While an inverse relationship between the number of villus IEL and increasing age has been reported in adults [48], the postnatal developmental regulation of γδ IEL in preterm infants was unknown. We found robust proportions of γδ IEL early in life, even at extreme prematurity. In addition, we defined the postnatal development of γδ IEL in human non-NEC infants, showing a U-shaped distribution in the last trimester (Figure 3). TCRγδ IEL may be initially recruited to the immature gut as the predominant IEL subtype in order to protect against potential injury at a time when the gut barrier is immature and exposure to new bacterial antigens is rapidly growing [49].
One potential mechanism for the reduced γδ IEL fraction in preterm infants at risk for NEC may be in utero exposure to inflammation. Histological chorioamnionitis with fetal involvement has been considered a possible risk factor for NEC [50], and inflammation associated with this pregnancy complication may lead to occludin endocytosis and therefore reduced migration of γδ IEL into the intraepithelial compartment [39]. Occludin internalization has been reported in experimental NEC [16], and we show that small intestinal occludin gene expression was significantly decreased in NEC tissue compared to non-NEC controls. We consider chorioamnionitis a more likely candidate for γδ IEL reduction than inflammation associated with NEC because our control group included infants with conditions that involved intestinal perforation with a significant inflammatory response.
Homing and/or retention of lymphocytes in the intestinal epithelium is maintained by expression of integrin αEβ7, which is regulated by TGFβ signaling [51,52]. We recently discovered overexpression of its negative regulator Smad7 in NEC tissue [53]. Inhibited TGFβ signaling reduces expression of integrin αE (CD103), which in conjunction with integrin β7 forms a complete heterodimeric integrin molecule that is thought to mediate retention of IEL in the epithelium [54]. Downregulation of TGFβ may also play a role in reduced expression of RORC [55] and enhanced T cell-mediated inflammation in NEC tissue [21,56].
NEC occurs only in a subgroup of preterm infants, and its risk is increased by lack of breast milk feeding and a microbiome with decreased diversity [6,46,57,58]. Expansion of intestinal γδ IEL in mice depends on bacterial interaction [36], and the altered microbiome in NEC may contribute to underdevelopment of γδ IEL. Dietary natural aryl hydrocarbon receptor (AhR) ligands are critical for normal intestinal immune development [59] and postnatal maintenance of IEL [60]. Lack of AhR signaling has been implicated in the pathogenesis of inflammatory bowel disease [61]. The role of AhR ligands in maintaining γδ IEL in preterm infants is unknown and should be explored in future studies. In conclusion, we demonstrate for the first time the postnatal development of γδ IEL in the premature intestine and thereby contribute to the understudied area of human neonatal mucosal immune development [62]. We further show that the normally enriched fraction of γδ IEL in the ileum of premature infants is significantly reduced in surgical NEC. Complementary animal and human data suggest a potentially important role of γδ IEL in IL17 production and intestinal barrier protection. Ways to recruit and maintain this likely important T cell population in the preterm gut could serve as a novel strategy to reduce or prevent NEC and other intestinal complications originating early in life.
"year": 2014,
"sha1": "c9e65235eda563ca5be40557ae0178912f10dd1e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0099042&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d76d4af23afb3140b0ca5487dd2a88110a60e764",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259383714 | pes2o/s2orc | v3-fos-license | Association of prenatal obesity and cord blood cytokine levels with allergic diseases in children: A 10-year follow-up cohort study
Background and aim Although studies have associated elevated prenatal obesity with increased risk of various diseases in offspring, little is known regarding the immune system. The aim of this study was to evaluate the relationship between prenatal obesity and levels of cytokines in umbilical cord blood and the development of allergic disease during the first 10 years of life in offspring. Methods A cohort of term infants born at the ShaoXing Women and Children Hospitals in China in 2011 was enrolled in this study. Flow cytometry was performed to measure levels of various cord blood cytokines, namely IL1β, IL2, IL10, IL6, IL8, IL17, IL12, TNF-α, and IFN-γ. Next, logistic regression was used to explore the association of prenatal BMI with the development of allergic disease. The relationship of levels of each cord blood cytokine with prenatal BMI and with allergic disease development was tested using linear and logistic regression analyses, respectively. Results After 10 years of follow-up, higher prenatal BMI was significantly associated with development of allergic disease in children (HR = 2.45, 95% CI: 1.08–5.57, P = 0.033). We also adjusted for maternal age, education, and infant gender, and found that prenatal BMI was significantly associated with higher levels of IL12 (P = 0.023) and IL1β (P = 0.049) in cord blood. Moreover, we adjusted for maternal age, education, allergic dermatitis, gestational age, and infant gender, and found that each one-unit (1.26 pg/ml) increase in IL17 was associated with a 55.5% higher risk of allergic disease in 10-year-old children (HR = 1.55, 95% CI: 0.99–2.45, P = 0.056). Meanwhile, after adjusting for maternal age, education level, gestational age, prenatal BMI, gestational weight gain, infant gender, and birthweight, we found that for every unit increase in IL10, IL6, and IL1β, the risk of overweight/obesity in children after 10-year follow-up increased by 18.7% (HR = 1.19, 95% CI: 1.01–1.40, P = 0.042), 13.9% (HR = 1.14, 95% CI: 1.02–1.27, P = 0.021), and 41.3% (HR = 1.41, 95% CI: 1.02–1.95, P = 0.036), respectively. Conclusions Prenatal obesity was positively correlated with allergic diseases in offspring. Cord blood cytokines may play mediating roles in the associations of prenatal obesity with offspring allergic diseases.
Introduction
Obesity is a current global health problem that is becoming more common among pregnant women and children [1,2]. Recent studies have associated obesity with asthma and allergies [3,4], while other investigations have demonstrated that prenatal obesity can cause obesity and changes in immunophenotype in offspring [5][6][7]. These alterations are considered part of the fetal programming (or fetal imprinting) phenomenon. Fetal programming is a series of adaptive mechanisms incited by stimuli acting during critical periods of growth and development that affect local fetal cellular environments through changes in gene expression, with profound and permanent consequences for tissue structure and function. The ensuing changes can be transmitted from the offspring to the next generation. Studies have shown that obesity can affect several factors, including growth factors, cytokines, and hormones.
Allergic diseases are characterized by a skewed Th1/Th2 balance, away from Th1 and towards allergy-promoting Th2 cells. Numerous studies targeting cord blood mononuclear cells (CBMCs) obtained at birth have demonstrated that newborns who subsequently develop allergy and atopy exhibit markedly low levels of Th1-associated cytokines, such as IFN-γ and IL-12 [8][9][10]. Several studies have quantified levels of cord blood immunoglobulin E (IgE) and phytohemagglutinin (PHA)-stimulated cytokine-response profiles and demonstrated their potential as early predictive markers for allergic diseases [11]. To date, however, no consensus has been reached on the subject, and information regarding the relationship between prenatal BMI, umbilical cord blood cytokines, and the development of allergic diseases in children remains scarce. In this study, we hypothesized that prenatal obesity not only alters cytokine levels in cord blood, but that this alteration also contributes to the development of allergic diseases or obesity in children aged 0-10 years.
Study participants and selection criteria
We recruited mothers and infants in this prospective newborn study. A total of 500 pregnant women (>37 weeks, singleton pregnancy) who visited the ShaoXing Women and Children Hospitals in 2011 were enrolled. The study objectives were explained to the subjects, who later voluntarily signed a written informed consent prior to enrolment. The study was approved by the Ethics Committee of Shaoxing Maternity and Child Health Care Hospital (Approval No. 2018035). Pregnant women were excluded from the study if they met the following criteria: had pre-existing medical conditions such as diabetes mellitus, seizures, and serious psychiatric disorders; drug or alcohol abuse; sexually transmitted diseases; in vitro fertilization and ovulation induction; and preterm premature rupture of membranes. Neonates with respiratory distress syndrome and pathological jaundice after birth were also excluded. Analysis was limited to 169 of 338 enrolled participants in whom cytokines were measured and pregnancy BMI and childhood anthropometric measurements were available. Selection criteria of the enrolled subjects are summarized using a flow chart in Fig. 1.
Data collection
Women were interviewed at enrollment, and information on their education as well as current and previous pregnancies was collected. We also recorded maternal height and weight at enrollment, with the latter assessed based on maternal recall. Maternal pregnancy BMI (kg/m²) was calculated. Data on parity, mode of delivery, pregnancy complications, intrapartum complications, allergic dermatitis, asthma, and smoking were collected. A sterile needle puncture was used to obtain cord blood samples immediately after cord clamping, which typically occurred within 60 s of delivery. Serum was separated from the umbilical cord blood within 6-24 h by centrifuging for 10 min at 300 × g. The serum was immediately stored at −80 °C until further analysis. Levels of IL-2, IL-4, IL-6, IL-8, IL-10, IL-1β, IL-12, IL-17, IFN-γ, and TNF-α in cord blood serum were determined via flow cytometric microsphere (bead) capture technology. The human T helper cell 1/2 cytokine kit II (BD CBA Human Th1/Th2 Cytokine Kit II, BD Biosciences) and a FACSCalibur flow cytometer (BD) were used according to the manufacturer's instructions with minor modifications. Briefly, 96-well filter-bottom plates were first wetted with buffer, and beads conjugated with capture antibodies targeting each cytokine were added. Serum (in duplicate wells) was then added to the wells, followed by serial dilutions of cytokine standards. Plates were incubated at room temperature on a shaker for 2 h, then for another 18 h at 4 °C. Next, the plates were washed on a vacuum manifold, then incubated with a biotin-labeled detector antibody cocktail for 2 h at room temperature on a shaker. The plates were washed again and incubated with streptavidin-PE for 40 min. A final set of washes was performed and the beads resuspended in reading buffer. Samples were acquired on a Luminex MAP200 instrument, with collection criteria set at 100 beads per analyte (2000 beads total). Data were analyzed using MasterPlex software (Hitachi Software Engineering America Ltd., MiraiBio Group). Each infant's height and weight were also measured at every visit. We also extracted each child's status with respect to allergic diseases from medical records and information collected during study visits. Allergic diseases included food allergies, atopic dermatitis, allergic rhinitis, and allergic asthma. Offspring data were compiled annually for 10 years.
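Bead-based multiplex readouts like the one above are converted from fluorescence to concentration using a standard curve built from the serial dilutions; the hedged Python sketch below uses a simple log-log linear interpolation, whereas production pipelines such as MasterPlex typically fit 4- or 5-parameter logistic curves. All values here are made up.

```python
import numpy as np

def concentration_from_mfi(mfi, std_conc, std_mfi):
    """Interpolate an analyte concentration from a bead fluorescence
    reading (MFI) via a log-log linear standard curve fitted to the
    serial-dilution standards."""
    slope, intercept = np.polyfit(np.log10(std_mfi), np.log10(std_conc), 1)
    return 10.0 ** (slope * np.log10(mfi) + intercept)

# Example: 2-fold serial dilutions of a 5000 pg/ml cytokine standard.
std_conc = 5000.0 / 2.0 ** np.arange(8)           # pg/ml
std_mfi = 20000.0 / 2.0 ** np.arange(8) + 50.0    # invented readouts
print(concentration_from_mfi(3000.0, std_conc, std_mfi))  # pg/ml estimate
```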
Statistical analysis
We used the median prenatal BMI (26.36 kg/m²) to stratify the mothers into two groups. Continuous variables were presented as means ± standard deviation (SD), whereas categorical data were presented as frequencies and proportions. Levels of inflammatory factors were log-transformed prior to regression analysis. Comparisons between the L-BMI and H-BMI groups were performed using the t-test or a non-parametric test for continuous variables and with χ² or Fisher's exact tests for categorical variables.
The relationship of prenatal obesity and cord blood cytokine levels with children's allergic diseases was analyzed using a three-step statistical analysis. First, we employed logistic regression to determine the association between prenatal BMI and children's allergic disease; we then applied linear regression models to determine the relationship between maternal prenatal BMI and levels of each cytokine in umbilical cord blood (log-transformed). Next, we employed logistic regression models to evaluate the relationship of the level of each cytokine in umbilical cord blood (log10-transformed) with childhood allergic disease and overweight/obesity. All analyses were performed using IBM SPSS software version 23 and packages implemented in R version 4.0.3. All tests were two-tailed, and P < 0.05 was considered statistically significant.
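A minimal Python sketch of this three-step analysis using statsmodels is shown below; the data frame is simulated and every variable name is our own, so it only illustrates the model structure, not the study's actual code (which used SPSS and R). Exponentiated logistic coefficients correspond to the ratio-type effect estimates reported.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis frame: one row per mother-infant pair (n = 169).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "bmi": rng.normal(26.4, 2.5, 169),
    "il17": rng.lognormal(0.0, 0.5, 169),
    "allergy": rng.binomial(1, 0.2, 169),
    "maternal_age": rng.normal(27, 3, 169),
})
df["log_il17"] = np.log10(df["il17"])

# Step 1: prenatal BMI -> childhood allergic disease (logistic regression).
m1 = sm.Logit(df["allergy"],
              sm.add_constant(df[["bmi", "maternal_age"]])).fit(disp=0)

# Step 2: prenatal BMI -> log-transformed cytokine level (linear regression).
m2 = sm.OLS(df["log_il17"],
            sm.add_constant(df[["bmi", "maternal_age"]])).fit()

# Step 3: cytokine level -> childhood allergic disease (logistic regression).
m3 = sm.Logit(df["allergy"],
              sm.add_constant(df[["log_il17", "maternal_age"]])).fit(disp=0)

print(np.exp(m1.params["bmi"]),      # ratio-type estimate per kg/m² of BMI
      m2.params["bmi"],              # slope β on the log10 cytokine scale
      np.exp(m3.params["log_il17"]))  # ratio-type estimate per unit cytokine
```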
Participant characteristics
A total of 169 subject pairs (a mother and her infant) completed the 10-year follow-up and were included in the final analysis. The mean maternal and gestational ages of the study group at delivery were 27 years and 39 weeks, respectively, with an average weight gain of 15.3 kg during pregnancy. The mean age of the children was 10.3 years. A summary of the demographic and clinical characteristics of mothers and infants is shown in Table 1. The mean prenatal BMI values for the L-BMI and H-BMI groups were 24.43 and 28.81 kg/m², respectively. There were significant differences in gestational weight gain between the L-BMI and H-BMI groups (14.37 kg vs. 16.33 kg, P < 0.05). Notably, subjects in the H-BMI group had significantly more cesarean section deliveries than those in the L-BMI group (P < 0.05). Moreover, subjects in the H-BMI group exhibited higher systolic blood pressure (SBP) (P = 0.073) and more infections during pregnancy (P = 0.099) than their L-BMI counterparts, although the differences were not statistically significant. Similarly, we found no statistically significant differences between the two groups with regard to prenatal characteristics, including parity, gestational age, pregnancy complications, intrapartum complications, allergic dermatitis, asthma, smoking, and maternal education (P > 0.05). Analysis of the infants revealed that half of them were males, with those in the H-BMI group exhibiting significantly higher birthweights than their L-BMI counterparts (P < 0.05).
Relationship between prenatal BMI and development of allergic diseases in children
Profiles of the relationship between prenatal BMI (exposure) and childhood allergic diseases (outcome) are presented in Table 2. In summary, high prenatal BMI was associated with children's allergic disease (HR = 2.45, 95% CI:1.08-5.57, P = 0.033) after 10 years of follow-up.
Prenatal BMI and levels of cytokines in cord blood
The distribution profiles of cytokines in cord blood for the L-BMI and H-BMI groups are presented in Fig. 2. Non-parametric tests showed that subjects in the H-BMI group had higher IL-8 (P = 0.083) and IL-10 (P = 0.029) than their counterparts in the L-BMI group. Associations between prenatal BMI and cord blood cytokine levels, based on linear regression, are presented in Fig. 3A. After adjusting for maternal age, education, and infant gender, prenatal BMI was significantly associated with higher concentrations of IL12 (β = 0.006, P = 0.023) and IL1β (β = 0.006, P = 0.049) in cord blood. In addition, prenatal BMI was marginally associated with higher levels of IL2 (β = 0.003, P = 0.062), TNF (β = 0.005, P = 0.051), and IFN-γ (β = 0.006, P = 0.099). Because the cytokine levels were log10-transformed, these results indicate that for every 1 kg/m² increase in prenatal BMI, IL12, IL1β, IL2, TNF, and IFN-γ increased by approximately 1.39%, 1.39%, 0.68%, 1.16%, and 1.39%, respectively (relative, not absolute, increases).
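Since β is a slope on the log10 scale, the relative change per unit of BMI is 10^β − 1; the short check below reproduces the percentages quoted above.

```python
# β is the linear-regression slope on log10(cytokine) per 1 kg/m² of BMI.
for name, beta in [("IL12", 0.006), ("IL1β", 0.006), ("IL2", 0.003),
                   ("TNF", 0.005), ("IFN-γ", 0.006)]:
    rel_increase = 10.0 ** beta - 1.0   # multiplicative change minus 1
    print(f"{name}: {100 * rel_increase:.2f}% per kg/m²")
# IL12: 1.39%, IL1β: 1.39%, IL2: 0.69%, TNF: 1.16%, IFN-γ: 1.39%
```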
Relationship between cytokine levels in cord blood and development of allergic diseases in children
The associations of cytokine levels in cord blood with childhood allergic diseases and overweight/obesity are presented in Fig. 3B.
Table 2. Association between prenatal BMI (exposure) and childhood allergic disease.
Discussion
This is the first study to demonstrate the cytokine profile in cord blood of obese pregnant women and explore its impact on the development of allergic diseases and obesity in their offspring. Based on our results, we draw the following three conclusions: (1) prenatal obesity is positively related to levels of IL12 and IL1β in cord blood; (2) high prenatal BMI may affect the occurrence of allergic diseases in offspring; and (3) cytokines in umbilical cord blood may mediate the effect of prenatal obesity on the development of childhood allergic diseases.
Studies have shown that overweight before pregnancy or excessive weight gain during pregnancy can affect the growth and development of children, possibly through inflammatory pathways [5,[12][13][14]. Our results showed that prenatal BMI was positively correlated with the development of allergic diseases in offspring, consistent with previous studies which showed that high maternal BMI was linked to postnatal wheeze and eczema [15,16]. In the present study, we provide the first report of the impact of maternal BMI before delivery on the offspring immune system and possible inflammatory-factor pathways. Analysis of cytokine levels in cord blood from 169 mother-infant pairs demonstrated that prenatal BMI was positively correlated with levels of IL12 (P < 0.05) and IL1β (P < 0.05) in cord blood. These results were consistent with findings from previous reports and support the hypothesis that maternal obesity is associated with low-grade chronic systemic inflammation due to higher levels of pro-inflammatory cytokines, such as IL-1β, and may induce placental inflammation [17][18][19]. Studies have also shown that obesity induces IL12 production [20]. However, the specific magnitude of the increase in inflammatory factors may vary slightly among studies [5,7,21], which may be due to several factors such as race or other unknown variables. Nevertheless, it is generally acknowledged that obesity leads to elevated cytokine levels. Levels of maternal cytokines were more skewed towards a Th2 response in cases where the offspring had allergic disease. Previous studies have shown that IL17 not only plays an important role in Th2 differentiation [22], but is also involved in the pathogenesis of allergic skin diseases, extrinsic atopic dermatitis, and asthma [23][24][25]. IL17A might stimulate Th2 cytokine production [26]. Our results showed that each unit increase in IL17 was marginally associated with an increased risk of allergic disease in 10-year-old children, consistent with findings from previous studies, where IL17 produced by Th2 cells was not only highly expressed in sensitized mice but was also associated with food allergy [27,28]. However, only a handful of studies have described the relationship between IL17 levels and the development of allergic diseases in children. To our knowledge, this is the first report to demonstrate the association of maternal cytokine profiles with the development of allergic disease in 10-year-old children.
Our results further showed that every unit increase in IL10, IL6, and IL1β was associated with an 18.7%, 13.9%, and 41.3% increase, respectively, in the risk of overweight/obesity in children after 10 years of follow-up. A previous study showed that IL-6 promoted amino acid transport in the placental system and upregulated fatty acid uptake in human trophoblast cells, thereby enhancing nutrient transport and fetal growth [5,29,30]. Moreover, Lisa et al. [31] demonstrated that Blimp-1-regulated IL-10 secretion by Tregs contributes to white adipose tissue homeostasis.
Limitations
This study has several limitations that should be noted. First, we did not establish a formal mediating effect of the cytokines, so the proposed pathway remains unconfirmed. Second, the sample size of the present study was relatively small; larger studies are needed to confirm the evidence for a mediating effect of cytokines. Third, our follow-up duration was only 10 years; therefore, the long-term effects of high prenatal BMI into the children's adulthood need to be further investigated.
Conclusion
Prenatal BMI was positively correlated with levels of IL12 and IL1β in cord blood. After 10 years of follow-up, levels of IL17 in cord blood were associated with occurrence of allergic diseases. Although we were unable to directly link cytokine levels to maternal obesity and development of allergic diseases in offspring, our results provide new insights into the relationship between cord blood cytokines and maternal obesity and offspring allergic diseases. Further research is needed to validate these findings. | 2023-07-11T00:39:47.226Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "74b96d92a77e6344f9bfe7b8172503d3df7ed8ab",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844023045838/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "448b9f6a3e234d0934f32fa3b745e381c81dc7fa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
246521279 | pes2o/s2orc | v3-fos-license | Mode-Reactive Template-Based Control in Planar Legged Robots
Translating the center of mass (CoM) while fixing the orientation of a rigid body supported by relatively massless, actuated limbs is a common problem setting in legged robotics. This paper proposes a hierarchical approach to such maneuvers that decouples CoM task planning from body orientation control in sagittal-plane models, thereby exposing a well-studied and computationally efficient low-dimensional dynamical system that can be used for CoM task planning. The resulting algorithms directly address the control authority (degree of underactuation) available at a given contact mode, enabling a priori plans with these intuitive, robust pendular dynamics to be formally embedded at the (virtual) CoM of planar floating-torso models, while focusing high-gain posture-stabilizing feedback upon the body orientation. A series of numerical and empirical examples address single- and multi-legged leaping — transitional maneuvers where only a single brief stance mode is available to load energy into the CoM and guide its direction. We compare our hierarchical method to a model-based model-predictive controller in one of the tasks, demonstrating similar performance with a significantly smaller computational footprint.
I. INTRODUCTION
Programming a robot's work with provably correct and physically grounded architecture requires an analytically tractable method of relating abstract behavioral specifications for independent low degree of freedom (DoF) body components to a high degree of freedom, coupled hybrid dynamical system whose control authority (degree of underactuation) varies with the contact mode [1]. For legged robots, achieving such command has been particularly challenging due to the complexity of realizing such abstract task specifications (e.g., ''leap on to the box'') in the face of numerous constraints arising from body design (limited actuator force and power) and variation of environmental affordance (limited traction at available footholds). This paper presents theory and experiments that achieve tunable, reliable, highly energetic maneuvers in the sagittal plane for legged robots whose ground contact modes may severely constrain their actuators' control authority.
Specifically, neglecting leg mass, we decompose the body's three degree of freedom sagittal plane dynamics into a virtual mass center and literal orientation in such a manner that a steady posture can be asserted with as high control authority as the contact mode and actuator endowment affords. The decomposition formally guarantees that whatever actuator affordance remains, if any, can be applied with no cross talk to very well understood models of virtual pendular dynamics such that the body's translational motion can be planned and executed with considerable precision, even in the face of substantial parametric uncertainty. This hierarchical approach to planning transitional maneuvers, with formal sequential composition [2] of appropriate submodules, empirically yields robust and repeatable dynamical ascents and descents on a quadrupedal robot tasked with leaping between otherwise unreachable handholds and footholds.
Fig. 1. Minitaur [3] robot executing a ''monkey bars'' task (cf. Sec. V-A) using the controller in this paper, with a superimposed image showing the model used to generate the stance phase reference dynamics.
A. RELATED WORK
We restrict our focus to a class of planar floating-torso [4]-[6] models with massless, kinematic-singularity-free legs, and sticking toe contacts (detailed definition in Assumption 1). These modeling assumptions place our analysis within the formal hybrid systems framework of [7]. We also restrict attention to ''pitch steady'' locomotion (prioritized control of body orientation), joining an extensive literature [8]-[10], as formalized in Def. 1.
1) WHOLE BODY CONTROL (WBC)
Given sufficient control affordance (enough toe contacts with suitable traction and an adequate number of actuators in poses far from kinematic singularity), whole-body control (WBC), in which specified wrenches are applied to the robot's full rigid body dynamics, can be achieved via feedback linearization (FL) [11], [12]. More recent literature has approached WBC through recourse to quadratic programming (QP), minimizing pointwise-in-time the actuators' torque-affine deviation from the specified wrenches, to avoid the strict inversion of the dynamics required by FL [5], [13], [14]. Although somewhat more computationally expensive, QP naturally incorporates traction and torque constraints while sidestepping numerical issues in non-invertible (underactuated) configurations. Of course, poorly structured or outright infeasible (e.g., underactuated, or slippery) configurations incur large tracking errors consequent upon the mismatch between desired and produced wrenches.
One method of mitigating (but not solving) these problems in small multilegged robots with light limbs is to rapidly change the contact mode by taking a transient step, or simply relying on high frequency gaits [15], [16]. Another approach to mitigating the underactuation issue in WBC is operational space control or null-space control [5], [13], [14], [17], where task requirements are prioritized and addressed according to the affordance at the current configuration.
In general, depending upon the degree of underactuation, WBC methods cannot guarantee task completion when coordinated control of the body is required. Moreover, even in modes and configurations with adequate control affordance, these direct or optimally approximated wrench control methods rely intimately on accurate robot and environment models and may not be sufficiently robust for long term operation in less than perfectly characterized environments.
2) OPTIMAL CONTROL
If the system is not fully actuated, feedback design is still possible for controllable systems, and has been pursued in the legged locomotion literature via optimal control methods such as LQR or SOS [18]. The imposition of an objective function on the state space relieves the burden of specifying WBC target wrenches. Its integration over a specified time horizon can potentially mitigate the limitations of direct WBC by leveraging the improved affordance of favorable contact modes to relax the passage through impoverished configurations while making optimal progress.
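To make the LQR alternative concrete, the following minimal Python sketch computes a stabilizing gain for a hypothetical linearized single-contact pitch model; the model matrices, parameter values, and weights are illustrative assumptions, not values from any system in this paper.

import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearization of pitch dynamics about an upright stance:
# state x = (pitch error, pitch rate), input u = net torque at the contact.
A = np.array([[0.0, 1.0],
              [9.81 / 0.5, 0.0]])  # inverted-pendulum-like instability
B = np.array([[0.0],
              [1.0 / 2.5]])        # input scaled by an assumed inertia

Q = np.diag([10.0, 1.0])           # state penalty
R = np.array([[0.1]])              # input penalty

# Solve the continuous-time algebraic Riccati equation; u = -K x
# stabilizes the linearized model.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain:", K)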
Model predictive control (MPC) is an increasingly popular approach to optimal control that can offer both computational efficacy and formal insight by rewarding reference motion sequences over a specified finite time horizon [19]. MPC has been successfully applied to direct WBC by transforming the control input space from joint torques to toe forces [20], and it has been used to generate reference wrench sequences for subsequent WBC by application to a simplified target dynamical control system [14], [21], [22].
Optimal control methods rely on accurate models due to their iterated appearance in the horizon-long integrated objective function. Hence, poorly modeled environments or imperfectly represented robot properties (inevitable sensorimotor infidelities and, particularly, sensitive model mismatch incurred by difficult-to-measure parameters [23]) can cause failure [24]. Moreover, all are significantly more computationally expensive (even MPC with linearized, approximated dynamics [20]) in comparison to pointwise-in-time WBC methods.
3) FROM REFERENCE TRAJECTORIES TO ANCHORED TEMPLATES
In general, optimal control methods are applied to regulate whole body motion around state space reference trajectories. These reference signals themselves often arise as optimized interpolants constrained to respect some simplified approximant of the complete hybrid dynamics [25], or they may be generated by numerically integrating forward from the current state some similarly simplified reference dynamical system [26], [27]. However, in all these approaches, by dint of being fixed ahead of time, the models used cannot adapt to unforeseen contact events.
This paper takes its place in a long tradition of dynamical model reduction associated with biological observations of low degree of freedom ''template'' dynamics emerging from complicated high degree of freedom animal bodies that ''anchor'' the simpler behavior [28]. Their empirical validity and utility in the control of robotic systems has been observed in numerous instances [29]-[31]. A complete formal account of hierarchical composition for classical dynamical systems can be found in [32], while its formal extension to the hybrid setting remains a work in progress [33]. In this paper, we use the terms ''template'' (reduced-order model that generates the reference dynamics) and ''anchoring'' (its embedding via WBC) as they are defined in Appendix A. We use the term ''template planning'' to refer to the selection of reference dynamics whose controlled ''anchoring'' wrenches in the whole body dynamics will be integrated by the robot's physics to directly expose the desired motions in real time.
From the perspective of robot locomotion, our work follows a large body of prior walking control literature utilizing LIP (linear inverted pendulum) and ZMP (zero moment point)-based methods [9], [34], in that we prioritize the stabilization of body orientation over the regulation of CoM translation. Specifically, this paper replaces, with an algorithmic prescription of templates, prior work in which the template (e.g. the linear inverted pendulum [35], SLIP [36]) has been selected by intuition and domain expertise. Thus, our controller applies the appropriate anchored template composition in a mode-reactive manner in real time.
B. OUR APPROACH: MODE-REACTIVE TEMPLATES
This paper contributes new theory for sagittal plane control of legged machines, along with empirical demonstrations of its efficacy for planning and executing highly energetic maneuvers requiring multiple hybrid transitions through variously underactuated modes. We introduce a family of pitch-steady anchoring controllers for algorithmically selected pendular templates that govern CoM translational dynamics with as much remaining control authority as contact mechanics and actuator endowment allow. We provide formal convergence guarantees for both the pitch and the CoM subsystems assuming perfect traction. We illustrate the utility of these simply tunable pendular templates for planning transitional maneuvers by application to two different leap sequences requiring careful attention to foothold and handhold placements along the way. We demonstrate the resulting closed loop hybrid systems by implementing them on the Minitaur quadruped [3], [15], depicted in Fig. 1, first in their idealized ''pinned toe'' form and second, in a ''traction-aware'' version, using a pointwise-in-time QP to relax that naive anchoring into a WBC which is feasible relative to the available model of substrate coulomb friction. Both these empirically demonstrated versions are computationally efficient enough to run in real time at 1 kHz control rates on the embedded microcontroller on the robot. The anchorability of the embedded open-loop template controller in this paper (the accuracy of which is sensitive to modeling errors), as well as success in other applications as discussed in Sec. VI, are evidence toward the robustness of our proposed anchoring scheme.
We also implement (in simulation) an MPC-based WBC for one of the tasks as a representative state-of-the-art alternative, for comparison and to illustrate the benefits of our modular hierarchical controller composition. Our numerical experience is that such a conventional approach will often fail in this highly energetic underactuated setting if the reference trajectory is chosen naively. Hence, we accord this MPC-based anchoring the benefit of our algorithmically chosen mode-reactive template as its infinitesimal reference trajectory generator. In other words, we present the ''best case'' alternative comparison performance by explicitly accounting for underactuation issues in a manner that has not been reported in the prior WBC approaches described above. As we report in our results (Sec. IV-B2.b), the state-of-the-art MPC-based WBC alternative yields performance similar to our proposed hierarchical anchoring but incurs a significantly more burdensome computational footprint.
In sum, this paper presents for the first time a direct correspondence between arbitrary configurations of a class of sagittal plane locomotion models and dynamical template models that can accurately capture the available affordance as well as be utilized for computationally tractable template planning. The formal correspondence ensures that task execution plans created with these simplified models can be effectively anchored into the floating torso, and the simplicity of the models ensures that computationally-constrained legged robots can execute dynamically challenging behaviors, as we demonstrate experimentally. Moreover, since these template models resemble well-studied dynamical systems like point particles and inverted pendula, template controllers leveraging momentum- or energy-based methods can be utilized directly (as we show in our empirical demonstrations). Lastly, the combinatorial explosion in the number of dynamics modes that need to be considered has motivated ''contact-implicit'' trajectory optimization techniques [26], and we hope that the computational tractability of the presented approach can pave the way for an online reactive analogue.
In Sec. II, we introduce a general class of floating-torso locomotion models for consideration, and proceed to then subdivide this class according to the type of anchoring the differing control affordances allow (Fig. 2, Prop. 2). We discuss the resulting behavior under these types of anchoring in Sec. III and summarize our results in Fig. 2C. In Sec. IV, we demonstrate template-based control of dynamic leaping behaviors on a simulated monoped with offset torso, and the physical Minitaur robot.
II. MODELING A CLASS OF PLANAR MECHANISMS
We introduce a general class of planar models (depicted in Fig. 2B) to which our analytical results apply. The joint configuration θ = (θ_1, . . . , θ_k) includes all the limbs, where θ_j is the configuration of limb j. While normally we would expect a subset of legs to be in contact, the focus of this paper is on the stance dynamics of a single contact mode, and so without making any assumptions about k ∈ ℕ (i.e., it could be one or greater), we assume all k contacts are active.
We use the notation D x f (x) to indicate the Jacobian (matrix of partial derivatives of the function f evaluated at state x), and sometimes omit the first subscript if it is the only argument for the function. To disambiguate, where possible, we use bold lowercase symbols to denote vectors, lowercase symbols to denote scalars, and uppercase symbols to denote matrices.
Assumption 1 (Floating Torso Model): The model has 1) a single massive rigid body, and all other links are massless; 2) no kinematic singularity (the Dg_j are full rank); 3) all contacts are sticking contacts; 4) the body orientation φ ∈ S¹ is a cyclic variable in the Lagrangian, i.e. ∂L/∂φ = 0, though it has kinetic energy, ∂L/∂φ̇ ≠ 0. The physical interpretation is that the body has its mass distributed uniformly, so that there is no net moment due to gravity about the CoM. Based on Assumption 1.2, since each Dg_j(θ_j) is full rank, each θ_j ∈ Θ_j has dimension at least 2, and could be composed of revolute or prismatic joints. All of these conditions hold for (among others) the models depicted in Fig. 2B. For the physical robots shown, the massless leg assumption (Assumption 1.1) relies on published evidence [37]-[39] and our conjecture of its effectiveness; the sticking contact assumption (Assumption 1.3) relies on the specific operating conditions.
The full configuration space includes joints and body coordinates as in [40], such that the configuration space is the product of, respectively, joint and body configurations, (∏_j Θ_j) × SE(2). Let n denote the total number of joints (adding up sizes for each θ_j), and let m denote the total number of actuated joints.
A. BACKGROUND: FLOATING TORSO KINEMATICS AND DYNAMICS
Let R : S¹ → ℝ^{2×2} be a function that maps an angle on to a rotation matrix. Each toe creates a contact constraint (1), written in the (inertial) world frame, where a_j remains constant during stance (Assumption 1.3), and we denote by a(q) a stacked version with all the contacts j ∈ {1, . . . , k} in the contact set. Differentiating (1) yields the contact Jacobian (2), in terms of which we define the matrices A_θ and A_x of (3). For a single massive rigid body with massless legs (Assumption 1.1), the unconstrained dynamics can be derived using a simple Lagrangian, where the potential terms γ include gravity and compliance.
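The display equations for (1)-(3) did not survive extraction. A plausible reconstruction, consistent with the hip offsets d_j used in Sec. III-A and the blocks A_θ, A_x used later, is sketched below in LaTeX; the primed tags indicate that this is our reading, not necessarily the paper's exact statement:

\begin{align*}
a_j &= p + R(\varphi)\,\bigl(d_j + g_j(\theta_j)\bigr), &&\text{(1$'$, toe $j$ pinned)}\\
0 = \dot a_j &= R(\varphi)\,Dg_j(\theta_j)\,\dot\theta_j + \dot p + \dot\varphi\, J R(\varphi)\,\bigl(d_j + g_j(\theta_j)\bigr), &&\text{(2$'$)}\\
A(q)\,\dot q &= 0, \qquad A(q) = \begin{bmatrix} A_\theta & A_x \end{bmatrix}, &&\text{(3$'$)}
\end{align*}

with J denoting the quarter-turn rotation matrix used in Sec. IV.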
In the planar setting (unlike the spatial setting), the ℝ^{(n+3)×(n+3)} unconstrained inertia tensor is constant and so there is no Coriolis matrix. The dynamics can be derived using a constrained Lagrangian as in [40], where the upper n rows of the inertia tensor corresponding to the massless legs are zero, and the lower diagonal 3 × 3 block (corresponding to the ''body'' DoFs x) is given in (4); the dimensions of the various matrices follow from these definitions. The columns of A_x correspond to the SE(2) body configuration. In (5), B represents the mapping of actuator torques to the generalized coordinates, stacking the contribution from each limb. We observe in (5) that Assumption 1.1 allows us to impose a sort of ''decoupling'' of the reaction forces λ from the dynamical effects (terms dependent on q and q̇).
B. CoM AND PITCH DYNAMICS
Based on Assumption 1.2, A_θ in (3) is full rank, and so we can solve for λ from the top row of (5). From (3), we can find a left pseudo-inverse of A_θ^T, given in (8), where Dg_j^† is a standard pseudo-inverse; applying this to (5) yields (9). In case Dg_j is square (the limb kinematics are not redundant), (9) simply contains the diagonal blocks of A_θ^{†T} horizontally stacked. The last row of (9) picks up the last row of A_x^T from (3), which can be simplified further by cancellations of R and Dg: define the pitch affordance vector c_φ(p) as in (10). We emphasize that when the contacts are active (and the contact locations a_j are fixed), c_j (the vector connecting the toe location to the CoM) does not vary with φ, but rather only with p, a fact we shall exploit in Prop. 1. Using (3) and (1), we see that the j-th block column of the lower row of A_x^T can be written explicitly; using this in (10) together with (9), we arrive at the reduced dynamics (11), where Ī := [I ⋯ I] (horizontally stacked for each leg).
III. INPUT-DECOUPLED ANCHORING: CoM TEMPLATES
In currently-practiced template-based control, the target reference dynamics are typically corrupted by high-gain anchoring forces during transient operation down to the attracting submanifold. These perturbations make it particularly difficult to directly deploy template controllers, especially in the context of the energetic non-steady transitional maneuvers targeted in this paper. To address this issue, we propose a new type of anchoring, where we explicitly develop (Prop. 1) a template coordinate change for isolating the template (task-related) dynamics (both on T as well as along the anchoring transients down to it) from perturbations due to the (potentially high-gain) control effort required to anchor it:
Definition 1 (Input-Decoupled Anchoring): Controlled anchoring where the anchoring forces [44, Appendix A] do not appear in the reduced dynamics.
In addition to putting forward a procedure for input-decoupled anchoring, we provide a closed-form expression for the reduced (restriction or zero) dynamics on T that has not been possible before in the literature other than in isolated cases [45]-[47] (in the first two examples of which, attraction down to the template submanifold was also not guaranteed). These advances in concert make it possible to attach some guarantees of success to template-based controllers anchored on floating-torso bodies. To underscore the value of exposing an uncorrupted template model to the higher level task in this manner, we empirically demonstrate the input-decoupled anchoring of non-asymptotically stable templates in transitional leaping tasks (such as in Fig. 1). These targeted Hamiltonian systems are particularly sensitive to perturbations generated by the anchoring process.
We exclusively examine (the ubiquitous set of) tasks prioritizing orientation stabilization. Definition 1 (Pitch-steady behavior): In the behavior, the body pitch stably tracks φ to φ* (desired body orientation). Specifically, the closed loop dynamics admit the orientation error function ζ(φ) → 0 as a LaSalle function. The anchoring posture that embeds the template is the submanifold T of the state space. We first define the ''virtual leg'' coordinate projection from the coordinates of the physical system (p, the physical CoM, and φ, the orientation) into r ∈ ℝ² as (12), where h(p, φ), a correction term, is defined abstractly in (37) and constructed by successive approximants given in (41). This correction reduces to the literal mass-center projection (p, φ) → p on the pitch-steady submanifold, T (Def. 1). In the text below, we use the term ''virtual CoM'' in world-frame to refer to r + (1/k) Σ_j a_j(q) = h(p, φ). This expression exhibits the template coordinates as describing the configuration of the classical notion of a ''virtual leg'' joining the torso's mass center to the centroid of the toe contact locations [29], while corrupted by correction terms that disappear along with the orientation error. Before we construct h (in the proof of Prop. 1 and Appendix B), we need some additional definitions and computations. Note that the conventional usage of the (·)† notation would dictate that x† be the pseudoinverse of x, but in this case we define c_φ† as stated here to avoid repeating the cumbersome x^{T†} notation.
are decoupled from u_φ. The specific form of E_T (which affects the virtual CoM affordance) depends on the system, and is explored in Prop. 2. The remainder term in (13) asymptotically approaches zero with the orientation error, and is detailed further in Appendix B.
Proof: We propose a control strategy that recruits a single dimension of τ as the anchoring (pitch-steadying) torque for orientation control, leaving the remaining inputs free, as expressed in (14). In (14), we used the condition e_φ^T E_T = 0, and we remind the reader that ζ in (15) is the orientation error as defined in Definition 1.
Note that as long as c_φ(p) ≠ 0, this is well-defined. The (rightmost) cancellation term in (15) is only required for cases where there is joint compliance (G_θ ≠ 0) in order to drive the orientation error to 0 (as we will see in the next subsection). When omitted, the Lyapunov argument in (18) shows that the orientation error will be driven down to a ball outside which the quadratic error K_d φ̇² dominates the noise due to the G_θ term. We leave u_T as a free input for now, and define it in Prop. 2, where we use it for template control.
We utilize a fast (high gain) pitch control strategy of the form (15) for anchoring the pitch-steady behavior. Note that the last row of (11) contains no gravity-like terms, due to Assumption 1.4. Substituting (14) into that row, we can see that the orientation dynamics have been decoupled from u_T. Using (15), the closed loop dynamics take the form (16). Using a quadratic Lyapunov function, as in (17)-(18), this shows that the T submanifold of the state space (Def. 1) is not only attracting, but also invariant (since the closed-loop orientation dynamics (16) are decoupled from the other dynamics). This satisfies property 2a. Lemma 1 (stated and proved in Appendix B) reveals that the first few terms of r (12) are given by (19), where, due to our definition of ζ (Def. 1), O(∇ζ²) = O(ζ), and the remainder terms are observed to disappear with the orientation error ζ(φ), satisfying property 2b. Lemma 1 also shows that we get the template restriction dynamics (13), and that they are appropriately decoupled from u_φ, as claimed in property 2c. For purposes of comparison, we also briefly describe a naive non-input-decoupled anchoring procedure, and its relation to input-decoupled anchoring, in Appendix C.
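As a concrete illustration of the strategy in this proof, the following Python sketch assembles joint torques from the decomposition (14) with a PD pitch-steadying law standing in for (15); the gains, the normalization by ||c_phi||^2, the omission of the compliance-cancellation term, and the assumed form tau = A_theta^T (c_phi u_phi + E_T u_T) are our simplifying assumptions.

import numpy as np

def anchoring_torque(A_theta, c_phi, E_T, phi, phi_dot, phi_star, u_T,
                     k_p=400.0, k_d=40.0):
    # Orientation error (a stand-in for the zeta of Def. 1).
    zeta = phi - phi_star
    # High-gain pitch-steadying input along c_phi; by construction
    # e_phi^T E_T = 0, so u_T does not disturb the orientation dynamics.
    u_phi = (-k_p * zeta - k_d * phi_dot) / max(c_phi @ c_phi, 1e-9)
    # Assumed form of (14): tau = A_theta^T (c_phi * u_phi + E_T @ u_T).
    return A_theta.T @ (c_phi * u_phi + E_T @ u_T)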
Remark 1 (Input-Decoupled-Anchorable System Examples):
To convey an intuitive idea of the conditions for Prop. 1, in Fig. 2 we depict a number of familiar sagittal plane abstractions and existing physical robots to exemplify the variety of systems covered by Prop. 1. For concreteness and without loss of generality (since any kinematic-singularity-free design can be mapped via an appropriate kinematics coordinate change), in this subsection we assume revolute-prismatic (RP) kinematics for the leg, where the notation θ_{ji} denotes joint angle i of leg j. We solve for the pitch affordance vector; the j-th block row of the result simplifies to Dg_j^T J (d_j + g_j) using (1). Using (20), letting d_j = (d_{xj}, d_{zj}) and c_{j2}, s_{j2} be the cosine and sine of the relative leg angle θ_{j2}, we obtain (21).
Consider the special case in (21) where d_j = 0, so that (22) holds, satisfying the condition of Prop. 1. So for a single leg attached at the CoM (d_j = 0), actuation of only the θ_{j2} joint (hip angle) is required for decoupled orientation control. This design corresponds to a version of Raibert's planar hopper [29] with an unactuated compliant leg shank (middle of Fig. 2). Raibert's simple three-part controller [29] made this choice as well. However, when d_j ≠ 0, a decoupled orientation controller needs contributions from the shank extension actuator as well. Our construction (14) can be used to generalize the orientation controller to this case, where Raibert's decoupled control strategy cannot be directly applied [48]. Proposition 2 (Template Behavior): For systems satisfying Prop. 1, the template control signal contribution in (13) can be further decomposed using columns of E_T, e_c ∈ ℝ^{2k}, and the columns of E_f are orthogonal to both e_φ, e_c. The reduced dynamics (13) admits a re-expression of the leading right hand side term as (23), where u_f ∈ ℝ^{m−2} is a ''free'' virtual input that can be utilized for template control. Proof: First, note that from (10), e_φ^T e_c = 0, i.e. e_c satisfies the condition to be in the column span of e_φ^⊥, as required by the condition in Prop. 2. Next, the first term on the right hand side of (13) can be expanded, agreeing with (23). If E_f ≠ 0, since we are assured that E_f is orthogonal to e_c, some component of the template control signal acts along a direction tangential to r, thus affording full 2DoF control of the virtual CoM.
Remark 2 (Definition of UA, CFA, FA Templates): We depict these template models in Fig. 2 along the top row. Each template has two DoFs r ∈ ℝ² (translation of the virtual mass center relative to the virtual toe), but they differ in the actuator affordance, and, hence, the span of accelerations that can be imparted to the virtual CoM. The unactuated (UA) model has no available actuators and cannot be controlled, the central-force-actuated (CFA) model can be accelerated along the direction of the (virtual) leg like the spring loaded inverted pendulum (SLIP) model [49], and the fully-actuated (FA) model can be completely controlled in its (physical as well as virtual) sagittal plane. We present a nearly ubiquitous example where, due to joint compliance G_θ, the reduced dynamics closely approximate those of an inverted pendulum (IP) subject to gravitational and compliance forces (Fig. 2A). When one actuator is added and we can use Prop. 2.2, we have shown that the remaining input acts as a central force. Long years of experience with approximations (e.g. [50], [51], etc.) suggest that even when cancellation of the gravity-like terms is not possible or ill-advised (due to model or sensor noise, etc.), the central force model can be leveraged to gain productive insight into the dynamical behavior, as leveraged in the template controllers devised in Sec. IV-A, IV-B, and V-A.
When additional actuation is available and we can use Prop. 2.3, we have shown that the virtual CoM can be controlled as a fully-actuated point mass, agreeing with the feedback linearizability of these models.
Remark 4 (Anchoring Preserves Natural Dynamics: Raibert Hopper Example):
The remaining terms on the right side of the CoM template dynamics (23) preserve any natural compliance from the full dynamics (5) on the invariant pitch-steady manifold. We demonstrate this next using the Raibert hopper with parallel shank compliance (center column of Fig. 2), which satisfies Prop. 2.1. Since we are restricting our attention to ζ = 0, with the toe location a(q) = 0 (without loss of generality), in (19) we have r = p. We assume the leg has the RP kinematics of (20), and gravitational and compliance terms in the potential energy of the form (24), where ρ_0 is the nominal leg extension. For the radial joint compliance G_θ, we use the constraint equation (1) to express the leg extension in terms of p. Using (8) and substituting in (24), we can relate θ_{j1} to p and simplify the expression. Since there is only one leg, the Ī in (23) is just the identity matrix. Additionally, using (24) in (5) and putting these together in (23), we get the reduced dynamics (26). Even if the monopedal model of Remark III-A has an actuated shank, it does not satisfy Prop. 2.3. A suitable example satisfying this last condition is the sagittal plane biped of Fig. 2 with m = 4 inputs, resulting in u_f ∈ ℝ². The full input decomposition is a set of orthogonal columns [e_φ, e_c, E_f], the first two of which (e_φ in (10) and e_c in (22)) have been defined.
B. TRACTION-AWARE VERSION
The anchoring in the prior subsection does not account for the crucial traction constraints conventionally incorporated in WBC. In this section we focus on our usage in a sagittal-plane biped, such as Minitaur (Fig. 5), with fully-actuated rigid legs, i.e. B = I, G_θ = 0. We implement a QP that utilizes the results of Prop. 1 with arbitrary desired template dynamics m_b r̈ + G_p =: v^des, where we use the (·)^des superscript notation to denote the desired control signal. Using the controller (14) with decision variable (to be minimized by the optimization) u = (u_φ, ṽ), where ṽ := E_T u_T in (13), we can formulate a pointwise-in-time optimization problem (27)-(29), where f_j ∈ ℝ² are stance toe forces for each stance leg j. The objective (27) includes the anchoring and template control terms respectively, where for the latter, (13) allows us to relate −Ī ṽ to the desired right-hand side v^des.
However, next we additionally incorporate a traction constraint (29). Note that [c_φ(p) I] u = c_φ(p) u_φ + ṽ = A_θ^{−T} τ (by (14)) is a stacked vector of applied toe forces f_j. For each of these f_j, we use a friction cone approximation, |f_{jx}| ≤ µ f_{jz}, and require (29). Thus the constraints are linear in u, and the objective is quadratic in it. Torque constraints are straightforward to add to this optimization problem (since the leg Jacobian linearly relates the toe force f_j to the joint torques). However, the direct drive nature of Minitaur (offering greater power density at the expense of reduced force density [15]) made these constraints overly conservative in our application, and so we left them off in our empirical trials.
We observe that a zero toe force f_j = 0 is always feasible for (28)-(29), and so in configurations where the pitch-steadying control cannot be supported by the available traction, the requested toe forces diminish to zero, resulting in the robot collapsing to the ground.
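A minimal sketch of this pointwise-in-time QP, written in Python with cvxpy for readability (the onboard implementation uses OSQP in C); the objective weight and the exact stacking of the template term are illustrative assumptions.

import numpy as np
import cvxpy as cp

def traction_aware_qp(c_phi, v_des, u_phi_des, mu, k_legs, w_anchor=100.0):
    u_phi = cp.Variable()
    v_tilde = cp.Variable(2 * k_legs)
    # Stacked toe forces, per the relation f = c_phi*u_phi + v_tilde
    # discussed around (14) above.
    f = u_phi * c_phi + v_tilde

    # (27): anchoring-tracking term plus template-tracking term, where (13)
    # relates -I_bar v_tilde (the block sum) to the desired v_des.
    template_err = -sum(v_tilde[2*j:2*j+2] for j in range(k_legs)) - v_des
    obj = cp.Minimize(w_anchor * cp.square(u_phi - u_phi_des)
                      + cp.sum_squares(template_err))

    cons = []
    for j in range(k_legs):
        fx, fz = f[2*j], f[2*j + 1]
        # (29): linearized friction cone per toe, |f_x| <= mu * f_z.
        cons += [fx <= mu * fz, -fx <= mu * fz, fz >= 0]

    cp.Problem(obj, cons).solve()
    return u_phi.value, v_tilde.value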
IV. NUMERICAL RESULTS
We now apply the analytical results above to design controllers for hopping and leaping tasks, using the templates of Fig. 2.
A. PITCH-STEADY RAIBERT HOPPER
First, we present a numerical simulation of a planar hopper with a passively-compliant leg attached to the CoM (Fig. 3), which satisfies Prop. 2.1. As we have discussed in Remark III-A, the application of the controller (14) to this model produces dynamical behavior of the virtual CoM (19) closely reflecting SLIP (26). We simulate this model with our reduction controller tasked with steadying the pitch to φ* = −1 rad, and compare the resulting behavior of the physical CoM (in red) as well as the virtual CoM (13) (in blue) to the ideal SLIP behavior (in magenta, dashed). The SLIP model is initialized with the same initial conditions as the virtual CoM.
We make the following observations from Fig. 3: (a) the orientation is stabilized and displays the exponential attraction of the decoupled stable anchoring dynamics prescribed in (15); (b) the virtual CoM behavior closely resembles that of SLIP (especially in the angular momentum about the virtual toe); (c) the radial dynamics of the virtual CoM, a UA template (Fig. 2), closely resemble the passively-compliant SLIP leg.
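For reference, a minimal Python sketch of the ideal SLIP stance integration used in such a comparison, with illustrative parameters (the reduced dynamics (26) contain the same gravitational and compliance terms):

import numpy as np
from scipy.integrate import solve_ivp

g, m, k, rho0 = 9.81, 5.0, 2000.0, 0.28   # illustrative SLIP parameters

def slip_stance(t, y):
    # y = (rho, rho_dot, psi, psi_dot), toe pinned at the origin and psi
    # measured from the horizontal, so gravity enters via sin/cos of psi.
    rho, rho_dot, psi, psi_dot = y
    rho_ddot = rho * psi_dot**2 - g * np.sin(psi) - (k / m) * (rho - rho0)
    psi_ddot = (-2.0 * rho_dot * psi_dot - g * np.cos(psi)) / rho
    return [rho_dot, rho_ddot, psi_dot, psi_ddot]

# Integrate one stance from touchdown until liftoff (leg back at rest length).
liftoff = lambda t, y: y[0] - rho0
liftoff.terminal, liftoff.direction = True, 1.0
sol = solve_ivp(slip_stance, [0.0, 1.0],
                [rho0 - 1e-3, -0.5, np.pi / 2 + 0.2, -1.0],
                events=liftoff, max_step=1e-3)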
In the following section, we examine a model with an added actuator in the leg that enables actuation of the radial component of the virtual CoM for template-based control of a leaping task.
B. PITCH-STEADY LEAPING WITH OFFSET-HIP MONOPED
We apply our results to a leaping task on a sagittal-plane monoped with an offset hip, as shown in Fig. 4. The task is to correct the body orientation to a desired φ* while attaining a desired liftoff CoM velocity ṙ*_LO. For this task, we first develop a template controller using the configuration-reactive template selection of Prop. 2. To anchor these reference dynamics, we compare two methods: the hierarchical controller we have presented here (15) and a model-predictive WBC, as we now detail.
1) TEMPLATE CONTROLLER
Using the results in this paper, we devise a controller for a CFA template for an underactuated monopedal leaping task (Fig. 2) to (a) anchor the template model to the posture φ = φ * in the physical robot using the control component u φ (15), and (b) control the template dynamics to accomplish the leaping task using the component u T (22).
Since we find it convenient to work in virtual-leg polar coordinates, we define ρ = ∥r∥ and ψ = ∠r as functions of the virtual leg position r (12). We use a template control input, u_c (acting as a central force (23)), which performs proportional velocity control of the radial component ρ around an event-sequenced pair of setpoints, as in (30), where w(ρ̇^des) := k_v(ρ̇^des − ρ̇), where ρ̇_p, ψ_p are constant controller parameters whose values are obtained from the task and initial condition as described below, and ρ̇^des is a placeholder for the argument of the function w. Here χ_p is an analytic switching function that transitions from 0 to 1 on the event that ψ crosses ψ_p.
Fig. 4. Snapshots of the leaping task under the controller (15), showing the virtual leg r (12) superimposed in blue on the physical model. While the frames are sequenced from left to right, the direction of travel of the CoM is right-to-left, and the snapshots are evenly spaced in time between t = 0.1 s (touchdown) and the liftoff time. Along the bottom rows, various template states are plotted using our proposed controller (blue) as well as when the reference dynamics are tracked by an MPC-based WBC, applying the same template controller (30) to the full body dynamics (magenta). Note that the behavior with the two implementations is very similar.
The key insights underlying this controller exploit the dynamical properties of the CFA template, in particular (a) the angular momentum about the virtual toe [50], [51], i.e. α(q, q̇) ≡ α in (31), is only perturbed by gravity in stance, and (b) the radial DoF can be velocity-controlled so as to impart the kinetic energy needed for the leap without disturbing the angular DoF. Next, we give a computational prescription for the task parameters ρ̇_p, ψ_p of (30) and an intuitive account of their meaning. Under the assumed template behavior described above, note that ψ̇ = α/ρ² is sign-definite through the behavior, and so ψ can be used as a monotonic ''phase-like'' variable. Thus, the controller (30) drives the system sequentially through the ''wait'' (χ_p = 0) and ''push'' (χ_p = 1) phases, and takeoff is controlled (by releasing the leg) when ∥r∥ = ρ_LO. In the following, we use the notation r̂ = r/∥r∥. Define v* := ∥ṙ*_LO∥. Conservation of (31) in the push phase implies (32). Here we know the left hand side and J^T ṙ*_LO, and can use the equation above to calculate ∠r_LO =: ψ_LO.
Next, in the push phase, we can integrate both sides of ψ̇ = α/ρ² (31). Under our assumed template behavior, the radial velocity is controlled to the constant ρ̇_p in the push phase, so we pull it out of the integral to get (33).
Applying to the desired liftoff state, where ψ LO , ρ LO , anḋ r LO =ṙ * LO are known, we can calculatė Now (33), (34) together completely define the necessary terms in the controller (30).
2) NUMERICAL SIMULATION
In the task shown in Fig. 4, the simulated robot is launched just before touching down (this event is marked by the light gray vertical line in the plots), with initial pitch φ approximately horizontal and initial horizontal velocity ẋ = −1.5 m/s. The target orientation is φ* = −1 rad (w.r.t. vertical), and the target takeoff velocity is ṙ*_LO = (−2, 2) m/s. The plots along the bottom row show various template states, and in the second-from-right plot, we also display the soft switch signal χ_p (22) with dashed lines.
a: INPUT-DECOUPLED ANCHORING
From the closeness of the traces in the top two plot panes of Fig. 4 to their commanded counterparts in dashed grey, we can conclude that the task is completed successfully by the naive template controller above. We consider its successful anchoring (which relies very heavily on the conservation of (31), a dynamical property of the mode-reactive template selection) evidence of the utility of input-decoupled anchoring. Since the template controller relies on an assumption of constant angular momentum (31), we use a low-pass filtered version of the measured angular momentum. As visible in the bottom left plot, α remains roughly constant and almost unaffected by the switch to the push phase in the case of our proposed controller (blue), and ultimately this results in the template CoM velocity closely matching the commanded values. In addition, we can observe the following from Fig. 4: (a) the orientation can be effectively controlled while controlling the template (φ after controller engagement, bottom right); (b) the template CoM velocity can be altered drastically in the push phase, as evidenced by ż (second from left), with no coupling to the φ dynamics in the case of our proposed control (blue).
b: COMPARISON TO AN MPC-BASED WBC
For comparison, we present results from an implementation of an MPC-based whole-body controller for tracking the desired accelerations output by the template controller. Note that by using the mode-reactive template (a contribution of this paper) to generate the reference dynamics, we avoid the underactuation issues that would plague a conventional MPC implementation with naive reference dynamics. The MPC uses as its dynamics model a discretized linearization of the floating torso dynamics (11) along with a first-order integration scheme. We have described the details of the formulation of the MPC in Appendix D. With access to a complete and accurate model of the system dynamics (note the presence of M_x and G(x) in the equations of Appendix D, and their absence in (15), (23)) and a dynamically feasible reference, the MPC successfully stabilizes the body pitch and can drive the CoM velocity to the desired value.
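For concreteness, the following Python sketch builds the condensed Hessian H of such a linear MPC from a discretized linearization (A_d, B_d); the stacking convention is the textbook one and may differ in detail from the paper's (44).

import numpy as np

def condensed_mpc_matrices(Ad, Bd, Q, R, P, N):
    # Prediction: Y = M y0 + S U for Y = (y_1..y_N), U = (u_0..u_{N-1}),
    # with y_{k+1} = Ad y_k + Bd u_k.
    ny, nu = Bd.shape
    M = np.vstack([np.linalg.matrix_power(Ad, i + 1) for i in range(N)])
    S = np.zeros((N * ny, N * nu))
    for i in range(N):
        for j in range(i + 1):
            S[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = \
                np.linalg.matrix_power(Ad, i - j) @ Bd
    Qbar = np.kron(np.eye(N), Q)
    Qbar[-ny:, -ny:] = P                  # terminal state penalty
    Rbar = np.kron(np.eye(N), R)
    H = S.T @ Qbar @ S + Rbar             # dense QP Hessian (cf. H in (44))
    return H, M, S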
We believe that the most important contribution here is the algorithmic mode-reactive template selection, and we have demonstrated its effectiveness with a new anchoring controller (Sec. III) as well as an MPC-based WBC here. Based on the comparison in Fig. 4, we can conclude that our simple momentum-based template controller can be anchored by both methods with similar performance. However, the traction-aware version of the proposed Sec. III controller is less reliant on an accurate model, and is significantly more computationally efficient than the MPC, facilitating implementation onboard Minitaur (Fig. 1).
The modular architecture of the controller allows not just the same template controller (22) to be used with different anchoring strategies (as discussed above), but also for different template control strategies to be utilized with the same anchoring. For example, the hierarchical pitch-steady anchoring procedure (Sec. III-B) could be utilized with the output of a trajectory optimization solution to the leaping task above via its output v des , as an alternative to (22).
As a very approximate estimate of the relative complexity of the traction-aware QP (Sec. III-B) and the MPC here, the total number of nonzeros in the objective and constraint matrices in the QP of Sec. III-B is 11, whereas the number of nonzeros in H (44) in the MPC is approximately 70000. The former is also solved onboard Minitaur's embedded 180 MHz Cortex-M4 microcontroller in a few hundred microseconds, while the latter takes about 120 ms on a desktop processor. The former is implemented in C and solved using OSQP [52], while the latter is solved via MATLAB's solvers. Note that a well-optimized MPC implementation for dynamically simpler tasks can run onboard at millisecond timescales on a laptop-class processor [20], but would likely require longer horizons for a task like the one simulated here.
A. UNDERACTUATED MONOPEDAL MONKEY BARS TASK
Many useful leaping tasks, such as achieving a foothold on a high ledge [53] or door opening [54], require the ability to reach objects or handholds outside of a platform's quasi-static workspace. To showcase the effectiveness of this controller in such problem settings, we devise a representative task (Fig. 1) which requires the robot to swing from a handhold, release it, and then upon landing perform a single stance leap that enables a reach toward and grasp of a second handhold. The handholds are placed sufficiently high and far apart that they will be reachable only if (a) the body pitch is maintained near vertical, and (b) the trajectory of the CoM maintains its forward velocity, and achieves a sufficient vertical velocity to allow the upper legs to reach the target handhold.
Fig. 5. A: State traces from the task of Fig. 1, showing the convergence of the body to the desired pitch and pitch velocity (top two plots), as well as the relative consistency of angular momentum throughout stance (bottom plot, corresponding with only the green wait and blue push modes). Note that the traces are plotted such that the end of the ''wait'' phase is at t = 0.25 s in all trials, and the time at which the shortest wait mode begins is denoted by ''Latest TD time.'' We cut the horizontal axis off at t = 0.4 s since the latest takeoff time is earlier than that. Each trial is displayed with a different color in the bottom plot, and traces are only drawn when the leg is in contact, as calculated from (31). B: Position and velocity of the CoM in the trials of Sec. V-A show resemblance to a SLIP (26) simulation (dashed magenta).
This task is similar to the one in Sec. IV-B, but while the task in Sec. IV-B requires precise control of the takeoff velocity vector, this one requires less precise control of the velocity. The properties of the CFA template are used predominantly for stance control in Sec. IV-B, and for selecting an appropriate landing leg angle here. The rapid stabilization of body orientation (Def. 1) is critical in both tasks, thus motivating a pitch-steady anchoring in either case.
We construct a sequential composition of three template controllers defining a hybrid system as in [55] whose mode transitions are triggered by guards sensitive only to the template states. The first (a brachiating template [46] which executes swing-up to excite the leap-down from the bar) and the third (a point particle manipulator to position the aerial mode toes to grasp the bar [55]) lie outside the scope of this paper, hence we will only provide the briefest description that permits interpreting the data presented in Fig. 5. During the middle stance mode, we implement input-decoupled anchoring (Prop. 1) to ensure that the body pitch is stabilized to a desired φ * and that the reduced virtual CoM behaves in a SLIP-like fashion (Prop. 2). For our experiments we used both the pinned-toe version (14) as well as the traction-aware version of Sec. III-B.
The construction of the template controller takes the form of a conventional sequential composition [2] of the ''wait'' and ''push'' controllers, where G(wait) denotes the goal set of the wait controller. This closely resembles the template controller of Sec. IV-B1, with the following differences: (a) we replace the piecewise-constant velocity command (30) in the wait mode with a SLIP-like virtual spring control (to minimize the required peak torque at the switching instants); (b) we replace the analytic switch χ_p with a discrete switch from wait to push mode when the orientation error state enters an ε-ball around the origin; and (c) in the push phase, we replace the velocity controller (30) with a constant command, u_{r,max}, saturating the actuators' torque outputs in the fashion of [53], launching the aerial ascent to the targeted next monkey bar. Leveraging the input-decoupled template behavior (23) and prior work on monopedal hopping, we select the initial leg angle before landing using Raibert's neutral point approximation (35), where ẋ is the horizontal velocity of the CoM, T_s is the stance duration, ρ is the nominal leg extension, ẋ_d is the desired horizontal CoM velocity, and k_ẋ is a control gain. The unreliable nature of velocity estimates makes it prudent to pre-select an estimate of ẋ during the descent. Since the virtual leg extension ρ is held constant in flight, the expression becomes the constant ψ = 0.21 radians. We first directly implement the pinned-toe version of the analytical controller (14), and test across 20 trials. Of these trials, the task is successfully completed 9 times, with the vast majority of failures caused by traction loss events. We observe successful task completion in 9 out of 10 trials using the traction-aware version of Sec. III-B; the single failure in this implementation was also related to traction: the motor torques required to stabilize the body could not satisfy the friction constraints (29), and so the objective (27) could not be sufficiently minimized, resulting in the body collapsing and falling over. Fig. 1 superimposes snapshots of task execution, accompanied in Fig. 5A by state trajectory traces of all 10 of the traction-aware trials, as well as one success and one failure using the pinned-toe version. The plots reveal that φ is effectively stabilized to the target range, and the virtual leg angular momentum remains roughly constant through the energetic behavior. This figure also presents an overview of the ability of the controller presented here to decouple the complex leaping problem into two lower-dimensional control problems: the behavior designer works within the CoM behavior depicted in Fig. 5B, which reveals a phase-portrait and CoM trajectory resembling the SLIP template (26), while the controller isolates the behavior of the orientation DoF.
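Raibert's neutral-point rule (35) is simple enough to state in full; the Python sketch below is the standard form from [29], with the gain value as an illustrative assumption.

import numpy as np

def touchdown_leg_angle(xdot, xdot_des, T_s, rho, k_xdot=0.05):
    # (35): place the toe x_f ahead of the CoM; the neutral point
    # xdot*T_s/2 gives zero net acceleration over a symmetric stance, and
    # the gain term regulates forward speed toward xdot_des.
    x_f = xdot * T_s / 2.0 + k_xdot * (xdot - xdot_des)
    # Convert to a leg angle from the vertical, at fixed extension rho.
    return np.arcsin(np.clip(x_f / rho, -1.0, 1.0))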
B. BIPEDAL LEAP: IMPOSING HORIZONTAL CONTROL
Our past experience [55] with Minitaur leaping onto objects, as part of a suite of pedipulation behaviors, provides a backdrop against which to compare advances with the proposed controller. In such problem settings, due to the number of possible unwanted collisions between the body or legs and the object to be manipulated, it is crucial that the body be able to stabilize to a desired orientation while the CoM is energized for the leap.
In the double stance case, using the FA template (per Prop. 2), we can utilize a control input u T for direct control over the vertical and horizontal components of the CoM in the sagittal plane, while stabilizing the body pitch. We execute the leap with the FA template by simply servoing to a desired takeoff velocity, as in [55]. Fig. 6 (left) presents data from two sets of 5 trials to demonstrate the ability of the controller to quickly correct the pitch to a desired value (level in this case), as seen in the left and center of the figure, while accelerating the CoM to a desired velocity trajectory, as seen in the bottom two subplots. Fig. 6 (right) presents data from two more sets of 5 trials each, this time with the pitch initially level, but with the front and back legs at different extensions to maintain this attitude. We see that the controller successfully maintains the orientation while imparting significant acceleration to the CoM.
VI. CONCLUSION
In this paper, we presented an algorithmic and formal mode-reactive template selection procedure that facilitates the online construction of feasible reference dynamics. We anticipate that using these templates in a mode-reactive template planner preceding the anchoring step (as we have demonstrated here) can enable better behaviors than a conventional approach of assuming a FA template model (which suffers from underactuation) or prior trajectory optimization (computationally too demanding to run reactively), followed by WBC. We also presented an accompanying hierarchical ''pitch-steady'' prioritized anchoring strategy that is computationally very efficient, and utilized it to demonstrate dynamically challenging leaping behaviors with (CFA and FA) template-based controllers on a quadrupedal robot. These two components are connected in a modular fashion, where either the template controller could be replaced by trajectory optimization, or the anchoring controller could be replaced by a different whole-body controller (Sec. IV-B2.b). Both applications illustrated the value of input-decoupled anchoring, whereby the strong control authority required to stabilize pitch within the short available stance mode preceding the leaps could barely be detected in the state trajectories when projected onto the virtual template coordinates. We remind the reader that the guaranteed dynamically-feasible reference dynamics from mode-reactive templates can enhance the performance of any WBC approach, including (but not limited to) the two we compared in Sec. IV-B2.b. Additionally, the dynamical simplicity of the template models enables reactive template (re)planning, which is crucial if the behavior includes combinations of single- and double-contact intervals and potentially unintended contact [55].
There are a number of avenues of future extension. First, in terms of the model, though our limited focus on a massive torso and massless legs results in a simple expression for the angular kinetic energy, this idea could be generalized to settings with distributed mass (such as flexible spines [56], or inertial appendages such as tails [57] and flails) by controlling centroidal angular momentum [10], or the net angular momentum about the CoM. Second, here we restricted our attention to planar models, but an extension to the spatial case does not present any conceptual obstacles. Lastly, work currently underway by the second author is investigating the application of this strategy to the control of steady-state gaits, as well as tasks that include non-point-attractor orientation dynamics such as bounding [37].
APPENDIX A BACKGROUND: TEMPLATES AND ANCHORS
Hierarchical control structures and reduced-order models have been studied in the literature with the language of ''templates'' (reduced dynamics residing on an invariant ''template submanifold'' T ) and anchoring dynamics that render T attracting [28], [44], or ''zero dynamics'' (restriction dynamics on T ) and ''virtual constraints'' that render T attracting and invariant [42]. The benefits of hierarchy include modularity in the control design [29], [58], [59] allowing control designers to pull back [44] template controllers on to the anchoring body (''template-based control''), and its empirical usage in robotics has a long tradition stretching back to Raibert's hoppers [29]. There is a long and continuing tradition of using such reduced locomotion models, in turn, as control targets to be exposed to higher level task controllers [30], [45], [60], [61], as well as in optimization-based WBC [35], [36].
APPENDIX B INPUT-DECOUPLED ANCHORING VIRTUAL LEG PROJECTION
First, we define the notation D_p h =: H_p and D_φ h =: h_φ. Lemma 1: r̈ does not depend on u_φ if we can find h (12) such that (37) holds. Proof: Taking derivatives of (12), all the u_φ terms must appear in p̈, φ̈; they cannot appear in the last parenthesized terms (until further derivatives are taken). From (6), (11), the first two terms from above that are affected by the input torque τ are given in (38), where O(τ⁰) refers to terms without any τ-dependence. Thus, from (12), we need that 1) the matrix H_p [I ⋯ I] + m_b i_b h_φ c_φ^T has a non-trivial nullspace;
2) the anchoring control u_φ points in that nullspace direction. Next, we show that (37) is a sufficient condition for the former: using (37) to rewrite h_φ in the parenthesized matrix of (39) yields (40), which clearly has c_φ itself as a nullspace vector.
Using (14) in (11) and (39)-(40), but now looking only at the first two rows and denoting by G_p the projection of G_x on to the first two rows, we see that (up to O(φ, φ̇)) u_φ does not appear in this expression. Additionally, from (19), H_p = I + O(φ), thus giving us the expression of (13).
Lemma 1 shows that the template coordinates r are unaffected by the anchoring force u_φ, satisfying our Def. 1. The main restriction to applying it is that an h(p, φ) satisfying (37) must exist. The following lemma shows that we can approximate this function.
Lemma 2: With h = h_0 + h_1 + ⋯, the approximation error for (37) can be controlled to orders of the orientation error ζ, δ_k = (∇ζ^k / k!) δ̄_k(p), by setting the terms as in (41), where δ̄ is defined recursively as δ̄_k(p) := −D_p δ̄_{k−1}(p) · c_φ(p). Proof: We use a proof by induction: the base case h_0 = p reveals that δ_0 = c_φ(p) = δ̄_0(p). The induction step relies at least upon the (∂h_{k−1}/∂p) c_φ(p) term cancelling with the ∂h_k/∂φ term. This requires a factorial in the denominator so that ∂h_k/∂φ will be multiplied by 1/(k−1)! and that the lingering factor of (∂h_{k−1}/∂p) c_φ(p) will also have that denominator and cancel. So, this series (with k terms) can approximate the error (37) to O(∇ζ^k).
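The recursion of Lemma 2 is easy to mechanize symbolically; the following Python sketch (using sympy) computes the successive δ̄_k, with an illustrative placeholder for c_φ(p) (the true c_φ comes from (10) and depends on the contact geometry).

import sympy as sp

p1, p2 = sp.symbols('p1 p2')
p = sp.Matrix([p1, p2])
c_phi = sp.Matrix([p2, -p1])         # placeholder pitch-affordance vector

def delta_bar(k):
    d = c_phi                        # base case: delta_bar_0 = c_phi(p)
    for _ in range(k):
        # recursion of Lemma 2: delta_bar_k = -D_p delta_bar_{k-1} . c_phi
        d = -d.jacobian(p) * c_phi
    return sp.simplify(d)

print(delta_bar(2))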
APPENDIX C NON-INPUT-DECOUPLED PITCH-STEADY ANCHORING
In this section we present a feedback-linearization approach to anchoring pitch-steady dynamics and demonstrate its pitfalls compared to the approach of Prop. 1.
Proposition 3: If B^T A_θ^† c_φ ≠ 0, where c_φ is given in (10), A_θ^† in (8), and B in (5), then we can preferentially prescribe the template manifold as attracting and invariant to satisfy Def. 1.
Proof: We set a pseudo-inverse for the coefficient of τ in the last row of (11). In closed loop, where the feedback law assigned to u_φ will be specified in (15), the last row of (11) becomes −k_p ∇ζ − k_d φ̇, which can render the ζ = 0 template manifold attracting and invariant.
We can evaluate the Lie derivative of the first two rows of (11) along the flow, where we find u_φ as a (noise-contributing) defect in the p̈ dynamics. In other words, the closed-loop dynamics of the template coordinates cannot be decoupled from the anchoring dynamics, thus failing Def. 1. We rectify this deficiency with an additional assumption and a new anchoring controller in Prop. 1.
Remark 6 (Relationship to Input-Decoupled Anchoring): First, note that the condition required in Prop. 1 implies satisfaction of the condition for conventional anchoring in Prop. 3: if A_θ^T c_φ is in the column span of B, then for a given υ ≠ 0 we can find τ s.t. Bτ = A_θ^T c_φ υ. Then, c_φ^T A_θ^{†T} Bτ = c_φ^T c_φ υ ≠ 0, and so it must be true that c_φ^T A_θ^{†T} B ≠ 0, which implies anchorability in the sense of Prop. 3.
In exchange for the slightly stricter condition, we get the benefit of property 2c of Prop. 1: (13) has no dependence on the (possibly large) control force u_φ (15) (as posited by Def. 1), and only on the O(φ, φ̇) ''state error'' terms, which vanish on the template manifold. In practice, as we show in Sec. IV, the orientation-error-dependent terms cause negligible disturbance also off this manifold, while the ''classical'' alternative of relying on template-like behavior of the actual CoM suffers from large disturbances introduced by the anchoring process.
APPENDIX D MPC-BASED WBC
Solving for the acceleration in (9) for a single-toed contact mode yields a simplified expression in which f ∈ ℝ² are the forces at the toes, and u := ϒ₁ is enforced using an equality constraint. The dynamics of the system for the state y := (ẋ, x) ∈ ℝ⁶ can then be written in control-affine form with drift G(x). Linearizing around an operating point (y₀, u₀), the system can be approximated by a linear model. The objective of the quadratic program is specified as a quadratic error of the states from the reference states, with stacked Hessian H (44). The reference state y* is obtained from the outputs of our template controller (22) acting on the mode-reactive CFA template, and the reference input is ū* = 0 (i.e. ϒ₀ = (0, 0)^T). In the objective function for the MPC, there is a quadratic error from each y_k in the horizon to y*. The state penalty matrices (Q and P) and the input penalty matrix R are also different in the two modes, with the particular values used in Fig. 4 chosen per mode. The guard ψ_p between the two modes is calculated as in Sec. IV-B1 with the SLIP-like angle ψ := ∠(p − a), and there is no use of the virtual CoM by the controller. The QP can be written to include or not the traction constraints (A_µ, b_µ above) that we apply in Sec. III-B. The horizon length for the example was N = 100, with an MPC timestep of dt = 1 ms, so that the length in time of the horizon is T = N·dt = 100 ms, and the MPC was recalculated every 10 ms. | 2022-02-04T16:21:31.566Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "75bbd43aff1f084d9b47d4a97b956d1c8c12e858",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09702146.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "2216d206251146b86d64eb831419c5c20d5d9c8a",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
53072646 | pes2o/s2orc | v3-fos-license | Modeling and simulation of an acoustic well stimulation method
This paper presents a mathematical model and a numerical procedure to simulate an acoustic well stimulation (AWS) method for enhancing the permeability of the rock formation surrounding oil and gas wells. The AWS method considered herein aims to exploit the well-known permeability-enhancing effect of mechanical vibrations in acoustically porous materials, by transmitting time-harmonic sound waves from a sound source device---placed inside the well---to the well perforations made into the formation. The efficiency of the AWS is assessed by quantifying the amount of acoustic energy transmitted from the source device to the rock formation in terms of the emission frequency and the well configuration. A simple methodology to find optimal emission frequencies for a given well configuration is presented. The proposed model is based on the Helmholtz equation and an impedance boundary condition that effectively accounts for the porous solid-fluid interaction at the interface between the rock formation and the well perforations. Exact non-reflecting boundary conditions derived from Dirichlet-to-Neumann maps are utilized to truncate the circular cylindrical waveguides considered in the model. The resulting boundary value problem is then numerically solved by means of the finite element method. A variety of numerical examples are presented in order to demonstrate the effectiveness of the proposed procedure for finding optimal emission frequencies.
Introduction
The decrease of oil and gas recovery from a reservoir is clearly an important problem that affects the energy industry. One of the main causes of this problem is the local reduction of the reservoir permeability around producing wells due to the deposition of scales, precipitants and mud penetration during exploitation, which, over time, gives rise to an impermeable barrier to fluid flow [10]. Well stimulation methods play a prominent role in the exploitation of these essential natural resources as they are intended to increase the permeability of the reservoir, allowing the trapped fluid to flow toward the borehole and thus enhancing the productivity of the well. Various well stimulation methods are used in practice to cope with local deposits, including solvent and acid injection, treatment by mechanical scrapers and high pressure fracturing. Each of these conventional methods has significant drawbacks and undesirable effects. Some of them, for instance, are expensive and produce damage to the well structure, while others are highly polluting, leading to harmful ecological effects associated with the contamination of underground water resources [10,4].
The demonstrated effectiveness of mechanical vibrations on enhancing fluid flow through porous media [4,12,2], on the other hand, has led to the development of the so-called acoustic well stimulation (AWS) methods, which nowadays have broad acceptance by the hydrocarbon industry mainly due to the fact that they partially overcome the aforementioned issues.
This paper considers an AWS method based on the transmission of acoustic waves, emitted by a transducer submerged into the well, to the rock formation surrounding the well. The transducer is designed to trigger one of the physical processes known to enhance the permeability of the porous medium. Among such physical processes, we mention the reduction of the fluid viscosity by agitation and heating, stimulation of elastic waves on the well walls (to reduce the adherence forces in the layer between oil and rock formation), excitation of natural frequencies associated with the vibration of the fluid inside the porous medium, and the formation and collapse of cavitation bubbles near clogged pores of the rock formation. A variety of transducer designs have been proposed over the last three decades, which consider operation frequency and intensity ranges selected to target one (or several) of the aforementioned physical processes [21,6,19,18].
This paper presents a mathematical model and a numerical procedure that allow us to find optimal emission frequencies for which the amount of energy transmitted from the transducer into the rock formation is maximized. The proposed methodology can potentially improve the performance of the whole class of AWS methods considered, as the aforementioned physical processes take place within the porous medium. In detail, we develop a mathematical model based on the Helmholtz equation and an impedance boundary condition [7] that effectively accounts for the porous solid-fluid interaction at the interface between the rock formation and the well perforations [26]. Exact non-reflecting boundary conditions derived from Dirichlet-to-Neumann (DtN) maps are utilized to truncate the circular cylindrical waveguides considered in the model [9,22,23]. The resulting boundary value problem is numerically solved by means of the finite element (FE) method [25,15,13]. Optimal emission frequencies are then found by scanning the quotient of the transmitted energy (toward the region of interest) to the emitted energy over a range of frequencies. As expected, the optimal emission frequencies correspond to field distributions for which resonances occur inside the perforations.
The outline of this paper is as follows: The mathematical model is presented in Section 2. The DtN-FE method is then described and validated in Section 3. Section 4 provides numerical results for realistic well configurations. Section 5, finally, gives the concluding remarks of the present work.
2 Mathematical model
Geometry
A perforated well is created through two successive processes called drilling and completion. The former begins by drilling a borehole in the ground, which is covered by metal pipes that are attached to its walls by a layer of cement (cf. Figure 1). This part of the process, commonly referred to as casing, aims to stabilize the borehole structure. Once the well is cased, the completion process begins by shooting with explosives the portion of the casing that passes through the reservoir level-where the oil is trapped-forming small holes across the casing and the cement layer, and into the reservoir. These holes, referred to as perforations, are aimed at enabling the oil to flow from the reservoir into the well.
Upon completion, two different zones of the well can be identified; the zone containing the perforations, which we call the perforated domain, and the remaining part of the well, which we call the cylindrical domain. The perforated domain, denoted by Ω p , is assumed to be bounded. In addition, we assume that the cylindrical domain consists of two (semi-infinite) circular cylinders placed above and below the perforated domain, which we denote by Ω + and Ω − , respectively. The model of a perforated well utilized in this paper then, corresponds to a locally perturbed circular cylinder defined as Ω w = Ω p ∪ Ω + ∪ Ω − . The interface between the perforated and the upper (resp. lower) cylindrical domains is denoted by Γ + (resp. Γ − ). Finally, the transducer (source) is assumed to occupy the bounded domain Ω s ⊂ Ω p with boundary ∂Ω s = Γ s . We refer to Figure 2 for the definition of all the relevant domains considered in the mathematical model.
Acoustic waves
The transducer is herein modeled as a time-harmonic vibrating surface Γ_s that operates at a fixed frequency f = ω/2π, where ω > 0 denotes the angular frequency in radians per second. Being excited by a single time-harmonic source, the pressure P, the density ϱ, and the velocity V fields eventually reach a stationary (time-harmonic) regime for which P(x, t) = Re{p(x) e^{-iωt}}, ϱ(x, t) = Re{ρ(x) e^{-iωt}} and V(x, t) = Re{v(x) e^{-iωt}}, where t > 0 denotes the time variable and p, ρ and v denote the amplitudes of the pressure, the density and the velocity, respectively, which depend only on the position x. The linearized equation of state and the conservation of mass and momentum in this case read as in [7,17] (equations (1a)-(1c)), where c > 0 and ρ_0 > 0 denote the speed of sound and the equilibrium density of the fluid that fills the well, respectively. Suitably combining equations (1a), (1b) and (1c), we then obtain that p satisfies the Helmholtz equation Δp + k²p = 0 with wavenumber k = ω/c. Note that dissipation effects can easily be taken into account by considering a complex wavenumber with spatial absorption depending on the equilibrium density and the shear and bulk viscosities [17]. For presentation simplicity, however, we only consider real wavenumbers.
Boundary conditions
Throughout this paper we consider boundary conditions of the form ∂p/∂n − (ik/ζ) p = g (3) on the surfaces of the well (Γ_w = ∂Ω_w) and on the transducer (Γ_s), where ζ ∈ C denotes the dimensionless surface impedance and the function g corresponds to the excitation prescribed on the surface Γ_s of the transducer (with g = 0 on Γ_w). The dimensionless impedance takes the form ζ = χ + iξ, where χ and ξ (χ, ξ : Γ_w ∪ Γ_s → R) are known as the resistive (real) and reactive (imaginary) parts of the impedance, respectively. The dimensionless impedance ζ and the pressure field p are related to the time-averaged energy flux through Γ_w by the formula [7] E_{Γ_w} = (1/(2ρ_0 c)) ∫_{Γ_w} Re(1/ζ) |p|² ds. (4) The time-averaged acoustic energy radiated by the transducer, on the other hand, is given by E_s = (1/(2ωρ_0)) ∫_{Γ_s} Im(p̄ ∂p/∂n) ds. (5) The spatial dependence of the dimensionless impedance ζ in (3) is determined by the mechanical properties of the various materials that are in direct contact with the fluid. Since the casing is made of metal (see Section 2.1), which is usually modeled as a sound-hard (Neumann) boundary condition, the admittance 1/ζ is taken equal to zero over the cylindrical domain and the cased portion of the perforated domain. The sound-hard boundary condition (1/ζ = 0) is also used on the transducer Γ_s. In order to determine suitable impedance values over the boundary of the perforations, in turn, we follow the analytical calculations presented by J. E. White in [26] for the wall impedance at the interface between a liquid and a porous material. According to these calculations, the wall impedance Z, defined as the quotient of the pressure amplitude to the normal velocity amplitude on the boundary of the perforation, is given by Z = (η/(κ√(iωm))) H_0^{(1)}(√(iωm) r_0) / H_1^{(1)}(√(iωm) r_0), (6) where H_0^{(1)} and H_1^{(1)} denote the Hankel functions of the first kind and orders zero and one, respectively [1], r_0 is the radius of the perforation, κ is the permeability of the porous medium, η is the shear viscosity of the fluid, and m = φη/(κB), with φ the porosity and B the bulk modulus of the fluid in the pore space. It is important to highlight that the impedance model (6) is valid under the assumption that r_0 is smaller than the wavelength λ = 2π/k. For the sake of completeness, the analytical derivations leading to (6) are reproduced in Appendix A. On the other hand, in order to link Z with the dimensionless surface impedance ζ, we get, from the momentum conservation equation (1c), the relation v · n = (1/(iωρ_0)) ∂p/∂n, (7) which, combined with the definition of Z and with (3) for g = 0, yields Z = ρ_0 c ζ. Therefore, the dimensionless surface impedance to be utilized in (3) on the surface of the perforations is given by ζ = Z/(ρ_0 c). (8)
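To make the impedance model concrete, a small hedged sketch follows that evaluates (6) and (8) numerically; the closed form used for (6) is our reconstruction of the garbled display from the surrounding definitions, and all parameter values are placeholders rather than those of Table 1.

```python
import numpy as np
from scipy.special import hankel1

def wall_impedance(omega, r0, kappa, eta, phi, B):
    """White-type wall impedance Z = p/v at the liquid/porous-rock interface.

    Valid only while r0 is small compared with the acoustic wavelength 2*pi/k.
    """
    m = phi * eta / (kappa * B)
    s = np.sqrt(1j * omega * m)   # principal branch gives a decaying H0^(1)
    return (eta / (kappa * s)) * hankel1(0, s * r0) / hankel1(1, s * r0)

def surface_impedance(omega, r0, kappa, eta, phi, B, rho0, c):
    """Dimensionless impedance zeta = Z / (rho0 * c), cf. (8)."""
    return wall_impedance(omega, r0, kappa, eta, phi, B) / (rho0 * c)

# Placeholder values loosely evoking crude oil in sandstone (NOT Table 1's data):
zeta = surface_impedance(omega=2 * np.pi * 1e3, r0=5e-3, kappa=1e-13,
                         eta=5e-3, phi=0.2, B=1.5e9, rho0=870.0, c=1300.0)
print(zeta.real, zeta.imag)   # resistive and reactive parts chi, xi
```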
Boundary value problem
We are now in a position to put together the boundary value problem to be solved in the remainder of this paper. The time-harmonic pressure field p : Ω → C (with Ω = Ω_w \ Ω̄_s), which is driven by the transducer submerged into the well, satisfies Δp + k²p = 0 in Ω together with the impedance boundary condition (3), (9) where the dimensionless impedance ζ is given by (8) on the boundary of the perforations and equals infinity (i.e., 1/ζ = 0) everywhere else on Γ_w (see Section 2.3). In order for the boundary value problem (9) to be well-posed, p has to satisfy a certain radiation condition, which differs from the classical Sommerfeld condition and is expressed in terms of the propagative modes associated with the upper and lower unbounded cylindrical domains Ω+ and Ω− [9,22,23].
3 Dirichlet-to-Neumann Finite Element Method
The DtN map
In what follows we present a DtN-FE method for the numerical solution of (9). Notice that standard finite element (FE) methods do not directly apply to this problem due to the unboundedness of the domain Ω. The DtN-FE method is based on the DtN operators T± that map the boundary values p|_{Γ±} on Γ± into the corresponding normal derivatives ∂p/∂n|_{Γ±} on Γ± [9,22,3]. As these DtN maps provide exact non-reflecting boundary conditions on Γ±, they allow us to write a boundary value problem posed on the bounded domain Ω̂ = Ω \ (Ω̄+ ∪ Ω̄−) = Ω_p \ Ω̄_s that is equivalent to (9) and is suitable to be solved by FE methods (or any other standard numerical method for solving PDEs).
In order to provide explicit expressions for the DtN maps, we first introduce a cylindrical coordinate system (r, θ, z), with r ≥ 0, 0 ≤ θ ≤ 2π and z ∈ R, in which the upper and lower cylindrical domains can be expressed as Ω± = {r < R, ±z > H} ⊂ R³, where H > 0 denotes the truncation height and R > 0 denotes the radius of the well. The series representations of the desired DtN maps are then obtained by applying the method of separation of variables to solve the Helmholtz equation in the domains Ω± with a Neumann boundary condition on the surface {r = R}. Enforcing the radiation condition, by eliminating both down-going (resp. up-going) and exponentially growing solutions in Ω+ (resp. Ω−), we obtain the following Fourier-Bessel series for the pressure field [22]: p = Σ_{n∈Z} Σ_{m≥1} p±_{n,m} v_{n,m}(r, θ) e^{iβ_{n,m}(±z−H)} in Ω±, (10) where, letting j'_{n,m} ≥ 0 denote the m-th non-negative zero of the derivative of the Bessel function of the first kind J_n, we have v_{n,m}(r, θ) = c_{n,m} J_n(λ_{n,m} r) e^{inθ}, λ_{n,m} = j'_{n,m}/R and β_{n,m} = (k² − λ_{n,m}²)^{1/2}, with the branch of the square root chosen so that Im β_{n,m} ≥ 0. The Fourier coefficients p±_{n,m} in (10), in turn, are given by p±_{n,m} = ∫_{Γ±} p v̄_{n,m} ds, where p(r, θ, ±H) = p|_{Γ±}. Taking the normal derivative of (10) on Γ± (with unit normal vectors pointing toward Ω±) we finally arrive at the following expression for the DtN maps: T±(p|_{Γ±}) = Σ_{n∈Z} Σ_{m≥1} iβ_{n,m} p±_{n,m} v_{n,m}. (11)
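A short sketch follows for enumerating the propagative modes (real β_{n,m}) of the truncated cylinder, which are the modes that must always be retained in the DtN series; the radius and frequency used are illustrative assumptions.

```python
import numpy as np
from scipy.special import jnp_zeros

def propagative_modes(k, R, n_max=20, m_max=20):
    """List (n, m, lambda_nm) with lambda_nm = j'_{n,m}/R <= k (real beta_nm)."""
    modes = []
    for n in range(n_max + 1):
        zeros = jnp_zeros(n, m_max)                 # positive zeros of Jn'
        if n == 0:
            zeros = np.concatenate(([0.0], zeros))  # include j'_{0,1} = 0 (piston)
        for m, jz in enumerate(zeros, start=1):
            lam = jz / R
            if lam <= k:
                modes.append((n, m, lam))           # each n >= 1 occurs for +/- n
    return modes

k = 2 * np.pi * 2.0e3 / 1300.0     # e.g. f = 2 kHz, c = 1300 m/s (assumed values)
for n, m, lam in propagative_modes(k, R=0.1):
    beta = np.sqrt(k**2 - lam**2)  # real axial wavenumber of a propagative mode
    print(n, m, lam, beta)
```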
Equivalent boundary value problem
Using the continuity of the pressure field and its normal derivative across Γ±, we thus obtain the following equivalent boundary value problem for the pressure field in the bounded domain Ω̂: Δp + k²p = 0 in Ω̂ (12a), ∂p/∂n − (ik/ζ) p = g on Γ_w ∪ Γ_s (12b), and ∂p/∂n = T±(p|_{Γ±}) on Γ± (12c).
Multiplying the Helmholtz equation (12a) across by a test function q ∈ H¹(Ω̂) and integrating by parts, we arrive at the variational (or weak) formulation of (12), which is expressed as follows: find p ∈ H¹(Ω̂) such that a(p, q) = ℓ(q) for all q ∈ H¹(Ω̂), (13) where a(p, q) = ∫_{Ω̂} (k² q̄ p − ∇q̄ · ∇p) dx + ∫_{Γ_p} (ik/ζ) q̄ p ds + ∫_{Γ+} q̄ T+(p) ds + ∫_{Γ−} q̄ T−(p) ds (14a) and ℓ(q) = −∫_{Γ_s} q̄ g ds. (14b) The well-posedness of the variational problem (13) can easily be established following the analysis presented in [9].
Finite element discretization
The discretization of the variational formulation (14) by finite elements is straightforward. We consider a family of regular tetrahedral meshes T_h of the domain Ω̂, such that Ω̂ = ∪_{T∈T_h} T (Ω̂ is assumed to be a tetrahedral domain), where h = max{diam T : T ∈ T_h}. Using standard linear Lagrange elements, the approximate solution p_h of (14) is expressed as p_h = Σ_{i=1}^{N} p_i φ_i, (15) where N is the number of nodes of the mesh and {φ_1, φ_2, …, φ_N} is the nodal basis of the finite-dimensional function space V_h = {q ∈ H¹(Ω̂) : q ∈ C⁰(Ω̂), q|_T ∈ P₁(T) ∀ T ∈ T_h} ⊂ H¹(Ω̂), where C⁰(Ω̂) denotes the set of continuous functions in Ω̂ and P₁(T) denotes the set of polynomials of degree at most one defined on T. A system of equations for the nodal values p_i, i = 1, …, N in (15) is obtained by substituting p by p_h in (14) and taking test functions q_h from the nodal basis of V_h. Doing so, and further replacing the bilinear form a by an approximate bilinear form ã, given by (14a) but with the DtN maps T± in the last two integrals expressed in terms of truncated series representations, we obtain a linear system for the vector of nodal values (p_1, …, p_N)ᵀ. In order to ensure the uniqueness of the solution of the linear system, it suffices to consider truncated series representations of the DtN maps that include all the modes satisfying |λ_{n,m}| ≤ k [14].
Remark 3.1. It is worth mentioning that one of the main advantages of the proposed absorbing boundary conditions over perfectly matched layers (PMLs) lies in the fact that the absorbing boundaries Γ± can be placed arbitrarily close to the region of interest (near the perforations and the transducer), provided that a sufficiently large number of modes is considered in the truncated series representations of the DtN maps. Off-the-shelf PMLs that absorb only propagative modes, on the other hand, would have to be placed far enough away from the region of interest so that all the evanescent modes are sufficiently attenuated, leading to larger computational domains and larger linear systems. Alternatively, PMLs that absorb both propagative and evanescent modes can also be used, provided that the mesh is properly refined to account for the frequency increment within the absorbing layers [16].
Validation
In this section, we present a numerical experiment devised to validate the proposed DtN-FE method.
We thus consider a test geometry consisting of a non-perforated well and a spherical transducer, given by Ω_w = {x = (r cos θ, r sin θ, z) ∈ R³ : r < R} and Ω_s = {x ∈ R³ : |x − y| < δ}, respectively, where Ω_s is centered at a point y ∈ Ω_w and δ > 0 is small enough so that Ω̄_s ⊂ Ω_w. On the spherical surface of the transducer, we prescribe the excitation g = ∂G(·, y)/∂n on Γ_s, (16) where G is the Green's function of the infinite cylinder with homogeneous Neumann boundary conditions, which can be expressed in terms of the Neumann-Laplace eigenfunctions [24], with x = (r cos θ, r sin θ, z) and y = (ρ cos ϑ, ρ sin ϑ, ζ). It is easy to verify, from the definition of the Green's function, that p = G(·, y) (17) is in fact the exact solution of (9) for the test geometry considered. This exact solution (17) is then compared with approximate solutions obtained by means of the DtN-FE method described in Section 3 for various mesh sizes h > 0. In order to compare the exact and the approximate solutions, we define the relative error e_h := ‖Π_h p − p_h‖ / ‖Π_h p‖, (18) where Π_h p denotes the Lagrange interpolation of the exact solution on the tetrahedral mesh T_h. The results of this numerical experiment are presented in Figure 3, which displays the relative numerical errors (18) for the test problem with R = 0.5, y = (0, 0.25, 0) and δ = 0.2. The unbounded computational domain Ω was truncated by introducing artificial boundaries Γ± placed at z = ±H, with H = 1.5. Clearly, the numerical solution converges to the exact solution as the grid size tends to zero, at a rate that is slightly faster than the expected second-order rate.
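As an aside, the empirical rate in a plot like Figure 3 can be estimated by a log-log least-squares fit of error against mesh size; the sketch below uses made-up (h, error) pairs, not the paper's data.

```python
import numpy as np

# Illustrative (h, error) pairs -- placeholders, not the data behind Figure 3.
h = np.array([0.20, 0.10, 0.05, 0.025])
err = np.array([4.1e-2, 9.8e-3, 2.3e-3, 5.4e-4])     # relative errors as in (18)

order, logC = np.polyfit(np.log(h), np.log(err), 1)  # fit err ~ C * h^order
print(f"estimated convergence order: {order:.2f}")   # ~2 expected for P1 elements
```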
Numerical simulations
This section presents numerical simulations of the AWS method modeled in this paper. The values of the relevant physical constants of the fluid and the porous material-needed to evaluate the wavenumber k = ω/c and the surface impedance ζ in (8)-are displayed in Table 1. In detail, the fluid is assumed to be crude oil, with physical constants taken from [2], and the porous rock formation is assumed to be sandstone, with permeability and porosity values obtained from [27]. In order to properly simulate the operation of the AWS method, the excitation g on the surface of the transducer has to be suitably prescribed. For that purpose, the transducer is modeled as a constant-amplitude time-harmonic vibrating surface Γ s with g = 1 N m. A more sophisticated transducer model can be easily incorporated into the simulations by considering more general functions g ∈ H −1/2 (Γ s ).
The generic well configuration to be considered in the simulations is depicted in Figure 4, which includes the definition of the relevant geometrical parameters. Three particular well configurations are initially considered, with specific geometrical parameters provided in Table 2. The 1st, 2nd and 3rd well configurations include N p = 6, 8 and 10 perforations, respectively. The resulting computational domains, which were meshed using Gmsh [8], are shown in Figure 5.
Next, we compute the energy transmitted through the surface of the perforations and the energy emitted by the transducer using formulae (4) and (5), respectively, for a certain range of frequencies f = ω/(2π). In order to find (local) optimal emission frequencies, we look for local maxima of the energy transmission factors defined below [5]. [Table 1 caption fragment: the numerical values of the porous solid constants, on the other hand, were taken from [26].]
We thus define the energy transmission factors Q := E_{Γ_p}/E_s (19a) and Q_j := E_{Γ_p^j}/E_s (19b), respectively, which are dimensionless quantities, as functions of the excitation frequency f = ω/(2π). The pressure field p in (19a) and (19b) corresponds to the solution of (12), and Γ_p^j ⊂ Γ_p, j = 1, …, N_p, denotes the surface of the j-th perforation (the perforations are sorted from top to bottom). Note that, by virtue of the principle of conservation of energy and the fact that 0 ≤ Q ≤ 1, the quantity 100 × Q corresponds to the percentage of energy effectively transmitted to the porous reservoir rock through the perforations. [Table 2 caption: Geometrical parameters of the well configurations considered. The dimensions of the transducer were selected according to the device described in [20]; the dimensions of a perforated well, on the other hand, were taken from [11].] Figure 6 displays the consolidated energy transmission factor Q as a function of the excitation frequency f = ω/(2π) for the three well configurations laid out in Table 2 and Figure 5. In these numerical simulations, the transducer is placed exactly at the center of the perforated domain. Sharp peaks of the energy transmission factor Q, many of them reaching values close to the upper bound Q = 1, are observed at various frequencies for the well configurations considered (e.g., the peaks around f = 0.895, 1.585, 2.79, 3.695 and 5.525 kHz). The existence of these peaks is explained by resonance phenomena taking place inside the perforations. As illustrated by the pressure field at a peak frequency displayed in Figure 7, the factor Q attains its local maxima at "resonance" frequencies, for which the associated pressure field exhibits inordinately large amplitudes inside the perforations.
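A sketch of the peak-finding step follows; in the paper Q(f) is obtained by solving (12) frequency by frequency, whereas here a synthetic Q(f) with two artificial resonances stands in for that computation.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic Q(f): two artificial resonances near the peak locations cited above.
f = np.linspace(0.5e3, 6.0e3, 2000)                   # scanned frequencies [Hz]
Q = np.exp(-((f - 895.0) / 30) ** 2) + 0.9 * np.exp(-((f - 3695.0) / 40) ** 2)

peaks, _ = find_peaks(Q, height=0.5, distance=50)     # height/spacing thresholds
for i in peaks:
    print(f"candidate optimal frequency: {f[i] / 1e3:.3f} kHz (Q = {Q[i]:.2f})")
```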
The strong correlation between the locations of the peaks for the various well configurations observed in Figure 6, on the other hand, can be explained by the resonant frequencies of an individual perforation. In fact, large values of Q_j are expected to occur at the resonant frequencies of the j-th perforation. Since the same perforation radius, the same perforation length and the same location of the transducer are utilized in the three configurations considered, all the perforations are expected to resonate collectively at approximately the same frequency. Therefore, the factors Q_j, j = 1, …, N_p simultaneously attain local maxima at these "resonance" frequencies. To look into this in more detail, we present Figure 8, which displays the individual factors Q_j, j = 1, …, N_p, where it can be clearly observed that the factors Q_j collectively attain local maxima at certain frequencies that indeed correspond to the largest peak values of Q observed in Figure 6. Although the simulation results presented above seemingly indicate the existence of optimal frequencies for which nearly 100% (Q ≈ 1) of energy transmission is achieved, in practice, uncertain variations in the shape of the perforations might result in an overall reduction of the peak values of Q. To briefly study the effect of small shape variations on the location of the local maxima of Q, we consider perturbations of the three aforementioned configurations, which are generated by introducing random changes in the perforation radius, the perforation length and the location of the transducer. Figure 9 displays the Q factors obtained for the new well configurations, where a much weaker correlation between the locations of the peak values can be observed, as compared to the results presented in Figure 6. This weaker correlation is further explained by the results displayed in Figure 10, which show that, as expected, the factors Q_j, j = 1, …, N_p do not attain their local maxima at the same frequencies. Despite this fact, remarkably large peak values of Q (Q ≈ 1) are still observed. Nearly perfect transmission is achieved in this case by excitation of "resonant" frequencies associated with just a few perforations, for which the local energy transmission factors Q_j lie well above 50% (e.g., the plot at the top of Figure 10, corresponding to the first well configuration, around f = 3.24 kHz, where Q_1 = 0.81). Figure 11 displays the pressure field at one of the peak values of Q, where large pressure amplitudes inside some of the perforations are again observed. We thus finally conclude that, in principle, it would be possible to achieve nearly complete energy transmission into the formation even in the presence of such geometrical uncertainties.
Concluding remarks
A mathematical model, based upon the Helmholtz equation and the use of a suitable impedance boundary condition, and a DtN-FE procedure are presented for the numerical simulation of an AWS method. The existence of optimal emission frequencies, associated with acoustic resonance phenomena, is demonstrated by means of numerical simulations for a variety of realistic well configurations. We believe that the proposed methodology and the numerical results presented in this work provide valuable information for the design and optimization of AWS methods, as their performance can be significantly improved by properly selecting the operating frequencies of the AWS device (transducer).
Appendix A: White's wall impedance model
Let us consider a circular cylinder of radius r_0 > 0 which is assumed to be filled with a liquid and surrounded everywhere by a porous material of unbounded extent. The pressure P and the average flow velocity V in the radial direction are related by Darcy's law V = −(κ/η) ∂P/∂r, (20) where η denotes the shear viscosity of the fluid and κ denotes the permeability of the porous material (the flow in (20) is assumed to be purely radial). The linearized equation of state and the conservation of mass, where B denotes the bulk modulus of the fluid in the pore space and φ denotes the porosity, lead to ∂V/∂r + V/r = −(φ/B) ∂P/∂t. (21) Combining equations (20) and (21) yields the radial diffusion equation ∂²P/∂r² + (1/r) ∂P/∂r = m ∂P/∂t, where m = φη/(κB). Further assuming that the velocity and pressure fields in the porous material are time-harmonic, i.e., P(r, t) = Re{p(r) e^{−iωt}} and V(r, t) = Re{v(r) e^{−iωt}}, we obtain that the pressure amplitude p satisfies the Bessel differential equation d²p/dr²(r) + (1/r) dp/dr(r) + iωm p(r) = 0, r > r_0.
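As a quick sanity check, the radial equation just derived can be verified numerically against the Hankel-function solution that appears in White's impedance formula (6); the parameter values below are arbitrary placeholders.

```python
import numpy as np
from scipy.special import hankel1

omega, m = 2 * np.pi * 1e3, 4.0e-3     # arbitrary placeholder values
s = np.sqrt(1j * omega * m)
r = np.linspace(0.2, 1.0, 5)
h = 1e-6                               # finite-difference step in r

p = hankel1(0, s * r)
dp = (hankel1(0, s * (r + h)) - hankel1(0, s * (r - h))) / (2 * h)
d2p = (hankel1(0, s * (r + h)) - 2 * p + hankel1(0, s * (r - h))) / h**2

residual = d2p + dp / r + 1j * omega * m * p   # should vanish analytically
print(np.max(np.abs(residual)))                # ~0 up to finite-difference error
```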
Looking for bounded outgoing-wave solutions at infinity fulfilling the boundary condition p(r_0) = p_0 at the interface between the fluid and the porous material (r = r_0), we arrive at p(r) = p_0 H_0^{(1)}(√(iωm) r) / H_0^{(1)}(√(iωm) r_0), (22) where H_0^{(1)} denotes the Hankel function of the first kind and order zero [1]. From Darcy's law (20), on the other hand, we obtain that the velocity amplitude v is given by v(r) = −(κ/η) dp/dr(r) = (κ/η) √(iωm) p_0 H_1^{(1)}(√(iωm) r) / H_0^{(1)}(√(iωm) r_0). (23) Combining (22) and (23), it is straightforward to evaluate the wall impedance Z, which is defined as the quotient of the pressure amplitude p to the radial velocity amplitude v at the surface of the cylinder, that is, Z = p(r_0)/v(r_0) = (η/(κ√(iωm))) H_0^{(1)}(√(iωm) r_0) / H_1^{(1)}(√(iωm) r_0). | 2017-05-05T12:06:03.000Z | 2017-05-05T00:00:00.000 | {
"year": 2017,
"sha1": "a330d0c6e82e96be9bdcc087c255dd10aedf09b6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1705.02182",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a330d0c6e82e96be9bdcc087c255dd10aedf09b6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics",
"Geology"
]
} |
262383371 | pes2o/s2orc | v3-fos-license | Identification of TLRs as potential prognostic biomarkers for lung adenocarcinoma
Lung adenocarcinoma (LUAD) is one of the most common tumors with the highest cancer-related death rate worldwide. Early diagnosis of LUAD can improve survival. Abnormal expression of the Toll-like receptors (TLRs) is related to tumorigenesis and development, inflammation and immune infiltration. However, the role of TLRs as an immunotherapy target and prognostic marker in lung adenocarcinoma is not well understood and needs to be analyzed. Relevant data were obtained from databases such as ONCOMINE, UALCAN, GEPIA, the Kaplan-Meier plotter, GSCALite, GeneMANIA, DAVID 6.8, Metascape, LinkedOmics and TIMER to compare TLR transcription and survival data of patients with LUAD. The expression levels of TLR1/2/3/4/5/7/8 in LUAD tissues were significantly reduced, while the expression levels of TLR6/9/10 were significantly elevated. LUAD patients having low expression of TLR1/2/3/5/8 and high expression of TLR9 had poor overall survival, while patients with low expression of TLR2/3/7 presented with worse first progression. TLR4, TLR7 and TLR8 are the 3 most frequently mutated genes in the TLR family. Correlation analysis suggested a low to moderate correlation among the TLR family. The TLR family was also involved in the activation or inhibition of well-known cancer-related pathways. Analysis of immune infiltrates suggested that TLR1/2/7/8 levels significantly correlated with immune infiltration level. Enrichment analysis revealed that TLRs were involved in the TLR signaling pathway, immune response, inflammatory response, primary immunodeficiency, regulation of IL-8 production and the PI3K-Akt signaling pathway. Our results provide information on TLR expression and potential regulatory networks in LUAD. Moreover, our results suggest TLR2/7/8 as potential prognostic biomarkers for lung adenocarcinoma.
Introduction
Lung cancer is a common malignancy with the highest cancer-related death rate [1]. It is the leading cause of cancer death in men and the second leading cause in women. Morbidity and mortality due to lung cancer are highest in high-income countries [2]. The 5-year survival rate for lung cancer is 19%, and it is estimated that lung cancer will account for 13% of all new cancer cases in the United States in 2019 [3]. There is growing evidence that while the 1-year survival rate for stage I patients is 81% to 85%, it declines sharply to 15% to 19% in stage IV patients, and most patients (approximately 75%) are already at an advanced stage (stage III/IV) at the time of diagnosis [4,5]. Lung cancer is mainly divided into small cell lung cancer (SCLC, about 15% of cases) and non-small cell lung cancer (NSCLC, about 85% of cases). The main histological subtypes of NSCLC are adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC). Approximately 38.5% of lung cancers are lung adenocarcinomas, and the incidence has been increasing over recent years [6]. Thus, there is an urgent need to explore and develop effective biomarkers for early detection and diagnosis of lung adenocarcinoma.
Toll-like receptors (TLRs) are a family of transmembrane receptors which recognize pathogen- and damage-related molecular pattern molecules and play an important role in inflammation and the immune response [7]. The human TLR family consists of TLR1 through TLR10 [8]. They are widely expressed both on resident lung cells and on infiltrating cells of myeloid and lymphoid origin, connecting the innate to the adaptive immune system [9]. As sensors of lung cancer cells, TLRs may promote growth, angiogenesis and invasion of lung cancer cells and regulate the behavior of cancer stem cells (CSCs) [10]. TLR agonists have established therapeutic benefits as anti-cancer agents that activate immune cells in the tumor microenvironment and facilitate the expression of cytokines, which allows for infiltration of anti-tumor lymphocytes and the suppression of oncogenic signaling pathways [11]. In breast cancer, TLR4 and TLR7 acted as prognostic biomarkers and were associated with poor prognosis [12]. High TLR10 levels indicated shorter overall survival (OS) and disease-free survival (DFS) in glioma [13]. Moreover, a TLR-based gene signature could predict the prognosis of hepatocellular carcinoma patients [14]. Presently, studies have described a general expression profile and mechanism of TLRs in LUAD. However, the selection of appropriate TLRs as biomarkers for the treatment and prognosis of LUAD is still not clearly understood.
We systematically analyzed the expression levels and prognostic value of TLRs in LUAD based on multiple large databases in the current study. In addition, correlation, cancer-related pathway, drug sensitivity, immune infiltrate and functional analyses of the TLR family in LUAD were also performed. The results of our analysis may reveal potential new biomarkers for the diagnosis and treatment of LUAD.
Oncomine
ONCOMINE (www.oncomine.org/) is a cancer microarray database and web-based data-mining platform designed to facilitate discovery from genome-wide expression analyses [15]. It has become an industry-standard tool with 700+ independent datasets. In the current study, the expression of TLRs in LUAD was evaluated by extracting mRNA data. Significance thresholds were set as P < .05, a 1.5-fold change and a rank within the first 10% of genes.
UALCAN
UALCAN (http://ualcan.path.uab.edu) is an interactive portal for in-depth analysis of TCGA gene expression data. This resource can be used as a platform for validating target genes and identifying tumor-subgroup-specific biomarkers [16]. The expression data of TLRs in our study were obtained by using the "expression analysis" module of UALCAN. The significance threshold was set as P < .05.
GEPIA
GEPIA (http://gepia.cancer-pku.cn/) is an interactive web server developed by Peking University for analyzing RNA sequencing expression data from the TCGA and GTEx projects. The customizable functions provided by GEPIA, such as tumor/normal differential expression analysis and Pearson correlation analysis, were applied in this study [17].
[Figure 1 caption: The transcription levels of TLR family members in different types of cancers (ONCOMINE). The graph shows the numbers of datasets with statistically significant mRNA over-expression (red) or down-regulated expression (blue) of the target gene. The threshold was designed with the following parameters: P value of .05 and fold change of 1.5. TLRs = toll-like receptors.]
The Kaplan-Meier plotter
Based on multiple databases (GEO, EGA and TCGA), the Kaplan-Meier plotter (www.kmplot.com) is able to assess the impact of 54k genes on survival in 21 cancer types. The lung cancer data set includes 3452 cases. The system includes gene chip and RNA-seq data sources. The primary purpose of the tool is to identify and validate survival markers based on meta-analysis [18]. In this study, the prognostic value of the mRNA expression of TLRs was evaluated by using the Kaplan-Meier plotter. Overall survival (OS), first progression and post-progression survival of patients with LUAD were determined by dividing the patient samples into 2 groups based on median expression (high vs. low expression) and assessing them using a Kaplan-Meier survival plot, with a hazard ratio with 95% confidence intervals and a log-rank P value.
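The sketch below illustrates, under stated assumptions, the kind of median-split analysis the Kaplan-Meier plotter performs; the cohort here is randomly generated placeholder data, and the lifelines package stands in for the web tool.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "expr": rng.lognormal(size=n),           # placeholder TLR mRNA levels
    "time": rng.exponential(40, size=n),     # follow-up time (months)
    "event": rng.integers(0, 2, size=n),     # 1 = death observed, 0 = censored
})
df["high"] = (df["expr"] > df["expr"].median()).astype(int)   # median split

lo, hi = df[df["high"] == 0], df[df["high"] == 1]
res = logrank_test(lo["time"], hi["time"],
                   event_observed_A=lo["event"], event_observed_B=hi["event"])
print("log-rank P =", res.p_value)

cph = CoxPHFitter().fit(df[["time", "event", "high"]],
                        duration_col="time", event_col="event")
print("HR (high vs low) =", float(np.exp(cph.params_["high"])))

kmf = KaplanMeierFitter()
for label, grp in [("low", lo), ("high", hi)]:
    kmf.fit(grp["time"], grp["event"], label=label)   # curves for a KM plot
```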
GSCALite
GSCALite (http://bioinfo.life.hust.edu.cn/) is a web-based analysis platform for gene set cancer analysis [19]. The portal is user-friendly; it allows single nucleotide variation (SNV), cancer pathway activity and drug sensitivity analyses of the TLR family in LUAD.
LinkedOmics
LinkedOmics (http://www.linkedomics.org) is a publicly available portal which includes multi-omics data from all 32 TCGA cancer types [20]. The web application has 3 analysis modules: LinkFinder, LinkInterpreter and LinkCompare. We used the first 2 analysis modules in the current study. Differentially expressed genes correlated with TLRs were explored in the TCGA LUAD cohort (n = 515) using Pearson's correlation coefficient. Volcano and statistical plots for individual genes were created by the LinkFinder module. Enrichment analysis (kinase-target enrichment, miRNA-target enrichment and transcription factor-target enrichment) of TLRs in LUAD with GSEA was performed using the LinkInterpreter module. The rank criterion was an FDR < 0.05, and 500 simulations were performed.
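A hedged sketch of a LinkFinder-style correlation screen follows; the expression matrix is random placeholder data standing in for the TCGA LUAD cohort, and the Benjamini-Hochberg step is one reasonable way to implement the FDR criterion mentioned above.

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_samples, n_genes = 515, 1000                  # cohort-sized placeholder matrix
expr = rng.normal(size=(n_samples, n_genes))    # genes in columns
tlr = expr[:, 0] + 0.5 * rng.normal(size=n_samples)   # query-gene profile

r, p = zip(*(pearsonr(tlr, expr[:, g]) for g in range(n_genes)))
reject, q, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")  # BH FDR
print("genes passing FDR < 0.05:", int(reject.sum()))
# (r, -log10 q) pairs are the coordinates a volcano plot like Fig. 7 displays
```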
DAVID 6.8
DAVID (https://david.ncifcrf.gov/home.jsp) provides investigators with a comprehensive set of integrated biological knowledgebases and analytic tools to understand the biological meaning behind large lists of genes [21]. The Gene Ontology (GO) enrichment analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis were performed using DAVID 6.8, and the results were visualized with the R project using the "ggplot2" package, with a P value of .5 as the threshold. Biological processes, cellular components (CC) and molecular functions were included in the GO enrichment analysis.
GeneMANIA
GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface using a very large set of functional association data. Association data include protein and genetic interactions, pathways, co-expression, co-localization and protein domain similarity [22].
Metascape
Metascape (http://metascape.org) is a web-based portal designed to provide a comprehensive gene list annotation and analysis resource for experimental biologists [23]. The "Express Analysis" module was used for enrichment analysis of TLR family members and neighboring genes in this study.
TIMER
The TIMER web server (https://cistrome.shinyapps.io/timer/) is a Tumor Immune Estimation Resource for systematic analysis of immune infiltrates across diverse cancer types [24]. In our study, the "Gene module" was used to evaluate the correlation between TLR levels and the 6 immune infiltrates (B cells, CD4+ T cells, CD8+ T cells, neutrophils, macrophages and dendritic cells) in LUAD. The "SCNA module" was used for comparing tumor infiltration levels between tumors having different somatic copy number alterations for a given gene.
Prognostic analysis of the TLR family in patients with LUAD
Correlation between differentially expressed TLRs and clinical prognostic value was evaluated using the Kaplan-Meier plotter. Regrettably, background data are not available for the different groups of patients. The results are shown in Figure 4A-I: low expression of TLR1/2/3/5/8 and high expression of TLR9 were associated with worse overall survival, and low expression of TLR2/3/7 (e.g., P = .0471) was associated with a worse first progression. There was no evident correlation of the differential expression of the TLR family with post-progression survival. In order to identify the independent factors affecting the prognosis of LUAD patients, we also performed univariate and multivariate analyses. The data suggested that TLR2 and TLR3 expression were independent factors affecting the prognosis of LUAD patients (Fig. 5A and B). Therefore, we suggest that TLR2 and TLR3 are potential prognostic biomarkers in LUAD.
The results for the TLR family in well-known cancer-related pathways are shown in Figure 6C. The TLR family plays a significant role in LUAD, mainly by activating the apoptosis pathway, EMT pathway, hormone ER pathway, RAS/MAPK pathway and RTK pathway, and by inhibiting the cell cycle pathway, DNA damage response pathway and hormone AR pathway.
Results from the gene set drug resistance analysis based on GDSC IC50 drug data are shown in Figure 6D. Low expression of TLR10, TLR1, TLR7, TLR8 and TLR2 is related to resistance to 40, 36, 35, 15 and 12 drugs or small molecules, respectively, in LUAD patients.
Functional enrichment analyses of co-expressed genes correlated with TLRs in LUAD
The mRNA sequencing data from 515 LUAD patients in the TCGA were analyzed using the Function module of LinkedOmics. We recorded the 10 genes with the highest Pearson correlation coefficients as neighboring genes, as shown in the volcano plots (Fig. 7A-J). DAVID 6.8 and Metascape were utilized to analyze the functions of the differentially expressed TLRs and their neighboring genes. Enrichment results for the top 10 genes altered in the TLR family members and neighboring genes are displayed in the heatmap (Fig. 8), which shows the 10 most highly enriched GO items and KEGG items obtained using DAVID 6.8 and visualized with the R project. The GO-biological processes analysis (Fig. 8A) suggested that immune response, inflammatory response and the TLR signaling pathway were associated with the tumorigenesis and progression of LUAD. Plasma membrane, MHC class II protein complex, cell surface, integral component of membrane, luminal side of endoplasmic reticulum membrane, endolysosome membrane, transport vesicle membrane and clathrin-coated endocytic vesicle membrane were the most highly enriched items in the GO-cellular components category (Fig. 8B). The GO-molecular function analysis (Fig. 8C) revealed that the differentially expressed TLRs and their neighboring genes were mainly enriched in transmembrane signaling receptor activity, receptor activity, MHC class II receptor activity, protein complex binding, peptide antigen binding, lipopeptide binding, TLR2 binding, lipopolysaccharide receptor activity and double-stranded RNA binding. Furthermore, the KEGG pathway data in Figure 8D suggested that the functions of TLRs in LUAD were mainly enriched in the TLR signaling pathway and antigen processing and presentation.
We used GeneMANIA in the current study to construct a protein-protein interaction network of TLR family members and neighboring genes and to perform functional analysis (Fig. 9). The functions were mainly enriched in regulation of cell activation and lymphocyte activation, positive regulation of the cytokine biosynthetic process, the immune response-activating cell surface receptor signaling pathway, the antigen receptor-mediated signaling pathway and the adaptive immune response based on somatic recombination of immune receptors built from immunoglobulin superfamily domains.
The results were verified using the functional enrichment analysis results obtained from Metascape (Fig. 10; Table S3, Supplemental Digital Content, http://links.lww.com/MD/J712 and Table S4, Supplemental Digital Content, http://links.lww.com/MD/J713). The top 20 GO items for the TLRs and their neighboring genes are presented in Figure 10A and B and Table S3, Supplemental Digital Content, http://links.lww.com/MD/J712. The TLR family members and their neighboring genes were mainly enriched in the immune response-regulating signaling pathway, regulation of cytokine production and the TLR signaling pathway. The top 6 KEGG pathways for the TLRs and their neighboring genes are presented in Figure 10C and D and Table S4, Supplemental Digital Content, http://links.lww.com/MD/J713. The functions of the differentially expressed TLRs and their neighboring genes were mainly enriched in the TLR signaling pathway, primary immunodeficiency and the PI3K-Akt signaling pathway. We performed a protein-protein interaction enrichment analysis for a better understanding of the correlation between the differentially expressed TLRs and LUAD (Fig. 10E and F). The protein-protein interaction network revealed that the biological functions were most associated with the TLR signaling pathway and regulation of IL-8 production.
Immune infiltrate analysis of TLR1/2/7/8 in LUAD
The above results confirmed that the expression of TLR1/2/7/8 is reduced in LUAD and that these receptors are involved in the activation and suppression of well-known cancer-related pathways as well as in drug resistance. Hence, we concluded that TLR1/2/7/8 may be potential prognostic biomarkers and major players in the pathogenesis and progression of LUAD. Previous studies have mentioned that TLRs play an important role in the innate immune response [29]. Hence, we analyzed the correlation between TLR1/2/7/8 and LUAD immune infiltration. There was a positive correlation between TLR1 expression and the infiltration of B cells (Cor = 0.467). We also analyzed the role of somatic copy number alterations (SCNA) of TLR1/2/7/8 in immune infiltrates. As expected, SCNA of TLR1/2/7/8 commonly inhibited the immune infiltrates, which included CD8+ T cells, neutrophils, dendritic cells, macrophages, CD4+ T cells and B cells (Fig. 11E).
Sub-group analysis of TLR1/2/7/8 transcription in patients with LUAD
The results of the further subgroup analysis are shown in Figure 12. A total of 515 LUAD samples in TCGA with various clinicopathological features consistently showed low TLR1/2/7/8 transcription. In the subgroup analysis, TLR2/7/8 transcription levels were significantly lower in LUAD patients than in healthy subjects across subgroups defined by gender, age, race, stage of disease, smoking habits and status of lymph node metastasis (Fig. 12A and D). Therefore, TLR2/7/8 expression may be a potential diagnostic indicator in LUAD.
Discussion
TLRs, mainly expressed by innate immune cells, are involved in inducing and regulating adaptive immune responses [7]. Based on the hypothesis that cancerous cells evade the immune system, TLR activation may induce Th1-like and cytotoxic immunity via the TLR signaling pathway, which in turn allows anti-tumor lymphocyte infiltration and inhibits oncogenic signaling pathways. The inhibition of oncogenic signaling pathways leads to tumor cell death and ultimately to regression or stagnation of the tumor [9,11]. The studies above suggested that TLRs may be associated with tumor immunotherapy. However, the TLR family as a whole had never been studied in LUAD. Hence, we attempted to investigate the prognostic value and biological function of TLRs in LUAD in the current study. Expression analysis revealed that the levels of TLR1/2/3/4/5/7/8 were decreased in LUAD while the levels of TLR6/9/10 were increased. LUAD patients having low expression of TLR1/2/3/5/8 and high expression of TLR9 presented with poorer overall survival, while patients having low expression of TLR2/3/7 presented with poorer first progression. Moreover, TLR2 and TLR3 expression were independent factors affecting the prognosis of LUAD patients. These results are consistent with previous studies. A study by Bianchi F et al. also observed that expression of the TLR3 protein in NSCLC was associated with good OS [30]. Another study showed that high expression of TLR5 was associated with a better prognosis in NSCLC [31]. Therefore, we suggest that TLR2/3 may act as potential prognostic biomarkers in patients with LUAD.
Cancer-related pathway analysis revealed that the TLR family plays significant roles in LUAD, mainly by activating the apoptosis pathway, EMT pathway, hormone ER pathway, RAS/MAPK pathway and RTK pathway. It also inhibits the cell cycle pathway, DNA damage response pathway and hormone AR pathway [34]. The migration ability of NSCLC is related to TLR4 and the MAPK signaling pathway [35,36]. Thus, the pathogenesis and progression of LUAD may be regulated by the TLR family via the pathways mentioned above.
The mRNA levels of TLRs were significantly differentially expressed in LUAD. We found a low to medium correlation among the differentially expressed TLRs. This suggests that a synergistic role is played by these receptors in the tumorigenesis and progression of LUAD.
We performed GO function and KEGG pathway analyses. These revealed that TLRs were involved in the TLR signaling pathway, immune response, inflammatory response, primary immunodeficiency, regulation of IL-8 production and the PI3K-Akt signaling pathway. Previous studies have demonstrated that these functions and pathways are associated with carcinogenesis and progression. Activation of the TLR-3 signaling pathway promoted proliferation and invasion of NSCLC cells [37]. An important role is played by TLRs in inducing the inflammatory response in the innate immune system. TLRs play an important role in the immune response against tumor cells due to a definitive connection between chronic inflammation and cancer [10]. A study by Zhang et al [38] suggested that activation of the PI3K/Akt signaling pathway induced IL-8 expression and promoted in vitro angiogenesis. Thus, TLRs may affect the pathogenesis and progression of LUAD via the signaling pathways mentioned above.
TLRs are pattern recognition receptors expressed by cells of the innate immune system [39]. We observed in our study that the levels of TLR1/2/7/8 significantly correlated with the immune infiltration level, and also that SCNA of TLR1/2/7/8 could generally inhibit the immune infiltrates. Liu et al [40] suggested that tumors which lacked memory B cells or had an increased number of M0 macrophages were associated with poor prognosis in early-stage LUAD. Tumor-infiltrating immune cells in lung cancer may be an important determinant of prognosis and response to immunotherapy [41]. Some immune gene biomarkers have been reported to be therapeutic targets and to play a significant role in LUAD. Multiple antibody inhibitors of PD-1/PD-L1 and CTLA-4 have shown efficacy as first-, second- and even third-line treatment in patients with NSCLC [42]. These results suggest that TLRs may be used as prognostic indicators and therapeutic targets for tumor immunotherapy.
On analyzing the transcription factor targets, kinase targets and miRNA targets of the differentially expressed TLR1/2/7/8 in LUAD, we found that T cell antigen receptor (TCR) activation is associated with phosphorylation of SYK, LCK, LYN and other tyrosine kinases [43,46,47]. Interestingly, the role of the transcription factor target IRF family in the regulation of the cell cycle and apoptosis, as well as its important function in immune response, immune cell development and the regulation of tumorigenesis, has been emphasized [48,49]. Thus, TLR1/2/7/8 may regulate the cell cycle, apoptosis and the immune response in LUAD via these transcription factor targets, kinase targets and miRNA targets.
Our study has some limitations. First, it discusses changes at the gene level and not at the protein level. Second, the sample data are essentially derived from the TCGA database, and it would be advisable to use other databases to verify the conclusions. Moreover, further studies should be performed to verify our results.
Conclusion
Our results provide information on TLR expression and potential regulatory networks in LUAD. Moreover, our results suggest TLR2/7/8 as potential prognostic biomarkers for lung adenocarcinoma.
Figure 3.
Figure 3. The expression of TLR family members in LUAD patients. (A) The relative levels of TLR family members in LUAD. (B) Pearson correlation of TLR family members. LUAD = lung adenocarcinoma, TLRs = toll-like receptors.
Figure 4.
Figure 4. Kaplan-Meier plotter reveals the overall survival differences based on mRNA levels of TLR family members in LUAD patients. (A-E) LUAD patients with decreased expression of TLR1/2/3/5/8 presented with worse overall survival (OS) (P < .05). (F) LUAD patients with over-expression of TLR9 presented with worse overall survival (P < .05). (G-I) Kaplan-Meier plotter reveals the first progression differences based on mRNA levels of TLR family members in LUAD patients. LUAD patients with decreased expression of TLR2/3/7 presented with worse first progression (FP) (P < .05). LUAD = lung adenocarcinoma, TLRs = toll-like receptors.
Figure 5.
Figure 5. The results of the univariate and multivariate analyses. (A) Univariate analysis of TLR2 and TLR3 expression in LUAD patients. (B) Multivariate analysis of TLR2 and TLR3 expression in LUAD patients. LUAD = lung adenocarcinoma, TLRs = toll-like receptors.
Figure 6.
Figure 6. Gene landscape and drug sensitivity analysis. (A and B) Summary of alterations of TLR family members in LUAD. (C) Pathway activity of TLR family members in LUAD. (D) Drug sensitivity between tumor and normal samples of TLR family members in LUAD. LUAD = lung adenocarcinoma, TLRs = toll-like receptors.
Figure 8.
Figure 8. Enrichment analysis of the genes altered in the TLR family members and neighboring genes. The heat maps display the enrichment results of the top 10 genes altered in the TLR family members and neighboring genes: the enriched terms in the biological processes (A), cellular components (B), molecular functions (C) and KEGG pathway (D) analyses. KEGG = Kyoto Encyclopedia of Genes and Genomes, TLRs = toll-like receptors.
Figure 9.
Figure 9. Protein-protein interaction network of TLR family members and neighboring genes (GeneMANIA). Protein-protein interaction (PPI) network and functional analysis indicating the gene set that was enriched in the target network of TLR family members and neighboring genes. Different colors of the network edges indicate the bioinformatics methods applied: co-expression, pathway, website prediction, co-localization, shared protein domains, physical interactions and genetic interactions. The different colors of the network nodes indicate the biological functions of the set of enriched genes. TLRs = toll-like receptors.
Figure 10.
Figure 10. The enrichment analysis of TLR family members and neighboring genes (Metascape). (A) Heat map of Gene Ontology (GO) enriched terms colored by P values. (B) Network of GO enriched terms colored by P value, where terms containing more genes tend to have a more significant P value. (C) Heat map of Kyoto Encyclopedia of Genes and Genomes (KEGG) enriched terms colored by P values. (D) Network of KEGG enriched terms colored by P value, where terms containing more genes tend to have a more significant P value. (E) Protein-protein interaction (PPI) network and the 4 most significant MCODE components from the PPI network. (F) Independent functional enrichment analysis of 3 MCODE components. TLRs = toll-like receptors. | 2023-09-26T06:17:07.781Z | 2023-09-22T00:00:00.000 | {
"year": 2023,
"sha1": "93fa1642e5a074a36ed00e9c09819ef6a50c7401",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "92172dd37f0ce14205023b9ec0ee3804c48b6aaf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219537886 | pes2o/s2orc | v3-fos-license | ENDOCRINOLOGY IN THE TIME OF COVID-19: Remodelling diabetes services and emerging innovation
The COVID-19 pandemic is a major international emergency leading to unprecedented medical, economic and societal challenges. Countries around the globe are facing challenges with diabetes care and are similarly adapting care delivery, with local cultural nuances. People with diabetes suffer disproportionately from acute COVID-19, with higher rates of serious complications and death. In-patient services need specialist support to appropriately manage glycaemia in people with known and undiagnosed diabetes presenting with COVID-19. Due to the restrictions imposed by the pandemic, people with diabetes may suffer longer-term harm caused by inadequate clinical support and less frequent monitoring of their condition and diabetes-related complications. Outpatient management needs to be reorganised to maintain remote advice and support services, focusing on proactive care for those at highest risk, and using telehealth and digital services for consultations, self-management and remote monitoring, where appropriate. Stratification of patients for face-to-face or remote follow-up should be based on a balanced risk assessment. Public health and national organisations have generally responded rapidly with guidance on care management, but the pandemic has created a tension around the prioritisation of communicable vs non-communicable disease. The resulting challenges in clinical decision-making are compounded by a reduced clinical workforce. For many years, increasing diabetes mellitus incidence has been mirrored by rising preventable morbidity and mortality due to complications, yet innovation in service delivery has been slow. While the current focus is on limiting the terrible harm caused by the pandemic, it is possible that a positive lasting legacy of COVID-19 might include accelerated innovation in chronic disease management.
Foreword
This publication was created through collaborative working between UK and international diabetes leaders and experts. It represents the situation as of 12 April 2020, 4 weeks post 'lock-down', at which time 10 621 people had died of COVID-19 in the UK (1) and 99 690 internationally (2). The current UK Government advice to people with diabetes is to follow the general public 'stay at home' guidance. Shielding/complete self-isolation is not currently stipulated for diabetes, unlike for very-high-risk individuals with severe respiratory illnesses or compromised immunity. This advice is largely mirrored internationally.
Diabetes and COVID-19
COVID-19 has resulted in the biggest disruption to healthcare delivery in living memory. New policy and healthcare working practices have been rapidly introduced. This article focuses on changes to diabetes care delivery during the pandemic. Currently it is unclear whether people with diabetes are at higher risk of contracting COVID-19. However, they are clearly at higher risk of poor outcomes once infected. Among 7162 US cases reported by the CDC (28 March), the percentage of COVID-19 patients with at least one underlying health condition (e.g. diabetes) was nearly three-fold higher among those requiring (1) intensive care unit admission (78%) and (2) hospitalisation (71%) compared to people not hospitalised (27%) (3,4). In early reports, people with diabetes also had a ~2-3 fold increased mortality following COVID-19 infection (4,5). Age, gender, multimorbidity, low socioeconomic status and degree of pathogen exposure are risk factors for disease severity, but it is not yet clear which are independent contributors (5,6,7,8,9).
Acute care for individual people with diabetes and COVID-19
Acute illness suspected or confirmed to be due to COVID-19 may require modification of current guidelines, particularly for safe use by staff unfamiliar with diabetes management, to prevent hypoglycaemia and severe hyperglycaemia (10). Guidance to emergency/admissions departments should include glucose measurement on all admissions, as a significant number of COVID-19-positive patients not previously known to have diabetes present with marked hyperglycaemia. Additionally, ketones should be checked both in everyone with known diabetes and in those without known diabetes who present with a blood glucose above 12 mmol/L. Anecdotal reports suggest unusual presentations of diabetic metabolic emergencies, including diabetic ketoacidosis (DKA) or mixed DKA and hyperosmolar hyperglycaemic state (HHS) in type 2 diabetes, with the risk of DKA being greater in those on SGLT-2 inhibitors. SGLT-2 inhibitors and metformin should therefore be stopped in all patients on acute presentation, given their potential association with metabolic emergencies and with acute kidney injury (AKI). Figure 1 gives useful 'Front Door' guidance for individual patients based on experience from UK centres (https://www.diabetes.org.uk/resources-s3/public/2020-04/COvID_Front_Door_v1.0.pdf). Non-COVID-19-related DKA and HHS should be managed using standard protocols, and additional support should be implemented to reduce admissions in known high-risk individuals (see subsequently).
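For illustration only, the screening rules just described can be written out as a small decision helper; the function below is a hypothetical teaching sketch that encodes the thresholds from the text, not a validated clinical tool.

```python
def front_door_actions(known_diabetes: bool, glucose_mmol_l: float,
                       on_sglt2i: bool, on_metformin: bool) -> list[str]:
    """Hypothetical helper encoding the screening rules described in the text."""
    actions = ["measure glucose on admission"]
    if known_diabetes or glucose_mmol_l > 12:          # threshold from the text
        actions.append("check ketones (screen for DKA / mixed DKA-HHS)")
    if on_sglt2i:
        actions.append("stop SGLT-2 inhibitor (DKA and AKI risk)")
    if on_metformin:
        actions.append("stop metformin (AKI risk)")
    return actions

print(front_door_actions(known_diabetes=False, glucose_mmol_l=14.2,
                         on_sglt2i=True, on_metformin=True))
```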
Diabetes-related population health risks during the COVID-19 pandemic
In the general population, restrictions on travel and person-to-person contact may lead to worsening of risk factors for complications and poorer health outcomes from established diabetes and rising diabetes incidence through:
Changes in lifestyle
• More sedentary behaviour/less activity
• Lack of access to healthy foods
• Lack of face-to-face, peer and professional support for weight loss/healthy lifestyle
• Reduced carer/family support for self-management
• Increased alcohol consumption (11)
• Deterioration in mental health, due to stress and isolation and reduced wider family network/peer support.
Reduced population complications screening/acute treatment changes
• Failure to monitor renal function appropriately in people with CKD, leading to avoidable admissions with fluid overload, anaemia or electrolyte disturbance
• Failure to monitor blood pressure/other CVD risk factors appropriately, leading to preventable CVD events
• Delay in management of diabetes-related foot ulcers/foot infections and sight-threatening retinopathy, leading to preventable amputations and blindness
• Inappropriate continuation and discontinuation of medications (12)
Role of diabetes specialist/community teams during the time of COVID-19
It is vital to maintain patient safety while accelerating patient flow through the hospital and delivering closely managed outpatient services to prevent avoidable admissions and readmissions. Achieving this involves:
1) Reducing risk of COVID-19 infection through clear articulation of evolving COVID-19 advice to people with diabetes, to enable understanding of risk status and expectations around healthcare service interaction
2) Preventing people with diabetes in the community falling ill from diabetes-related complications (hypoglycaemia, diabetic ketoacidosis (DKA), hyperosmolar hyperglycaemic state (HHS), and foot infections)
3) Assisting people with diabetes out of hospital when they become unwell, to prevent admission for diabetes-related complications (as mentioned previously)
4) Supporting inpatient teams (especially on COVID wards) to manage people with acute diabetes complications safely, including those in ICU with high insulin requirements
5) Providing education for frontline inpatient teams who are unfamiliar with diabetes management
6) Facilitating early discharge to the community with programmed daily diabetes follow-up to prevent readmission
7) Supporting primary care diabetes management
It is critical to maintain a skeleton service capable of delivering 1, 2, 3 and 6 to keep people with diabetes out of hospital, as well as a more significant service for diabetes care in hospital. This should ideally include a limited inpatient/community weekend service.
Practical advice for ongoing out-patient management
Challenges of outpatient care delivery are compounded by reduced staffing levels due to illness and deployment of clinicians to 'frontline' duties. Smaller numbers of staff are thus manning skeleton outpatient services. The duration of outpatient service disruption is currently uncertain.
(1) Short-term service interruption (e.g. 1-3 months)
Suggestions:
• Face-to-face clinic review should only occur where the health benefits of attendance outweigh the risks associated with patient movement (i.e. potential individual and wider societal COVID-19 spread)
• Pregnancy, foot services, and management of newly diagnosed people with Type 1 diabetes may need to continue at full capacity, as per national guidance (13)
• Delay routine screening tests unless the patient is at very high risk of deterioration, for example, due to age, trajectory of previous test results or past history
• Accept that numbers achieving routine care processes (e.g. BP, lipid, HbA1c, renal function, ACR, feet and eye screening (14)) will reduce
(2) Medium to longer term service interruption (e.g. 3-12 months)
As the interruption of normal services lengthens, the risk of harm due to delayed screening that could have informed early intervention and complication avoidance increases. It may become necessary to re-enable routine diabetes screening, using facilities and procedures that minimise COVID-19 transmission risk. Patients at highest risk of deterioration should be prioritised using risk algorithms if possible.
Risk assessment in care delivery
While some IT-integrated risk assessment tools are available (e.g. Eclipse; https://www.prescribingservices.org), they have not been adapted for risk assessment around routine care delivery during COVID-19, which is generally being done intuitively. More advanced data-driven machine learning models could support decisions. Models already exist for prediction of (1) mortality (16), (2) glycaemic control deterioration, (3) DKA (17), (4) eye disease (18,19,20), (5) foot ulcers/amputations, (6) kidney failure (20) and (7) cardiovascular complications (20). Most models, however, remain within academic papers and few are linked to front-line user interfaces supporting real-time care prioritisation (21). In addition, previously developed models were trained on data drawn from contexts where the populations studied were largely attending regular, scheduled clinical review/screening visits. The current pandemic has driven an unprecedented degree of routine clinical review deferral. An excess all-cause mortality in weeks 12 and 13 of 2020 has been observed when compared with the same weeks in the 5 years to 2019, which is not fully attributable to COVID-19 (22), suggesting that COVID-19 may be impacting healthcare system performance. Machine learning models predicting individual risk of adverse outcomes due explicitly to COVID-19 service disruption could prove useful for risk stratification and care planning. We support a call for urgent work in this area.
Outpatient care delivery; potential for remote/digital tools
1) Consultations
In compliance with the 'stay at home' mandate, most structured education and non-urgent routine care has been cancelled or is being delivered where necessary through remote consultation. Remote consultations may take longer than face-to-face ones, but rates of non-attendance may be lower and patient satisfaction can be high (23). Online prescribing, reordering, dispensing and delivery should be encouraged.
2) Education
Patient self-management is key to good diabetes outcomes, encompassing management of lifestyle, insulin dose titration, foot self-care, and adherence to treatment plans. Healthcare professional (HCP) support can be delivered using online tools or telephone, although many patients value the physical aids and peer support of face-to-face/group sessions. Online structured education resources include: online DESMOND (https://www.desmond-project.org.uk), BERTIE (type 1 diabetes) (https://www.bertieonline.org.uk), and the MyDiabetesMyWay (MDMW)/MyWay Digital platform (https://mywaydigitalhealth.co.uk). Apps incorporating 1:1 digital coaching include: Oviva (https://oviva.com/uk/en/), Changing Health (https://www.changinghealth.com), Our Path/Second Nature (https://www.changinghealth.com), Liva (https://livahealthcare.com), Omada (https://www.omadahealth.com) and Livongo (https://www2.livongo.com). Some providers are offering discounted or free services during the pandemic. Public information sites for COVID-19/sick day rules are valued: the NHS Scotland MyDiabetesMyWay COVID-19/sick day advice page had 13 443 views with 98% positive ratings/comments (n=368) over the 3 weeks since 16 March. Digitally supported diabetes self-management has the potential to be effective (24) and could be cost saving (25). Whether uptake increases during the pandemic period, preventing care deterioration, remains to be seen (26).
a. Glucose monitoring: Pregnant women with diabetes normally attend hospital approximately fortnightly during later pregnancy. Glucose data feedback could be facilitated remotely using technology including specific systems, for example, Sensyne Health's GDM-Health app (https://www.sensynehealth.com/gdm-health).
b. Activity, physiological parameters and 'internet of things' monitoring: Many apps/online platforms enable sharing of home activity, blood pressure, weight and other readings with healthcare teams, either through automatic/bluetooth connectivity or manual data entry (e.g. https://mwdh.co.uk, https://mymhealth.com). These could support ongoing lifestyle change and enable treatment optimisation.
c. Biochemical blood and urine testing: Nationally recommended blood and urine screening (13) may need to be deferred. Remote consultation without recent HbA1c results is challenging, particularly if no home glucose data is available. Products enabling HbA1c home testing (27,28), and systems using smartphone-embedded technology enabling 'at home' diagnostics (e.g. DipiO; https://healthy.io/ and Testcard; https://testcard.com for urine albumin:creatinine ratio), could help, but none have been widely implemented to date.
d. Foot care: Remote solutions to support neurovascular assessment, preventative podiatry work and active foot disease treatments are limited, but simple at-home neuropathy tests such as the 'Ipswich touch the toes test' may be sufficiently reliable when performed by a relative or carer (29), and home foot pressure mats/remote neuropathy detection systems could assist where available (30). Home upload of digital photographs with or without additional wound tracking applications (e.g. https://healthy.io/wound) may reduce attendance episodes for ulcer treatment.
e. Eye screening: Currently, eye screening is widely facilitated through industrial screening cameras in clinical centres, linked to systematic image review with or without artificial intelligence grading (31).
Smart phones have been used as retinal cameras, but the technology does not yet enable individual home ownership and is currently largely utilised through community hubs, for example, in rural India (32).
As routine complication screening declines, a deterioration in outcomes is predicted. Whether remote solutions can be rapidly implemented to plug the gap remains to be seen. Diabetes screening, monitoring and education will ultimately be deliverable from the home, through standard personal mobile devices. The main barriers will be changing healthcare organisations, procurement/reimbursement practices, and supporting end users. While 80% of UK adults own smartphones (33) and 95% of those aged 16-74 in the UK access the internet regularly (34), the majority of people with type 2 diabetes are over the age of 65 (35,36) and some may lack the skills to use digital tools independently. Technology user support in healthcare is distinctly under-resourced, and changes in service delivery during this stage of the pandemic may increase health inequalities. There are also concerns that mental health may deteriorate due to stress and social isolation. Online tools and telephone support may require signposting for high-risk individuals.
Innovation and procurement/commissioning
The COVID-19 pandemic has enforced a period of disruptive innovation. As a result, information governance barriers are crumbling and procurement rules are being rewritten. One NHS trust saw an 18-month planned Microsoft Teams implementation happen over a weekend and a 20-year rule disallowing healthcare professional-patient email contact changed overnight. In the USA, reimbursement barriers for telemedicine services are rapidly evaporating. A reform of procurement procedures has enabled rapid commissioning and deployment of services and systems. The first wave has rightly focused on solutions with immediate impact, such as COVID-19 testing kits, vaccine development, hand wash, and ventilators, but attention may turn to tools supporting chronic conditions management if social distancing measures continue.
Diabetes opportunities resulting from COVID-19 restrictions
• The use of technology/remote consultations may increase in the long term, meaning more flexible care delivery accommodating patient lifestyle, work and carer commitments (with secondary environmental (less travel), and economic (less time off work) benefits (37)), replacing rigidly timed protocolised face-to-face appointments. Care delivery may also better support acute needs, including proactive delivery of sick day guidance.
• Rigid reliance on standard guidelines for populations may give way to more individualised patient-centred care. Risk stratification may increasingly become part of service delivery, with proportionately more time spent focusing on and re-engaging those at highest risk of deterioration, including disengaged populations. This could transform care outcomes and the cost of delivery; currently a small percentage of high-risk diabetes patients consume disproportionate costs due to treatment of complications (38,39).
• Efficiency in care delivery could improve through continuation of COVID-19 clinic service and personnel restructuring.
Conclusions
The COVID-19 pandemic has required rapid adaptation of care delivery, supported by governmental and national body recommendations, but has created a conflict around where care priorities should lie. Whether similar systematic change is possible in less developed, lower resourced countries remains to be seen. People with diabetes could suffer disproportionately during COVID-19. Service restructuring and digital tools may reduce risks of health decline during this period. While the current focus is on limiting the terrible harm caused by the pandemic, it is possible that COVID-19 might leave a legacy of accelerated deployment of innovative pathways and approaches to chronic disease management supporting person-centred care.
Disclaimer
Due to the emerging nature of the COVID-19 crisis, this document is not based on extensive systematic review or meta-analysis, but on rapid expert consensus. The document should be considered as guidance only; it is not intended to determine an absolute standard of medical care. Healthcare staff need to consider individual circumstances when devising the management plan for a specific patient.
"year": 2020,
"sha1": "33408f485f06fc6b0223a2875e82908204f73c7c",
"oa_license": "CCBY",
"oa_url": "https://eje.bioscientifica.com/downloadpdf/journals/eje/183/2/EJE-20-0377.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0929b8ea558b2ab30c0c922469fd138180e97bd3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Transcranial Alternating Current Stimulation Increases Risk-Taking Behavior in the Balloon Analog Risk Task
The process of evaluating risks and benefits involves a complex neural network that includes the dorsolateral prefrontal cortex (DLPFC). It has been proposed that in conflict and reward situations, theta-band (4–8 Hz) oscillatory activity in the frontal cortex may reflect an electrophysiological mechanism for coordinating the neural networks monitoring behavior, as well as for facilitating task-specific adaptive changes. The goal of the present study was to investigate the hypothesis that a theta-band oscillatory balance between right and left frontal and prefrontal regions, with a predominant role for the right hemisphere (RH), is crucial for regulatory control during decision-making under risk. To explore this hypothesis, we used transcranial alternating current stimulation, a novel technique that provides the opportunity to probe the functional role of neuronal oscillatory activities and to establish a causal link between specific oscillations and functional lateralization in risky decision-making situations. Healthy participants were randomly allocated to one of three stimulation groups (LH stimulation/RH stimulation/Sham stimulation), with active AC stimulation delivered in a frequency-dependent manner (at 6.5 Hz; 1 mA peak-to-peak). During the AC stimulation, participants performed the Balloon Analog Risk Task. This experiment revealed that participants receiving LH stimulation displayed a riskier decision-making style compared to the sham and RH stimulation groups. However, there was no difference in decision-making behavior between the sham and RH stimulation groups. The current study extends the notion that DLPFC activity is critical for adaptive decision-making in the context of risk-taking and emphasizes the role of theta-band oscillatory activity during risky decision-making situations.
INTRODUCTION
When facing risky situations, humans have to weigh the consequences of failure against the rewards of success. Assessing risk inevitably involves a conflict between the desire to win and the fear of penalty. In such situations, the ability to identify and weight risks and benefits is highly important in order to make proper predictions concerning the potential outcomes that will best serve individual survival and future goals. In this regard, the cognitive architecture and the neural and electrophysiological basis of decision-making processes in the context of risk-taking have gained a great deal of attention in the last two decades. Studies of patients with focal brain lesions (e.g., Bechara et al., 1994, 1996; Tranel et al., 2002), alongside numerous neuroimaging and electroencephalogram (EEG) studies (e.g., Rogers et al., 1999; Paulus et al., 2001; Sanfey et al., 2003a,b; Ernst and Paulus, 2005; Trepel et al., 2005; Krain et al., 2006; Rao et al., 2008; Gianotti et al., 2009; Hare et al., 2009; Mohr et al., 2010), suggest that decision-making processes involve a distributed subcortical-cortical network that includes multiple prefrontal, parietal, limbic, and subcortical regions.
Within this network, prefrontal cortex (PFC) involvement appears to be vital in decision-making under risk. Based on traumatic brain injuries and other pathologies affecting the PFC (Bechara et al., 1996; Rahman et al., 2001), PFC dysfunction typically manifests in a tendency toward riskier decision-making behavior and an apparent disregard for the negative consequences of actions during risky decision-making. In particular, the dorsolateral prefrontal cortex (DLPFC) has been considered to play an important role in decision-making under risk, probably due to its function in executive control, goal maintenance, and inhibitory control (Miller and Cohen, 2001; Knoch et al., 2006; Rao et al., 2008; Hare et al., 2009), as well as decision implementation (Mohr et al., 2010). This hypothesis seems particularly plausible for the right hemisphere's (RH) role in decision-making under risk (the "RH hypothesis"), most pronounced in right PFC/DLPFC function as found in patients with right-sided lesions (Tranel et al., 2002; Clark et al., 2003), and it is supported by several neuroimaging, EEG, and brain stimulation studies (e.g., van't Wout et al., 2005; Knoch et al., 2006; Fecteau et al., 2007a; Gianotti et al., 2009), as well as by a recent meta-analysis (Mohr et al., 2010). For instance, a repetitive transcranial magnetic stimulation (rTMS) study showed that individuals displayed riskier decision-making in a standard gambling paradigm after disruption of the right, but not the left, DLPFC (Knoch et al., 2006). Mohr et al. (2010) found that the right DLPFC (in conjunction with parietal cortex) has a role in risk processing during decision-making, particularly in the implementation of the risk decision and the integration of the risk information with other aspects that may be relevant.
However, several findings call into question the RH hypothesis in decision-making under risk. For instance, a transcranial direct current stimulation (tDCS) study showed that after bilateral DC stimulation, individuals displayed a conservative, risk-averse response style in a standard gambling paradigm (Fecteau et al., 2007b). In that study, unilateral DC stimulation of the left or right DLPFC did not affect decision-making style at all, whereas both kinds of bilateral DC stimulation, regardless of electrode polarity, produced the same behavioral outcome. Furthermore, another tDCS study found that DC modulation of the DLPFC influenced driving behavior, with anodal excitation of both the left and the right DLPFC leading to more careful driving behavior (Beeli et al., 2008). Similar to Fecteau et al. (2007b), Beeli et al. (2008) did not find any clear functional lateralization patterns. These findings add to Clark et al.'s (2003) report that patients with left-sided prefrontal lesions also displayed abnormal risk-taking behaviors, and to a meta-analysis of different neuroimaging studies which revealed that risky and ambiguous decision-making elicited activity bilaterally in the PFC (mainly orbitofrontal and DLPFC; Krain et al., 2006). This variety of evidence suggests that functional DLPFC lateralization in risk-taking behavior is still an unsolved issue that calls for further examination. Moreover, past studies, mostly those that utilized brain stimulation techniques such as TMS and tDCS, are restricted in how far they can uncover the electrophysiological mechanism that underlies the cognitive process in question.
Regional patterns of oscillatory activity can take place according to the behavioral tasks in which the brain is currently engaged (Thut and Miniussi, 2009). Studies of the role of brain oscillations in conflict and reward situations have demonstrated the relevance of oscillations in the theta-band (4-8 Hz). In particular, theta-band oscillatory activity over the medial frontal cortex has been proposed to reflect an electrophysiological mechanism for coordinating the neural networks involved in monitoring behavior and the environment, as well as facilitating task-specific adaptive changes in performance in conjunction with lateral PFC and sensory-motor areas. Different studies have identified that the induced oscillatory response in the theta-band during feedback processing is greater in power and phase coherence following negative feedback or errors relative to positive feedback or wins (e.g., Luu and Tucker, 2001; Luu et al., 2003, 2004; Cohen et al., 2007, 2008; Marco-Pallares et al., 2008; Cavanagh et al., 2009, 2010; Christie and Tata, 2009; van de Vijver et al., 2011). Furthermore, when an action or outcome is suboptimal and the medial frontal cortex signals a need for adjustment, this also appears to lead to an increase in cognitive control, possibly via the additional recruitment of lateral PFC (Kerns et al., 2004; Ridderinkhof et al., 2004). Lateral PFC is assumed to adjust higher-level decision-making strategies to changing contexts and demands and to integrate information over time (McClure et al., 2004; Lee and Seo, 2007).
There is some evidence for the lateralization of the electrophysiological mechanism involved in risk-taking behavior. Gianotti et al. (2009) reported that individuals' tonic cortical lateral PFC asymmetry in the theta and delta bands predicted their behavior in a standard gambling paradigm. In other words, the extent to which baseline slow-wave oscillations in the theta and delta bands were greater in the RH than in the left hemisphere was positively associated with the level of risk taken in Slovic's (1966) risk task. Specifically, using a source localization technique, they found that baseline cortical activity in the right PFC predicts individual risk-taking behavior. A recent study by Christie and Tata (2009) showed that feedback-induced theta during the Iowa gambling task (IGT) was substantially right lateralized. Christie and Tata's (2009) finding adds to previous suggestions (Gehring and Willoughby, 2004; Marco-Pallares et al., 2008) promoting the hypothesis that medial frontal theta and the recruitment of right lateral PFC reward-related theta-band oscillatory activity may be regarded as the electrophysiological mechanism which mediates decision-making processes during risk-taking situations. In the current study, we aim to investigate this hypothesis, and specifically the notion that a theta-band oscillatory balance between right and left regions, with a predominant role for the RH, is crucial for regulatory control during decision-making under risk. To the best of our knowledge, no past study has reported a direct causal link between oscillations, lateralization patterns, and risky decision-making behavior. In order to investigate this hypothesis, we used a novel stimulation technique called transcranial alternating current stimulation (tACS).
Transcranial alternating current stimulation provides a powerful approach for establishing the functional role of neuronal oscillatory activities in the human brain and for exploring the functional role of neural oscillations in cognitive tasks by stimulating the brain with biophysically relevant frequencies during task performance. tACS is supposed to induce regional brain oscillations in a frequency-dependent manner, thereby interacting with specific functions of the stimulated region (Kanai et al., 2008, 2010; Pogosyan et al., 2009; Thut and Miniussi, 2009; Zaehle et al., 2010; Paulus, 2011). This technique is still largely unexplored and volume conduction effects are not wholly understood (Kanai et al., 2010; Zaghi et al., 2010; Feurra et al., 2011; Schutter and Hortensius, 2011). Nevertheless, recent studies have demonstrated tACS efficiency in different domains. For instance, Kanai et al. (2010) showed that cortical excitability of the visual cortex, as measured by the thresholds for TMS-evoked phosphenes, exhibits frequency dependency, whereby 20 Hz tACS over the visual cortex enhances the sensitivity of the visual cortex. A recent study by Zaehle et al. (2010) provided direct physiological evidence of interaction between tACS and ongoing alpha oscillation in the occipital region: when tACS was delivered at the alpha frequency, entrainment of the EEG amplitude in this frequency was observed. Another recent study demonstrated that stimulation in the alpha and gamma bands over the associative sensory cortex induced positive sensory sensations (Feurra et al., 2011). It has also been demonstrated that tACS at prefrontal sites during sleep improved procedural memory consolidation (Marshall et al., 2006).
Transcranial alternating current stimulation differs from other stimulation techniques that modulate brain frequencies, most notably rTMS. In general, low-frequency rTMS (<1 Hz) is often used to decrease excitability in an off-line mode (e.g., the task is administered after the stimulation). In contrast, AC stimulation can lead to one of two outcomes: by inducing synchronous changes in brain activity, the AC stimulation can enhance ongoing oscillations and thereby increase cortical excitability, or it can interfere with ongoing cortical activity by introducing cortical noise, thus disrupting cortical excitability. This technique therefore allows us to exploit both properties of "enhancement" and "interference" in an on-line paradigm.
In the current study, we investigated whether on-line tACS can modulate the neural excitability of the left and right PFC in a frequency-dependent manner. We aimed to examine whether risk-taking strategies can be modified in healthy individuals and to provide direct evidence for the causal role of frequency-dependent, lateralized hemispheric control of risk-taking during a gambling game. Specifically, we focused on the theta-band (4-8 Hz) as the main oscillatory frequency and the DLPFC as the main structure of interest. In the current experiment, participants were randomly allocated to one of three stimulation conditions, which included right or left AC stimulation or a sham stimulation, and performed the Balloon Analog Risk Task (BART; Lejuez et al., 2002) during the AC stimulation.
The BART is a task which involves learning from experience (i.e., experience-based decisions) that was originally developed as a behavioral measure of risk-taking tendencies. The task has been found to have convergent validity with real-world risk-related situations, and it provides an ecologically valid model for assessing human risk-taking propensity and behavior (Lejuez et al., 2002; Schonberg et al., 2011). The average number of adjusted pumps a person tolerates in the task was found to correlate with self-reported drinking, smoking, risky sexual behaviors, and substance use in healthy adults and adolescents (Lejuez et al., 2002, 2003a,b, 2004; Aklin et al., 2005; Hunt et al., 2005).
We predicted that AC stimulation over the right DLPFC would increase RH theta-band power; consequently, participants would display a more conservative, risk-averse response style (i.e., a smaller average number of adjusted pumps during the BART compared to sham). On the other hand, AC stimulation over the left DLPFC was predicted to increase LH theta-band power, thus violating the hemispheric balance and disrupting decision-making processing; we therefore expected that participants would display a riskier decision-making style (i.e., a larger average number of adjusted pumps during the BART compared to sham). Finally, we investigated whether individual differences such as gender and trait motivation characteristics may moderate tACS effectiveness on performance, since both factors have been suggested to moderate decision-making processes to some extent (Tranel et al., 2005; Demaree et al., 2008).
PARTICIPANTS
Participants in the experiment were 27 healthy college students (mean age = 23.89, SD = 2.45; range 18-30 years; 13 male, 14 female). Each participant received 40 Shekel (equivalent to ~$10) for participating in the experiment. All participants gave informed consent in accordance with the Declaration of Helsinki and the procedures had the approval of the local ethics committee.
Participants had no metallic implants, previous history of any neurological disorders, medication, or substance abuse. All participants were right-handed as assessed by the Edinburgh Handedness Inventory (handedness score ≥90; Oldfield, 1971). The participants were randomly allocated to one of three stimulation groups [LH stimulation (N = 9)/RH stimulation (N = 8)/Sham stimulation (N = 10)].
BALLOON ANALOG RISK TASK
In the BART (Lejuez et al., 2002; Hunt et al., 2005), participants have to make choices in a context of increasing risk. Participants inflated a computerized balloon by pushing a "pump" button. The balloon can explode at any moment. Participants have to decide after each pump whether to keep pumping and risk explosion, or to stop. In our modified version of the BART, participants accumulated points in a temporary bank with each pump (10 points). When the participant decided to stop pumping, the accumulated points were transferred to a permanent bank. However, if the balloon exploded, all of the points accumulated in the temporary bank were lost. The probability that a balloon would explode was fixed at 1/128 for the first pump. If the balloon did not explode after the first pump, the probability that the balloon would explode was 1/127 on the second pump, 1/126 on the third pump, and so on, until at the 128th pump the probability of an explosion was 1/1, a certainty. According to this algorithm, the average breakpoint was 64 pumps. Detailed instructions provided to the participants were based on those provided by Lejuez et al. (2002). Following instructions and a short guided practice, the task was administered until 30 balloons (i.e., trials) were completed. Note that participants did not actually receive the final sum of points stored in the permanent bank. Instead, they were informed at the beginning of the session that they were part of a tournament in which they played against other participants for a prize of 250 Shekel (equivalent to ~$70), and their objective was to obtain the largest possible number of points in order to win the prize.
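For illustration, the explosion schedule just described can be simulated in a few lines; this is a sketch, not the authors' task code:

import random

def run_balloon(max_pumps=128):
    """Simulate one BART balloon; returns (pumps_made, exploded)."""
    for pump in range(1, max_pumps + 1):
        # Hazard on this pump: 1/128 on pump 1, 1/127 on pump 2, ...,
        # 1/1 on pump 128, exactly as in the schedule described above.
        if random.random() < 1.0 / (129 - pump):
            return pump, True
    return max_pumps, False  # unreachable: pump 128 always explodes

random.seed(1)
print(run_balloon())  # (pumps_made, exploded) for one simulated balloon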
Similar to previous studies that used the BART (e.g., Lejuez et al., 2002), the main outcome measure of the current examination was the adjusted number of pumps. In addition, the total number of balloon explosions on the BART was calculated. Adjusted values were calculated as the average number of balloon pumps on those balloons that did not explode. Adjusted values are preferable because including balloon pumps from all trials (including those in which balloons exploded) would include trials in which the participants were forced to stop pumping because of the explosion (Lejuez et al., 2002; Aklin et al., 2005). Because the adjusted value consists only of no-explosion trials, it is considered an index of a more adaptive (non-punitive) form of risk-taking behavior (Hunt et al., 2005). In contrast, the frequency of balloon explosions provides an index of a more maladaptive form of risk-taking whereby risk exceeded an acceptable level and ultimately was punished (via explosion and loss of money; Hunt et al., 2005). Furthermore, because the BART was performed during the whole stimulation period, we calculated the time course of these measures (adjusted number of pumps and frequency of balloon explosions for three blocks, each block containing 10 balloons).
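A compact sketch of these outcome measures, assuming hypothetical per-balloon arrays of pump counts and explosion flags (and at least one non-exploded balloon):

def adjusted_pumps(pumps, exploded):
    """Mean pumps over balloons that did NOT explode (the adjusted value)."""
    safe = [p for p, e in zip(pumps, exploded) if not e]
    return sum(safe) / len(safe)

def time_course(pumps, exploded, block_size=10):
    """Per-block adjusted pumps and explosion counts (30 balloons -> 3 blocks)."""
    out = []
    for i in range(0, len(pumps), block_size):
        p, e = pumps[i:i + block_size], exploded[i:i + block_size]
        out.append((adjusted_pumps(p, e), sum(e)))
    return out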
FIGURE 1 | Overview of the study design. Participants arrived at the lab, answered the BIS/BAS scales (Carver and White, 1994), and were randomly assigned into one of three stimulation conditions: left, right, or sham stimulation. After a short practice, sham or AC stimulation was administered and lasted a total of 15 min, with 6.5 Hz, 1 mA peak-to-peak intensity in the active stimulation conditions. The stimulation started 5 min before the task began and was delivered during the entire course of the BART, which lasted <10 min. Before and after stimulation and the BART, participants performed the line-bisection task.
In addition, a recent advance in modeling methods for the BART, originally introduced by Wallsten et al. (2005), validated by Bishara et al. (2009), and further developed by van Ravenzwaaij et al. (2011), proposes a model in which BART performance is governed by different component processes, such as risk-taking (involving the tradeoff between rewards and penalties) and a general sensitivity to payoffs which affects task performance.
Whereas adjusted values and frequency of balloon explosions are usually considered to tap the construct of risk-taking, payoff sensitivity can be measured with the evaluation of participants' deviation from the optimal expected-value strategy. We report these measures in the Results section.
tACS AND GENERAL PROCEDURE
A double-blind, randomized, sham-controlled trial was used in a between-participants design (see Figure 1). The experiment included three types of stimulation: two active stimulation conditions and one sham condition. We used the international EEG 10/20 system to determine stimulation sites. To stimulate the LH, one electrode was placed over the left DLPFC (F3) and the reference electrode was placed over the left temporal cortex (CP5). To stimulate the RH, one electrode was placed over the right DLPFC (F4) and the reference electrode was placed over the right temporal cortex (CP6). For sham stimulation, the electrodes were placed in the same positions as for the active conditions (half of the participants with the LH montage and the other half with the RH montage).
The stimulation started 5 min before the task began and was delivered during the entire course of the BART, which lasted <10 min. tACS was induced by two 5 cm × 5 cm saline-soaked synthetic sponge electrodes and delivered by a battery-driven, constant-current stimulator (Magstim Ltd., Wales). The waveform of the stimulation was sinusoidal and there was no DC offset. AC was delivered at a frequency of 6.5 Hz and the intensity was 1 mA (peak-to-peak). For active stimulation conditions the AC stimulation was delivered for 15 min. For sham stimulation, stimulation was delivered for 30 s and then turned off. Thus, participants felt the initial itching sensation associated with brain stimulation but received no active current for the rest of the stimulation period. This method of sham stimulation has been shown to be reliable with respect to DC stimulation (Gandiga et al., 2006). In the present study participants were kept blinded with regard to the type of the stimulation; the AC procedure used, with AC delivered at a frequency of 6.5 Hz, did not induce any flickering sensation or any other side effects, as verified by questioning participants after the stimulation.
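A minimal sketch of the stimulation waveform as specified above (sinusoidal, 6.5 Hz, 1 mA peak-to-peak, no DC offset); the sampling rate here is an illustrative assumption, not a parameter reported by the study:

import numpy as np

fs_hz = 1000.0                         # assumed sampling rate
t = np.arange(0, 15 * 60, 1 / fs_hz)   # 15 min of active stimulation
amplitude_ma = 0.5                     # 1 mA peak-to-peak => 0.5 mA amplitude
waveform_ma = amplitude_ma * np.sin(2 * np.pi * 6.5 * t)  # zero DC offset

# Peak-to-peak is ~1 mA and the mean is ~0 (no DC component)
print(waveform_ma.max() - waveform_ma.min(), waveform_ma.mean())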
ASSESSMENT OF MOTIVATION
At the start of the session, participants completed the BIS/BAS scales (Carver and White, 1994) in order to evaluate trait motivational characteristics. The BIS/BAS scales (Carver and White, 1994) measure two independent dimensions of motivation (Gray, 1987; Pickering and Gray, 1999; Gray and McNaughton, 2000): the BAS, which regulates responses to rewarding stimuli, and the BIS, which regulates inhibitory processes in response to aversive stimuli. All items were judged on a four-point scale ranging from 1 ("I strongly agree") to 4 ("I strongly disagree"). The BIS/BAS scales assess one behavioral inhibition measure (BIS; e.g., "I worry about making mistakes") and three personality measures related to behavioral approach (BAS): (1) the positive anticipation of rewarding events (BAS Reward Responsiveness - BAS RR; e.g., "When I see an opportunity for something I like I get excited right away"); (2) items tapping the strong pursuit of rewards (BAS Drive - BAS D; e.g., "I go out of my way to get things I want"); and (3) the tendency to seek out new rewarding situations (BAS Fun Seeking - BAS F; e.g., "I am always willing to try something new if I think it will be fun").
LINE-BISECTION
Before and immediately after BART performance and AC stimulation, participants performed two line-bisection trials as a simple and non-invasive behavioral measure of hemispheric bias. On each trial, participants were asked to mark the exact center of a 180-mm black line printed horizontally on a white sheet of paper. The line was printed at mid-height of the page and was closer to the right border on one trial and closer to the left border on the other. Participants used a fine-point pen to bisect the line as accurately as they could. Scores reflected the percent deviation from the center of the line: positive scores reflect a bias to the right side (stronger LH activation), and negative scores reflect a bias to the left side (stronger RH activation; Goldstein et al., 2010; Nash et al., 2010).
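The scoring just described reduces to simple arithmetic; a sketch with a hypothetical mark position:

def bisection_score(mark_mm, line_mm=180.0):
    """Percent deviation from true center; positive = rightward (LH) bias."""
    return (mark_mm - line_mm / 2) / line_mm * 100

print(bisection_score(95.4))  # a mark 5.4 mm right of center scores ~+3.0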
BART PERFORMANCE
The data from the BART task were analyzed with a mixed ANOVA model that included one between-subject factor and one within-subject factor. The between-subject factor was Stimulation Group (LH stimulation/RH stimulation/sham stimulation) and the within-subject factor was Time (first block/second block/third block). The average number of adjusted pumps and the total number of balloon explosions served as the dependent variables. Where relevant, post hoc analyses were performed using a Bonferroni correction for multiple comparisons. Three participants (two in the sham group and one in the LH stimulation group) were excluded from all analyses as outliers (2 SD above or below their group's mean adjusted number of pumps).
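As a sketch, this mixed design can be run with the pingouin package; the data frame layout, file name, and column names here are hypothetical:

import pandas as pd
import pingouin as pg

# Long format: one row per participant x block, with columns
# 'subject', 'group' (LH/RH/Sham), 'block' (1-3), 'adj_pumps'
df = pd.read_csv("bart_long.csv")  # hypothetical data file

aov = pg.mixed_anova(data=df, dv="adj_pumps", within="block",
                     subject="subject", between="group")
print(aov)

# Bonferroni-corrected pairwise comparisons between stimulation groups
posthoc = pg.pairwise_tests(data=df, dv="adj_pumps",
                            between="group", padjust="bonf")
print(posthoc)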
The analysis of the average number of adjusted pumps revealed a main effect of Stimulation Group [F(2,21) = 5.63, p < 0.05; ηp² = 0.35; see Figure 2]. Post hoc tests showed that the LH stimulation group differed significantly from both the sham stimulation (p < 0.05) and RH stimulation (p < 0.05) groups. In addition, the analysis revealed a main effect of Time [F(2,42) = 5.93, p < 0.05; ηp² = 0.22]. A trend analysis showed a linear trend across blocks one to three [F(1,21) = 7.60, p < 0.05; ηp² = 0.26]. Post hoc tests reinforced this linear trend, revealing that the first and last blocks differed significantly (p < 0.05). However, the analysis did not reveal any significant interaction between the two factors (F < 1).
The analysis of the total number of balloon explosions also revealed a main effect of Stimulation Group [F(2,21) = 6.63, p < 0.01; ηp² = 0.39; see Figure 3A]. Post hoc tests revealed that the LH stimulation group differed significantly from the sham stimulation group (p < 0.01) and marginally from the RH stimulation group (p = 0.056). In addition, the analysis revealed a marginal effect of Time [F(2,42) = 2.96, p = 0.06; ηp² = 0.12]. Post hoc tests revealed no significant difference between the first block (M = 3; SD = 1.56) and the second block (M = 2.75; SD = 1.32). However, the second and third blocks (M = 3.54; SD = 1.31) differed significantly (p < 0.001). The analysis did not reveal any significant interaction between the two factors (F < 1).
We further analyzed balloon explosion frequencies by defining, for each participant, whether a balloon explosion was a one-time explosion or a sequential explosion (a one-time explosion was defined as the total number of balloon explosions minus the number of explosions in trial n that were followed by no explosion in trial n + 1; a sequential explosion was defined as the total number of balloon explosions minus the total number of explosions in trial n that were followed by an explosion in trial n + 1, n + 2, etc.; the two measures are complementary). In addition, based on this simple calculation, we also defined another variable, termed "maximum sequential explosions", that reflected the highest number of balloon explosions in a sequence for each participant. Consequently, each participant had three additional measures beyond the original balloon explosion frequency. The rationale for these indices is that using a maladaptive index of risk-taking (e.g., number of balloon explosions) in a task with a random schedule of explosions may create an artifact with respect to the actual number of explosions that resulted from risk behavior that exceeded an acceptable level and ended in an explosion. Overall, the new indices were used to verify whether participants in the LH stimulation group indeed tended to pump the balloon more, a tendency that may be manifested not only in a higher overall number of explosions compared to the two other stimulation groups, but particularly in a higher number of non-random explosions. Pearson's correlation coefficients between the three newly defined measures and the other original BART parameters were calculated and are presented in Table 1.
Table 1 | Pearson's correlations among different BART performance parameters. [Only a fragment of the table survived extraction: a correlation matrix over five parameters, in which (1) One-time and (2) Seq.Tot correlate at r = -0.51**; the remaining entries are not recoverable.]
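One plausible reading of this bookkeeping, computed from a participant's per-trial explosion sequence (a sketch; the helper name and the run-based interpretation are ours, not the authors'):

def explosion_profile(exploded):
    """Count one-time explosions, sequential explosions, and the longest
    run of consecutive explosion trials, reading 'sequential' as explosions
    belonging to a run of two or more consecutive explosion trials."""
    runs, run = [], 0
    for e in exploded:
        if e:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    one_time = sum(1 for r in runs if r == 1)
    sequential = sum(r for r in runs if r >= 2)
    max_sequential = max(runs, default=0)
    return one_time, sequential, max_sequential

# Trials: explode, safe, explode, explode, explode, safe, explode
print(explosion_profile([1, 0, 1, 1, 1, 0, 1]))  # -> (2, 3, 3)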
The correlations of the number of sequential explosions and the maximum sequential explosions with the other known BART parameters suggest that these variables are reliably correlated, in contrast to the number of one-time explosions. Furthermore, based on these new measures, we conducted a MANOVA with Stimulation Group as the between-subject factor and each of the newly defined measures as the dependent variables. We revealed a Stimulation Group effect [Wilks' Lambda = 0.45, F(6,38) = 3.13, p < 0.05; ηp² = 0.32]. Follow-up testing showed that no stimulation group effect was found with respect to the number of one-time explosions (F < 1; see Figure 3B). However, a Stimulation Group effect was found with respect to the number of sequential explosions [F(2,21) = 4.34, p < 0.05; ηp² = 0.29; see Figure 3C]. Post hoc tests revealed that the LH stimulation group differed significantly from the sham stimulation group (p < 0.05). A similar effect was revealed with respect to the maximum of sequential explosions [F(2,21) = 7.61, p < 0.01; ηp² = 0.42; see Figure 3D]. Post hoc tests revealed a robust effect, showing that the LH stimulation group differed significantly from both the sham stimulation (p < 0.01) and RH stimulation (p < 0.01) groups. These analyses confirmed and elaborated on the previously mentioned results by demonstrating that participants who received the LH stimulation adopted a risky decision strategy throughout the BART, which systematically differed from that of the sham and RH stimulation groups. All the groups tolerated a similar number of one-time explosions, which resulted from the inherent nature of the task, but only participants receiving LH stimulation displayed a tolerance for losses, and in particular, sequential losses.
Lastly, we computed a behavioral index that taps participants' behavior in relation to optimal behavior in the BART task (i.e., payoff sensitivity). The optimal expected-value strategy was to pump 64 times and then stop. Explosion points were determined for each balloon in the manner described (i.e., each pump had an a priori probability of 1/128 of yielding an explosion), but with the constraint that explosions were scheduled to occur on average on pump 64, both over the entire 30 balloons and within each sub-block of 10. We calculated the mean squared distance (MSD) of each participant's number of pumps on a given trial from the optimal number of pumps. MSD therefore reflects participants' sensitivity to payoffs, such that a score closer to zero represents a more optimal strategy. Pearson's correlation coefficients between this measure and the two main BART outcome parameters reported earlier (i.e., average number of adjusted pumps and total number of balloon explosions) showed very high correlations (r = -0.91, p < 0.001; r = -0.80, p < 0.001; for adjusted pumps and balloon explosions, respectively). The fact that these parameters are highly correlated indicates that the payoff sensitivity and risk-taking measures are confounded.
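In symbols, with $p_t$ the number of pumps on trial $t$ and $T = 30$ trials, the index described above can be written as (a standard formulation, consistent with the description given here):

\[ \mathrm{MSD} = \frac{1}{T}\sum_{t=1}^{T}\left(p_t - 64\right)^2 . \]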
GENDER AND MOTIVATION BIAS
We investigated a possible moderation effect of individual differences such as gender and trait motivation characteristics (see Table 2 for descriptive statistics) on performance. First, we separately entered gender as a covariate into the mixed ANOVA models reported earlier. There was no significant effect of gender and no significant interactions with other factors in any of the models. Second, in order to investigate the role of motivation bias on performance, we separately entered the BAS, BIS, and BAS subscale scores as covariates into the mixed ANOVA models reported earlier. All models produced non-significant effects for motivation bias. The stimulation groups did not differ in any demographic variable or in any BIS/BAS parameter (F < 1).
LINE-BISECTION BIAS
In order to evaluate how BART performance and AC stimulation affected hemispheric bias as measured by line-bisection, we analyzed line-bisection scores in a mixed ANOVA model that included Stimulation Group (LH stimulation/RH stimulation/sham stimulation) as the between-subject factor and Time (Before/After) as the within-subject factor. The analysis revealed a main effect of Time [F(1,18) = 23.70, p < 0.001; ηp² = 0.53]. The line-bisection index was more negative after performing the BART task (M = -0.16, SE = 0.06) compared to before (M = 0.18, SE = 0.07), indicating that the BART had the expected hemispheric effect, i.e., RH engagement leading to stronger RH activation. This asymmetry shift can be further emphasized: 18 out of 24 participants achieved a positive line-bisection score before the BART and AC manipulation (significantly higher than 50% by a binomial test, p < 0.05), but after the task and stimulation, 18 out of 24 achieved a negative score (significantly higher than 50% by a binomial test, p < 0.05). We separately analyzed the before and after line-bisection scores for the different AC groups using paired-samples t-tests. Participants in the Sham [t(7) = 3.25, p < 0.05] and RH stimulation [t(7) = 5.6, p < 0.001] groups showed the asymmetry shift; however, participants in the LH stimulation group showed only a non-significant trend [t(7) = 1.47, n.s.]. This finding implies that the BART did not produce the expected asymmetry shift within the LH stimulation group.
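As a sketch, the binomial check reported above (18 of 24 against a 50% null) can be reproduced with SciPy:

from scipy.stats import binomtest

result = binomtest(18, n=24, p=0.5)
print(result.pvalue)  # ~0.023 (two-sided), consistent with p < 0.05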
DISCUSSION
The current study explored the cognitive architecture and the neural and electrophysiological basis of decision-making processes in the context of risk-taking. Overall, we report that participants receiving AC stimulation at 6.5 Hz to the LH, with one electrode located over the left DLPFC and the reference electrode located over the left temporal cortex, displayed a risky response style, making more pumps on the BART and tolerating a larger number of balloon explosions than those with sham stimulation and those with RH stimulation. This is the first study showing that neuromodulation in the theta-band can causally modulate decision-making style in healthy participants. In addition, this result supports previous evidence showing that the DLPFC is causally involved in modulating risky decision-making behaviors.
The current result supports, to some extent, the hypothesis that the theta-band oscillatory balance between right and left regions is crucial for regulatory control during decision-making under risk. As predicted, participants receiving AC stimulation to the LH displayed a risky response style. It has been proposed that in conflict and reward situations, theta-band oscillatory activity over the frontal medial cortex may reflect an electrophysiological mechanism for coordinating the neural networks involved in monitoring behavior and the environment, as well as facilitating task-specific adaptive changes. Furthermore, the induced oscillatory response during feedback processing was found to be greater in power and phase coherence following negative feedback or errors relative to positive feedback or wins (Luu and Tucker, 2001; Luu et al., 2003, 2004; Cohen et al., 2007, 2008; Marco-Pallares et al., 2008; Cavanagh et al., 2009, 2010; Christie and Tata, 2009; van de Vijver et al., 2011). We propose that AC stimulation at the theta-band over the LH created a continuous disruption of participants' ability to process and adjust their actions based on negative feedback or errors, as shown by their persistent tendency to tolerate losses, and in particular, sequential losses. We further claim that the balance between right and left regions, and in particular the predominance of the RH, is needed in order to adopt a conservative, risk-averse response style during the BART. Since we interfered with this balance, and especially with RH dominance, participants lacked the ability to adjust their risk-taking behaviors and tended to display a risky response style.
Previous studies have addressed the relative contributions of the right and left prefrontal regions to risk-taking behaviors, and particularly the role of the DLPFC in this kind of behavior. Various studies have provided clear evidence for the role of the right DLPFC in decision-making and risk-taking situations. Using low-frequency rTMS, van't Wout et al. (2005) found that disruption of the right DLPFC resulted in accepting unfair offers more frequently and taking longer to refuse unfair offers. Knoch et al. (2006) reported that suppression of activity in the right, but not the left, DLPFC with low-frequency rTMS made participants choose high-risk prospects more often. Moreover, using a different brain stimulation methodology, i.e., tDCS, Fecteau et al. (2007b) showed that during right anodal/left cathodal stimulation over the DLPFC, participants chose the safe prospect more often compared with the sham and reversed-polarity groups. However, other studies have not found clear lateralization effects (e.g., Fecteau et al., 2007b; Beeli et al., 2008). It has been suggested that the divergent results from different brain stimulation studies might be due to differences in the risk-taking paradigm used and/or the method of stimulation involved (Fecteau et al., 2007b).
Our results are in line with the RH hypothesis of risk-taking behaviors, and address lateralization in terms of the electrophysiological balance between left and right cortical regions in the theta band. Previous suggestions (Gehring and Willoughby, 2004; Marco-Pallares et al., 2008; Christie and Tata, 2009) have already raised the hypothesis that right medial frontal/prefrontal theta may be regarded as the electrophysiological mechanism that mediates decision-making processes during risk-taking situations, and the present study adds a causal link between this electrophysiological mechanism, theta-band activity, and actual behavior.
In addition, it is important to note that we address lateralization in terms of a hemispheric shift. It has recently been reported that BART performance elicits greater activity in the right DLPFC (Rao et al., 2008), providing further support for previous studies of patients with right-sided lesions (Tranel et al., 2002; Clark et al., 2003) that reported dysfunction in risky decision-making behaviors. Apart from the main findings reported earlier, the simple asymmetry index (i.e., the line-bisection task) provided further support for this hypothesis, showing that only in the sham and RH stimulation groups, but not the LH stimulation group, was the line-bisection bias more negative after performing the BART compared to baseline performance. This finding indicates that the BART had the expected hemispheric effect, i.e., an enduring RH engagement, which was reflected in a stronger RH activation in those groups only. Tendencies toward rightward versus leftward errors in estimating the actual midpoints are taken to reflect the relative primacy of the right versus left visual fields, respectively, and neural activity in the contralateral hemisphere (Kinsbourne, 1970; Milner et al., 1992; Goldstein et al., 2010). Even though previous research suggests that line-bisection bias may be more a marker of parietal than prefrontal function (Vallar and Perani, 1986), the simple and non-invasive line-bisection task has recently been found to serve as a neural index of asymmetrical activity related to the DLPFC (Nash et al., 2010).
In the present study, we failed to find an effect of AC stimulation over the RH. We expected that following RH stimulation, participants would display a more conservative, risk-averse response style. The results suggest that participants who received this stimulation behaved like participants in the sham stimulation group. This null result can be characterized as a "floor effect" and can be explained in terms of behavioral, methodological, and electrophysiological aspects. First, with respect to behavior, this "floor effect" probably represents a limitation of our ability to modulate risk-taking behavior in healthy participants and to increase their risk-averse response style. It is possible that RH stimulation would be more effective in populations that show deficits in risk-taking tasks, such as patients with lesions in the PFC and other clinical populations, such as drug abusers, alcoholics, and pathological gamblers (Bechara et al., 1996; Rahman et al., 2001). Second, this "floor effect" may also be a direct outcome of the task properties, in which it is easy to demonstrate what is considered risky behavior (e.g., a large number of adjusted pumps and a large number of balloon explosions), but it may be harder to reveal an overcautious, conservative, risk-averse response style. Third, this "floor effect" can be related to the idea of the so-called "natural frequency" (Rosanova et al., 2009), by which different corticothalamic brain modules are tuned to oscillate at a topographically organized "natural frequency." It is possible that the AC stimulation to the RH interacted with the neuronal oscillatory activity already evoked by the task, i.e., the "natural frequency" that characterizes the decision-making processes that usually take place during processing, and thus did not modulate any cortical activity or risk-taking behavior.
Overall, our results suggest that the hemispheric balance is important during risk-taking situations. This suggestion may account for the previously mentioned conflicting results regarding the relative contributions of the right and left PFC/DLPFC to risky decision-making behaviors. This balance can metaphorically be described as a theta-dependent seesaw between left and right frontal/prefrontal areas. The rightward hemispheric shift, and especially the recruitment of right lateral PFC, is vital in order to promote a conservative, risk-averse response style. Hence, it is clear that right prefrontal regions must be functionally and anatomically intact in order to facilitate such an on-line shift. However, the LH is also crucial for this shift, and especially the balance between the two. The tonic theta-band activity balance between left and right prefrontal regions has been found to predict risk-taking behavior (Gianotti et al., 2009), showing the importance of the hemispheric balance even before risk-taking situations arise. In addition, this hypothesis is similar to a novel framework of risk processing suggested by Mohr et al. (2010). Based on a meta-analysis of the neural basis of risky behavior, the authors proposed a potential mechanism of risky decision-making that involves two parallel and reciprocal risk processes, one emotional and the other cognitive. These processes involve the anterior insula and the thalamus as the key regions mediating emotional processing, whereas the dorsomedial PFC evaluates the risk of the stimulus on a cognitive level. According to their framework, both parts of risk processing (emotional and cognitive) inform the actual decision process performed in the DLPFC and parietal cortex. It is possible that our hypothesis represents, to some extent, Mohr et al.'s (2010) framework, with the frontal/prefrontal hemispheric balance as the cognitive level of processing and the mandatory recruitment of the right DLPFC as the execution phase. This suggestion is reasonable given the findings that when the medial frontal cortex signals a need for adjustment, this also involves an additional recruitment of lateral PFC (Kerns et al., 2004; Ridderinkhof et al., 2004).
A final note: apart from matters of lateralization, the current study addressed the cognitive processes that govern BART performance. Previous work highlighted the role of two key concepts, namely risk-taking and payoff sensitivity (e.g., Bishara et al., 2009). However, in the current study, risk-taking and payoff sensitivity measures were highly correlated and presumably confounded, which means that in practice these definitions of performance are interchangeable, at least for the specific task paradigm used. Therefore, it is difficult to distinguish between these two component processes and, as a consequence, to draw a firm conclusion as to whether participants in the different stimulation groups were more risk-averse or more risk-seeking. This issue has been acknowledged previously (e.g., Freeman and Muraven, 2010). In the current experiment, the average number of pumps per group was below the average explosion point across balloons (64, which is also the optimal number of pumps to maximize earnings); hence the group that pumped the balloon more earned more points. This finding is not unique to our experiment, as participants generally respond in a risk-averse manner on the BART (see also Lejuez et al., 2002, 2003a; Bornovalova et al., 2009; Freeman and Muraven, 2010). Apparently, human subjects and also rats (see Jentsch et al., 2010) exhibit risk-averse profiles when performing the BART (or a BART-like task in the case of rats), producing fewer than the optimal number of responses and earning less than possible, probably because of over-estimation of the risk associated with the task (Bornovalova et al., 2009; Jentsch et al., 2010).
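That optimum is straightforward to verify. The short calculation below is ours and is not part of the original study; it assumes the standard BART design in which each balloon's explosion point is drawn uniformly from 1 to 128 (Lejuez et al., 2002), so that the expected earnings per balloon for a planned number of pumps n equal n × (128 − n)/128, which peaks at n = 64 with an expected value of 32 points.

# A minimal sketch (not from the original study), assuming the standard BART
# with explosion points drawn uniformly from 1..128 (Lejuez et al., 2002).

def expected_earnings(n_pumps: int, max_pumps: int = 128) -> float:
    """Expected points per balloon when planning exactly n_pumps pumps."""
    # The balloon survives all pumps with probability (max_pumps - n_pumps) / max_pumps;
    # an explosion forfeits every point accumulated on that balloon.
    return n_pumps * (max_pumps - n_pumps) / max_pumps

best = max(range(1, 129), key=expected_earnings)
print(best, expected_earnings(best))  # -> 64 32.0

Any group averaging fewer than 64 pumps therefore sits on the rising part of this curve, which is why the group that pumped more also earned more.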
Several limitations must be considered when interpreting the results. First, the present study used only one band of stimulation frequency, was restricted to specific locations, and measured behavioral effects in a particular risk-taking paradigm. Future research should broaden this scope and examine more frequency bands, in various cortical locations, using a variety of risk-taking paradigms. Second, no direct assessment of DLPFC activity was made by any imaging technique before and/or after tACS stimulation, so any attempt to link DLPFC activity, tACS effects, and risk-taking behaviors calls for further examination. Future research should document neural baselines and the changes occurring after AC stimulation in order to be able to draw inferences about the neural circuitry and the mechanisms that are influenced by AC stimulation. Third, we stimulated all participants in the active conditions in our study at 6.5 Hz, thereby ignoring possible inter-individual variability that, if captured, could elaborate our knowledge regarding the electrophysiological mechanism in question. For example, it is possible to stimulate each participant at her/his transition frequency (TF). TF shows large inter-individual variability, ranging from about 4 to 7 Hz (Klimesch et al., 1996; Klimesch, 1999), so TF could be measured in order to create a tailored stimulation for each participant in future studies. Fourth, in the present study we did not find that individual differences such as gender and/or trait motivation characteristics moderate tACS effectiveness on performance. Although we did not find a gender or motivation difference in the BART measures, additional studies should specifically explore whether there is a gender or motivational bias in decision-making with regard to brain stimulation. Finally, in the current study, we employed a procedure similar to the one used by Hunt et al. (2005), in which participants did not actually receive money for their BART performance but rather competed for a monetary prize. It is possible that this kind of incentive procedure may have generated a competitive environment and may have biased choice behavior. Future research is needed to clarify this issue.
CONCLUSION
The current study reports a novel finding demonstrating that neuromodulation in the theta band can causally modulate decision-making style by increasing risk-taking behavior in healthy participants, and provides further support for previous evidence by showing that the DLPFC is causally involved in modulating decision-making. This study may inspire the use of tACS in further examinations of risky decision-making behaviors and, hopefully, in the near future it will prove beneficial as a therapeutic tool for patients with brain lesions and for other clinical populations, such as drug abusers, alcoholics, and pathological gamblers, who show deficits in this kind of behavior. | 2016-06-17T18:13:47.187Z | 2011-12-19T00:00:00.000 | {
"year": 2011,
"sha1": "d099409fca6fbd692d1ceb7273aab0c124de515b",
"oa_license": "CCBYNC",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2012.00022/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d099409fca6fbd692d1ceb7273aab0c124de515b",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
56249878 | pes2o/s2orc | v3-fos-license | Algebraic structure count of linear phenylenes and their congeners *
The algebraic structure count of the linear phenylene with h six-membered rings is known to be equal to h + 1. We show that the same expression applies if each four-membered ring in the phenylene is replaced by a linear array consisting of k four-membered rings, where k = 4, 7, 10, ... For any other value of k, the algebraic structure count is either 0 or 1 or 2, and does not increase with increasing h.
INTRODUCTION
Whereas linear polyacenes (naphthalene, anthracene, tetracene, ...) belong among the best and longest known polycyclic conjugated hydrocarbons, 1 the analogous linear phenylenes were synthesized only in the recent past; for details on the chemistry of phenylenes see the review, 2 the recent works, [3][4][5][6] and the references cited therein. The structure of the linear polyacenes (L h ) and the linear phenylenes (P(1,h)) is shown in Fig. 1.
The π-electron properties of linear polyacenes are well understood. 7 In particular, the Kekulé structure count of the linear polyacene with h hexagons is h + 1, which, at the same time, is its algebraic structure count.
The algebraic structure counts of phenylenes have also been studied in detail. [11][12][13][14][15][16] Namely, it was shown 8 that the algebraic structure count of a phenylene is equal to the number of Kekulé structures of the benzenoid hydrocarbon obtained by formally abandoning the 4-membered rings (so-called "hexagonal squeeze"). In particular, the algebraic structure count of the linear phenylene P(1, h) is equal to the number of Kekulé structures of the linear polyacene L h , and is thus equal to h + 1. (This special case was known 17 before the general regularity 8 was discovered.) In phenylenes each 4-membered ring is adjacent to two 6-membered rings, and no two 6-membered rings are adjacent. In an earlier work 18 we considered the congeners of linear phenylenes in which there are several mutually adjacent 6-membered rings, and found that then the algebraic structure count increases much faster than h + 1. In this work we are concerned with the congeners of linear phenylenes in which there are several mutually adjacent 4-membered rings, namely the systems P(k, h) whose structure is depicted in Fig. 1. We show that their algebraic structure counts follow a completely different pattern and, irrespective of the value of h, are very small or equal to zero.
INTERLUDE: THE ALGEBRAIC STRUCTURE COUNT
The rule that the stability of polycyclic conjugated systems is proportional to the number K of their Kekulé structures holds for benzenoid hydrocarbons. 7 Attempts to directly extend this rule to non-benzenoid hydrocarbons failed, because the conclusions thus obtained were in many cases in contradiction to experimental findings. The way out of this difficulty was found by Dewar and Longuet-Higgins 19 and was eventually elaborated in due detail by Wilcox: 20,21 each Kekulé structure has a so-called "parity" (even, with sign +1, or odd, with sign −1); instead of counting the Kekulé structures, one has to add their signs. The result is called the "algebraic structure count", ASC. The parities are chosen in such a manner that ASC ≥ 0. A non-benzenoid conjugated molecule with algebraic structure count equal to a behaves roughly in the same manner as a benzenoid system with a Kekulé structures. This, in particular, means that systems with ASC = 0 are extremely unstable (and usually non-existent), whereas ASC = 1 and ASC = 2 imply very low stability.
The basic procedure for determining the parity of Kekulé structures is the following: Start with an arbitrary Kekulé structure k 1 and assign to it even parity. The Kekulé structures that are obtained from k 1 by cyclically moving an odd number of double bonds have the same parity, i.e., are also even. The Kekulé structures obtained from k 1 by cyclically moving an even number of double bonds have opposite parity, i.e., are odd. Continuing this procedure we can, step-by-step, determine the parity of all Kekulé structures and then easily compute the ASC. The method is applicable to alternant hydrocarbons (and thus to the conjugated systems considered in this work), whereas in the case of non-alternant species the parity concept is not well defined. 22 We illustrate the calculation of the algebraic structure count for the non-benzenoid conjugated systems BCB, P(2,2) and P(2,3). Their Kekulé structures are depicted in Fig. 2. The structure k 3 is obtained from k 2 by cyclically rearranging four double bonds. Hence the parity of k 3 is opposite to that of k 1 and k 2 .
The conjugated system P(2,2) has a total of 8 Kekulé structures. Of these, four are even and four are odd. For instance, the parity of k 2 is even, because k 2 is obtained from k 1 by rearranging three double bonds. The parity of k 4 is also even, because k 4 is obtained from k 2 by rearranging three double bonds. (Note that k 4 cannot be obtained from k 1 by rearranging double bonds within a single cycle.) The parity of k 6 is odd, because k 6 is obtained from k 4 by cyclically rearranging 4 double bonds. Etc. As a final result we get ASC(P(2,2)) = 4 − 4 = 0. The conjugated system P(2,3) has 30 Kekulé structures, of which only three are depicted in Fig. 2. A detailed analysis (same as in the case of BCB and P(2,2)) shows that there are 16 even and 14 odd Kekulé structures. Therefore ASC(P(2,3)) = 16 − 14 = 2.
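Hand counts of this kind can also be machine-checked for alternant systems. The sketch below is ours and is not part of the paper; it relies on the Dewar-Longuet-Higgins relation det(A) = (−1)^n × ASC², valid for an alternant system with 2n atoms and adjacency matrix A. Since the molecular graphs of BCB and P(2, h) are not reproduced here, benzene, whose two Kekulé structures have equal parity so that ASC = K = 2, serves as the test case.

# A minimal sketch (not from the paper), assuming the Dewar-Longuet-Higgins
# relation det(A) = (-1)^n * ASC^2 for an alternant system with 2n atoms.
import numpy as np

def asc_from_adjacency(adj):
    """ASC of an alternant hydrocarbon, computed from its adjacency matrix."""
    det = np.linalg.det(adj)
    # round() removes floating-point noise before taking the square root
    return round(abs(det)) ** 0.5

# Benzene: a 6-cycle; both of its Kekulé structures are even, so ASC = K = 2.
benzene = np.zeros((6, 6))
for i in range(6):
    benzene[i, (i + 1) % 6] = benzene[(i + 1) % 6, i] = 1.0
print(asc_from_adjacency(benzene))  # -> 2.0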
In the general case it is not known how to evaluate ASC without actually constructing all the Kekulé structures and determining their parities. In contrast to the counting of Kekulé structures, 7 no generally applicable recursive method is known for the calculation of the ASC. 23,24 However, for the systems P(k, h) we were able to find a pertinent method and determine ASC(P(k, h)) for all values of the parameters k and h. This is outlined in the subsequent sections.
COMPUTING THE ALGEBRAIC STRUCTURE COUNT OF P(k, h)
We first determine ASC(P(2, h)). For this, consider the carbon-carbon bond indicated in Fig. 3 by an arrow. In some of the Kekulé structures of P(2, h) this bond is double and in some it is single. If this bond is double, then the single/double-character of a few more bonds is fixed, as indicated on diagram A in Fig. 3. The non-fixed double bonds then belong to two disjoint fragments, of which one is P(2, h − 2) and the other BCB. The contribution of these Kekulé structures to the algebraic structure count of P(2, h) is thus equal to ASC(P(2, h − 2)) × ASC(BCB).
If the bond considered is fixed to be single, then the non-fixed double bonds belong to two disjoint fragments, of which one is P(2, 2) and the other is denoted by X, see diagram B in Fig. 3. The contribution of these Kekulé structures to the algebraic structure count of P(2, h) is equal to ASC(X) × ASC(P(2, 2)). Consequently,

ASC(P(2, h)) = ASC(P(2, h − 2)) × ASC(BCB) + ASC(X) × ASC(P(2, 2))

From the previous section we know that ASC(BCB) = 1 and ASC(P(2, 2)) = 0. Therefore, we arrive at the recursion relation

ASC(P(2, h)) = ASC(P(2, h − 2))

whose initial conditions are ASC(P(2, 2)) = 0, ASC(P(2, 3)) = 2. This immediately yields ASC(P(2, h)) = 0 for even h and ASC(P(2, h)) = 2 for odd h.
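The recursion and its initial conditions can be iterated mechanically; the following sketch is ours and not part of the paper, and simply reproduces the alternating pattern just derived for ASC(P(2, h)).

# A minimal sketch (not from the paper) iterating the recursion derived above:
# ASC(P(2, h)) = ASC(P(2, h - 2)), with ASC(P(2, 2)) = 0 and ASC(P(2, 3)) = 2.
# The ASC(X) term drops out because it is multiplied by ASC(P(2, 2)) = 0.

def asc_p2(h):
    """ASC(P(2, h)) for h >= 2, from the recursion and its initial conditions."""
    if h == 2:
        return 0
    if h == 3:
        return 2
    return asc_p2(h - 2)

print([asc_p2(h) for h in range(2, 11)])  # -> [0, 2, 0, 2, 0, 2, 0, 2, 0]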
Hence, ASC(P(4, h)) = ASC(P(1, h)), which, as will be seen in a while, is a special case of a more general result.
We could continue along these lines and deduce expressions for ASC(P(5, h)), ASC(P(6, h)), etc. This, however, is not necessary, in view of Eq. (1) from the subsequent section.
The finding that the ASC-values of the conjugated systems P(2, h) and P(3, h) are zero or near-to-zero, and that these values do not increase with the increasing size of the molecule, is in agreement with the fact that the respective compounds have never been synthesized. Another reason for the poor stability of these species would be the enormous steric strain caused by several concatenated 4-membered rings.
A GENERAL RULE FOR THE ALGEBRAIC STRUCTURE COUNT

Denote by F 1 and F 2 two arbitrary alternant conjugated-hydrocarbon fragments. Let H and H* be conjugated hydrocarbons whose structures are shown in Fig. 4. Then,

ASC(H*) = ASC(H) (1)

In order to prove Eq. (1), consider first the Kekulé structures of H. These can be classified into five types, 1, 2, 3, 4, and 5, as shown in Fig. 4. The numbers of even and odd Kekulé structures of each type are denoted by K i (even) and K i (odd), i = 1, 2, 3, 4, 5. Then the algebraic structure count of H is given by

ASC(H) = | [K 1 (even) − K 1 (odd)] + [K 2 (even) − K 2 (odd)] + [K 3 (even) − K 3 (odd)] + [K 4 (even) − K 4 (odd)] + [K 5 (even) − K 5 (odd)] | (2)

The summands in (2) corresponding to types 4 and 5 cancel out because of

K 4 (even) = K 5 (odd) ; K 4 (odd) = K 5 (even)

The Kekulé structures of H* can be classified into several types. Of these, types 11, 12, and 13 pertain to Kekulé structures of H of type 1 (see Fig. 4). Now, bearing in mind the way in which the parities of Kekulé structures are determined, we have

K 11 (even) = K 1 (even) ; K 11 (odd) = K 1 (odd)
K 12 (even) = K 1 (odd) ; K 12 (odd) = K 1 (even)
K 13 (even) = K 1 (odd) ; K 13 (odd) = K 1 (even)

implying

[K 11 (even) + K 12 (even) + K 13 (even)] − [K 11 (odd) + K 12 (odd) + K 13 (odd)] = −[K 1 (even) − K 1 (odd)] (3)

The next types of Kekulé structures of H* are those marked in Fig. 4 by 21, 22, 23, 24, and 25; these all pertain to type 2 Kekulé structures of H. By direct inspection we verify that

[K 21 (even) + ... + K 25 (even)] − [K 21 (odd) + ... + K 25 (odd)] = −[K 2 (even) − K 2 (odd)] (4)

Because of symmetry, the analysis of the Kekulé structures of H* of the types 31-35 (not shown in Fig. 4), which correspond to the Kekulé structures of H of the type 3, leads to

[K 31 (even) + ... + K 35 (even)] − [K 31 (odd) + ... + K 35 (odd)] = −[K 3 (even) − K 3 (odd)] (5)
Fig. 1. The polycyclic conjugated π-electron systems, the algebraic structure counts of which are studied in this work.
Fig. 3. Diagrams needed for the derivation of the formula for the algebraic structure count of P(2, h).
Finally, we have to examine the Kekulé structures of H* of the types 41-48, which are related to the Kekulé structures of H of the types 4 and 5. Their total contribution is also zero, because four of them (41, 46, 47, 48) are of one parity and the other four (42, 43, 44, 45) of the opposite parity. In summary, the algebraic structure count of H* depends on the numbers of even and odd Kekulé structures of the types 1i, 2i, and 3i, and, in view of Eqs. (3)-(5),

ASC(H*) = | −ASC(H) | = ASC(H)

which is just the result stated as Eq. (1).
Fig. 4. The general form of the conjugated π-electron systems to which Eq. (1) applies, and various types of their Kekulé structures. | 2018-12-18T14:23:52.616Z | 2003-01-01T00:00:00.000 | {
"year": 2003,
"sha1": "e9b0dee201892d6ece4c60635615faa16d6e14e4",
"oa_license": null,
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0352-51390305391G",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e9b0dee201892d6ece4c60635615faa16d6e14e4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237732988 | pes2o/s2orc | v3-fos-license | A Barber-Surgeon’s Instrument Case: Seeing the Iconography of Thomas Becket through a Netherlandish Lens
The triple anniversary in 2020 of Thomas Becket's birth, death and translation has been an occasion to review and revisit many of the artefacts associated with the saint and his cult in England and across Europe. Many of these are items directly associated with his veneration in churches or in private devotions, but one object which served in neither capacity is an instrument case currently in the collection of the Worshipful Company of Barbers in London. This unusual object has been studied for its fine silver work, and possible royal associations, but little academic attention has so far been paid to some of the iconography, particularly that of the scene of the murder of Thomas Becket depicted on the back of the box, the side to be worn against the body. In this article, we show how seemingly unusual elements in the iconography draw on particularly Flemish representations of Becket's murder that, to date, have received little attention in Anglophone scholarship. From this, we discuss this scene and its significance in understanding the role the iconography may have been intended to serve, and the interplay between the decorative schema and what the surgeon thought about his own role with regard to the use of the case and its tools.
Introduction
The triple anniversary in 2020 of Thomas Becket's birth, death and translation has been an occasion to review and revisit many of the artefacts associated with the saint and his cult in England and across Europe. Many of these are items directly associated with his veneration in churches or in private devotions, but one object which served in neither capacity is an instrument case currently in the collection of the Worshipful Company of Barbers in London. This unusual object has been studied for its fine silver work, and possible royal associations, but little academic attention has so far been paid to some of the iconography, particularly that of the scene of the murder of Thomas Becket depicted on the back of the box, the side to be worn against the body. In this article, we show how seemingly unusual elements in the iconography draw on particularly Flemish representations of Becket's murder that, to date, have received little attention in Anglophone scholarship. From this, we discuss this scene and its significance in understanding the role the iconography may have been intended to serve, and the interplay between the decorative schema and what the surgeon thought about his own role with regard to the use of the case and its tools.
Materials and Methods: The Instrument Case, Historiography, and Approach
The case is a tapering box with removable lid, of silver, silver gilt, and enamel, with a leather and wood lining, 18 cm in height, 6 cm in width, and 5 cm in depth [Figures 1 and 2]. It was designed to be attached to a surgeon's belt by a silver chain which passes through lions-head loops on the sides of the box and cover. The interior is divided into sections by wood and leather partitions. The box is highly decorated, with silver figures of saints on the front and engraved silver panels on the sides and rear. The known history of the case has been outlined in an article by Ann Wickham. There is no record of it prior to 1922, when it was found in a box of 'junk' at the London saleroom of the auctioneer William Edward Hurcomb. It passed briefly through the hands of two antique jewellery firms, before it was purchased in 1923 by Viscount Lee (Wickham 2002, pp. 90-91). On his death in 1955, his widow presented the case to the Worshipful Company of Barbers. While it has largely remained in the private collection of the Company of Barbers since then, the case has recently been featured in the British Museum's 2021 exhibition 'Thomas Becket: Murder and the Making of a Saint', once again bringing it to wider attention (De Beer and Speakman 2021, pp. 212-13).
This exquisite silver item has attracted some academic attention, mainly focusing on its craftsmanship, date, and attempting to identify potential patrons, recipients, and artists involved in its manufacture. Beginning with the 1931 article by Charles J. S. Thompson, curator of the Wellcome Institute and the Royal College of Surgeons' museum, historians have dated the object through an analysis of the royal arms in particular. From Thompson onwards, a terminus ante quem of 1525 for the case seemed to be provided by the royal arms featuring dragon and greyhound supporters, the latter being replaced by a lion in that year (Thompson 1931, p. 811; Dobson and Walker 1979, pp. 106-7; Wickham 2002, pp. 85-87). Recently, Timothy Schroder has shown that other elements of the decoration cannot predate 1527, arguing that the greyhound continued in use after 1525, and thus providing a date of c. 1530 for the case (Schroder 2020, p. 177). The royal arms have also pointed to connections with the royal court, with most historians agreeing that the piece would have been a royal commission for one of the king's personal physicians. Another historiographical debate around the item has been its place of manufacture, with both Wickham and Schroder concluding that it was made in London, although Schroder argues the Netherlandish elements point to it being made by foreign craftsmen in the city (Wickham 2002, pp. 88-89; Schroder 2020, p. 178). In our analysis of the case, we focus on the iconography of Becket's murder on the back panel, and use this as an avenue to explore questions of function and use. Historians have concluded that the case would have had a purely ceremonial function, but we argue strongly against this assumption (Thompson 1931, p. 811; De Beer and Speakman 2021, p. 213). We explore the decoration and ornament of the case as integral to it, and, vice versa, the function of the case as directly informing the ornamentation. As such, we attempt to move away from positivist art-historical considerations of identifying artist and patron, production techniques and influences, and instead, following James H. Marrow's suggestion, we attempt to appreciate how the piece in the context of its manufacture and use would 'elicit interest, structure understanding, and communicate meaning' (Marrow 2006, p. 164). We consider the design strategies discernible in the case as a whole, inevitably necessitating a reconsideration of the context of its manufacture. We also look at how the artist of the Becket panel drew on a range of techniques and stylistic influences to represent a well-known scene in a manner that offered a deeper and potentially new understanding of its significance to the user and viewer of the case.
Form and Function
The instrument case was of a type known as a plaster box. It was designed to be hung from the surgeon's belt to hold some of his precision tools while performing operations. The interior is lined with leather and wood, and divided into sections capable of holding six or seven smaller instruments, suggested to have been a large scalpel, forceps, scissors, lancets, and probes (Thompson 1931, p. 811; Wickham 2002, pp. 85-86). A very similar box, albeit much plainer, can be seen amongst the selection of instruments portrayed in Hieronymus Brunschwig's Buch der Cirurgia of 1497, on the right-hand side of the woodcut [Figure 3]. The Barber-Surgeons' case also has a protective leather or 'cuir bouilli' carrying-case, probably contemporaneous with the case itself, which is simply decorated with foliate scrolls and incised lines in the manner of the Brunschwig image, or in the manner of that shown resting on top of the surgeon's box in the frontispiece [Figure 4]. Another woodcut of surgical instruments often included in editions of Johannes de Ketham's Wudartznei (surgery book) in the 1530s shows a box with its lid raised, revealing the handle-tips of the instruments it contains, among which it is possible to discern the handles of scissors and forceps and the ends of needles and lancets [Figure 5]. Similarly, another illustration from Ryff's Groß Chirurgei shows a box covered in engraved ornament, opened to show compartments containing scissors, lancets, needles, a syringe, and plasters [Figure 6].
The instruments in the case were used for delicate and cosmetic operations, and as the frontispiece of the 1559 edition of Ryff's Groß Chirurgei shows, the case could be set aside when the surgeon was engaged in the more physical and brutal aspects of his trade, in this case on top of the surgeon's tool chest as he amputates a client's leg with a bone saw. [ Figure 4] The back of the Barber-Surgeons' case, particularly on the panel depicting Becket's murder, shows two deep indentations, at the mid-point and bottom of his chasuble, which are almost certainly percussion marks worn into the silver by repeated contact with a belt buckle. The high degree of ornamentation and use of precious metals on the Barber-Surgeons' instrument case has led to the suggestion that it was intended for ceremonial use only (Thompson 1931, p. 811). Yet, practical surgical instruments of the later medieval and early modern period were often also elaborately decorated, high-status objects (Hartnell 2017, pp. 34-36). The surgeon's tools showed him to be an artisan and craftsman, and marked him out as a professional of high social standing amongst his fellow burgesses and citizens. In the fifteenth and sixteenth centuries, the surgeons and barber-surgeons, whose Companies were not amalgamated until 1540, were highly conscious of a need to promote their claim to social prominence and respectability. This was particularly keenly felt in their social relationship to other medical practitioners and, in light of their special status as medical craftsmen rather than medical theorists, to London's artisanal classes (Colston andRalley 2015, pp. 1103-26;Wear 2000, p. 124;Decamp 2016, pp. 3-21;Pelling 1986). Wills of sixteenth-century barber-surgeons of London frequently bequeath instruments of silver and gold, including 'one of my Launcetts that is sett in gold and enamyled' (Young 1890, p. 530). It is not unreasonable to think that this instrument case was indeed used during surgery on particularly high-status individuals, reflecting the honour due to the patient and showing the eminence of the surgeon. The instruments in the case were used for delicate and cosmetic operations, and as the frontispiece of the 1559 edition of Ryff's Groß Chirurgei shows, the case could be set aside when the surgeon was engaged in the more physical and brutal aspects of his trade, in this case on top of the surgeon's tool chest as he amputates a client's leg with a bone saw [ Figure 4]. The back of the Barber-Surgeons' case, particularly on the panel depicting Becket's murder, shows two deep indentations, at the mid-point and bottom of his chasuble, which are almost certainly percussion marks worn into the silver by repeated contact with a belt buckle.
The Barber-Surgeons' instrument case is completely covered in ornamentation, in an iconographical scheme that Timothy Schroder has noted is 'very complex' (Schroder 2020, p. 177). The arms of the Barbers Company (three fleams separated by a chevron) granted in 1451 are displayed under the royal arms, and below that the cognisance of the Fellowship of Surgeons (a 'spater' charged with a rose crowned) granted in 1492 (Dobson and Walker 1979, pp. 106-7). The arms are flanked by the patron saints of the barber-surgeons, Cosmas and Damian. On the sides of the bottom section are Sts Catherine and John the Evangelist, the latter also a patron of the medical profession. The presence of St Catherine may be a nod to the king's then wife, Catherine of Aragon, although Catherine was also a very popular saint in late medieval London. Turning the object over, the back is covered with two engraved silver panels. The smaller, topmost, one shows St George slaying the dragon [ Figure 7]. The panel which covers the main body of the case is an image of the martyrdom of St Thomas Becket [ Figure 8]. These are the focus of the discussion below. At the top of the front of the case the royal arms, with dragon and greyhound supporters, and the Tudor rose on the lid point to an association with the royal court. Schroder suggests, from the form of the royal arms and the decorative side panels based on 1527 prints by Heinrich Aldegrever, that this may have been a gift from Henry VIII to his surgeon Thomas Vicary, appointed to the court in 1527 (Schroder 2020, p. 176;Thomas 2006). It is also feasible that Vicary commissioned the case to reflect his own new position. At the simplest level, this would offer an initial reason for the inclusion of Becket imagery on the piece, linking Thomas the saint to Thomas the surgeon. The case may even appear in Vicary's will of 1562, as his 'best plaister box, garnisshed with silver', given to Thomas Bayley, Master of the Barber-Surgeons in 1559 (TNA, PROB 11/4/86; Schroder 2020, p. 337).
The decorative and iconographic schemes on surgical tools were often closely tied to the instrument's function. For example, 'biting' tools such as bone-saws featured animal heads posed in the act of biting, and the instrument case similarly features lions' heads biting onto rings on its sides and base. The importance of appropriate decorative elements to the instrument's overall function was such that instruments were sometimes ornamented to the extent that the embellishments would actually impede their immediate surgical functionality, becoming snagged on clothing or flesh (Hartnell 2017, pp. 44-50).
The figures of the medical saints on the instrument case pointed to the religious protection and blessings bestowed on the user and the patient, but they obtrude from the surface and could potentially become snagged in clothing. Other decorative elements reflect the case's function. The 'spater' (a general term applied to cutting or lancing tools) on which the rose of the Surgeon's Fellowship was charged on the instrument case is depicted as a lancet at close to actual size, and a similar instrument would probably have been part of the contents. A lancet of this type is depicted in use in the c.1425 St William window in York Minster (CVMA nVII, panel 18a), where a surgeon is performing a procedure on a patient [Figure 9]. Of the rear panels, which would be against the body of the surgeon, the engraving of St George and the Dragon tied into the courtly status of the object, as he was England's patron saint and particularly associated with the royal dynasty. Henry VIII had St George engraved on his own armour. He was also associated with healing and physical regeneration, and invoked as protection against sudden death and disease, although this was more of an aspect of his cult on the Continent, and particularly in Germany, than in England (Good 2009, pp. 88-93, 104, 123). St Thomas Becket had similarly long been one of England's patron saints, and since his murder in 1170 one that was particularly associated with healing, with pilgrims to his shrine able to purchase souvenirs bearing the legend OPTIMUS EGRORUM MEDICUS FIT TOMA BONORUM (Thomas is the best doctor of the worthy sick) (Spencer 1998, pp. 44-53). The images and embellishments were clearly carefully chosen to reflect the status and role of the wearer and the function of the case itself.
Schroder notes that the instrument case would almost certainly have been a collaborative piece, between a plate worker, enameller, and engraver. We might add that the contents of the case, no doubt of a similar quality, would have also required collaboration with a skilled manufacturer of precision tools. Previous studies of the case have suggested that it would have been made in London. Wickham thought the case was English and possibly made by the royal goldsmiths, whereas Schroder argued that it was made in London but probably by or under the supervision of 'stranger' (i.e., non-English) craftsmen (Schroder 2020, p. 178; Wickham 2002, pp. 86-87). It was more usual when commissioning work in precious metals to import craftsmen than finished products, although gold- and silverware was also shipped into London from the Low Countries in the 1520s. London was a centre for collaborative artistic production at workshops staffed by transient artisans, many of whom were on their wanderjahr, making it difficult to pinpoint the origin of items such as the instrument case (Berry 2021; Curd 2010, pp. 1-72; Woods 2007).
A number of features point to the artwork of the case having been produced by Continental artisans. Besides the Aldegrever-inspired plates on the side, the clothing of Becket's murderers and the architectural form of the cathedral in which the murder takes place are all distinctly Netherlandish. A sketch, possibly by the Aldegrever school, in a mid-sixteenth century German 'Jeweller's Pocket Book' and dated to 1530 is very similar to the patterns found on the sides of the instrument case [Figure 10]. Aldegrever's prints were widely used by artists and craftsmen, as discussed by Rowlands (Rowlands 1988). The earliest references in English to 'plaster boxes' as part of the surgeon's instrument set come from the mid-16th century, and as images of them only appear in Continental texts up to this time, it may be that this intricate multi-purpose surgical kit, and the precision surgical tools it would have contained, was in the 1520s associated with production in the Low Countries and Germany. 1 It was common in the early sixteenth century for a single contractor to organise the creation of complex works of art, including illuminated manuscripts, which required assembling a team of skilled craftsmen, even forming ad hoc workshops for bespoke pieces (Van der Stock 2006, pp. 118-21; Curd 2010, pp. 21-27). The centre of this artisanal production was Antwerp, 'the commercial metropolis which boasted an exceptional art market' (Koldeweij 2005, p. 43). The production of commissioned and ready-made altarpieces, for example, was closely controlled by the St Luke's Guild of Antwerp as a way of coordinating the output of different ateliers and guaranteeing quality, and a document of 1460 states that the Antwerp Pand was explicitly constructed 'in order to display for sale books, paintings, sculpture and joinery' as well as tapestry, printing and other crafts (Jacobs 1998, pp. 158-59, 165). This is the artistic and commercial environment with which the makers of the case would have been familiar, whether working in the Low Countries or as émigrés working in England.
While the case was clearly made for an English patron connected with the royal court, possibly Vicary, who would have instructed the artists, probably through a middle-man, of his wishes for the iconographical composition of the case, a number of curious features in the execution of the iconographical scheme point to the artists being unfamiliar with the contemporary court style. This may even point to a Netherlandish origin for the instrument case itself. One of the iconographical elements which has caused difficulties in dating since it first came to light in the 1920s is the royal arms on the front. These have The art production of the Low Countries and Flemish artists both there and on their wanderjahr in the later fifteenth and early sixteenth century was undergoing something of a transformation. Jacobs (Jacobs 1998, p. 54) has commented on the increased taste for narrative in Netherlandish carved altarpieces in the late fifteenth and early sixteenth centuries, while the 'lively market' (Koldeweij 2005, p. 46) for small, illuminated books of hours for private devotion meant that attention to detail and a representation in which the viewer was a participant and not merely a passive observer became the order of the day. As we shall argue, this is an important factor in the depiction of Becket's martyrdom on the instrument case. The highly collaborative atmosphere (Koldeweij 2005, p. 43) in which studios and artists in different media worked together on prestige projects is exactly the sort of environment in which the case would have been produced and the detail of the narrative developed. This shift in taste towards greater elaboration of the narrative content allowed carvers of altarpieces, for example, to develop and elaborate on scenes and details whilst remaining within the overall constraints of convention. For example, the development of detailed depictions of the apocryphal death of Joseph, the husband of the Virgin Mary, began to appear chronologically situated in between relevant Biblical narratives. Likewise, an increased interest in and use of the liturgical and didactic function of these altar pieces within the celebration of the Mass led to greater narrative depictions and increased interest in allegorical representations of the elements of the Mass, with form and function informing each other (Jacobs 1998, pp. 62-63). This development in altarpiece carving also appears in the increased range of narrative interpretation in other areas of art such as mystery plays in which the core narrative is embellished and developed to engage the viewer and connect them with the action on stage.
While the case was clearly made for an English patron connected with the royal court, possibly Vicary, who would have instructed the artists, probably through a middle-man, in his wishes for the iconographical composition of the case, a number of curious features in the execution of the iconographical scheme point to the artists being unfamiliar with the contemporary court style. This may even point to a Netherlandish origin for the instrument case itself. One of the iconographical elements which has caused difficulties in dating since it first came to light in the 1920s is the royal arms on the front. These have dragon and greyhound supporters, which were officially supplanted by the dragon and lion in 1525, thus apparently giving a terminus ante quem for the piece. As Schroder points out, however, the use of Aldegrever prints as the side panels gives a terminus post quem of 1527 (Schroder 2020, p. 177). The dragon is also heraldically incorrect, being gold (or) rather than red (gules) (Wickham 2002, p. 85). While Schroder claims that the greyhound continued in use as a royal supporter after 1525, it is unlikely that a piece of this quality, and in other ways striking modernity, would have used the antiquated arms, especially if it was produced in the ambit of the court. A piece commissioned from the Low Countries, on the other hand, may have been more likely to make such small technical errors in contemporary royal heraldry.
In addition to this, the iconography of the upper panel on the rear, depicting St George slaying the dragon, points to an origin outside England. The panel itself is of an inferior artistic quality to the larger Becket panel which forms the basis for the rest of this article. The engraving is stiff and mannered, with cartoonish elements in some of the facial details and the background. The piece may well have been based on a pre-existing pattern drawing, as there are very clear similarities of composition with the frontispiece woodcut to Alexander Barclay's 1515 Life of St George [Figure 11]. However, on the instrument case St George does not bear his arms, the red cross on a white background, which were also the arms of England, and which in the Barclay engraving appear prominently on both his chest and his shield. St George's breastplate cross is a clear feature of the 'George Noble' coin of 1526, as well as of the depictions of the saint on Henry VIII's own armour of 1514/15, linking the saint to the flag of the English nation. It is perhaps unlikely that a piece engraved in England would omit this detail, key to the promotion of national identity around the king and his realm, whereas Renaissance Continental depictions of the saint do not always show his red and white cross livery (Good 2009, pp. 52-94, 123-24; Riches 2000).
Becket in the Low Countries: Devotion, Myth, and Iconography
Even if the instrument case was produced in England, the form of the iconography points clearly to the rear panels being the work of a Flemish artist, and needs to be understood in the context of Continental, and particularly Flemish, developments in how Becket's martyrdom was portrayed in the later Middle Ages. Of particular relevance to a Low Countries origin, either for the case itself or the engraver, is an understanding of the development of a distinct Becket iconography in Flemish manuscript miniature paintings of the fifteenth century. Art-historical studies of Becket, particularly in English, have tended to focus on high medieval images from England, France, and Italy, the places where his cult flourished most strongly in the early years. For most of the medieval iconography of the martyrdom cult of St Thomas, the 'standard' view was for it to be depicted from the side, with the knights rushing in from stage left and Becket kneeling before an altar, his head or neck being sliced by one of the swords and (in some images) his mitre or skullcap, blood, skull and brains splattering to the ground (Borenius 1932, pp. 70-104; Duggan 2020). This is the scene which appears on the Limoges caskets which circulated in thirteenth-century Europe, and is the scene which appears in the earliest images of Becket's martyrdom, such as the early thirteenth-century manuscript BL Harley MS 5102 [Figure 12]. Edward Grim, the clerk who both witnessed the murder and had his arm hewn by one of the knights while attempting to intervene, was variously depicted at the side of the altar or defending Becket (Gameson 2000). In later depictions, Grim's role became more passive, as the emphasis on Becket as an undefended martyr unexpectedly murdered whilst celebrating Mass became the trope, rather than someone cut down after a violent argument as relayed in the earliest hagiographies. It is possible that the elongated rectangle which formed the side piece of the typical reliquary casket encouraged this perspective in the depiction, in the same way that depictions of scenes within the Nativity of Christ, such as the Magi arriving at the court of Herod, or kneeling before the Christ-child, were likewise arranged to be viewed from the side. Such an arrangement adds to the sense of a narrative being 'read' from one side to the other, instilling the image with an, albeit brief, sense of the passage of time. The figures arrive and an event happens: an audience is sought, gifts are given, or knights attack. A particularly grisly early fifteenth-century example is in Canterbury Cathedral, on a painted panel on the tomb of Henry IV, reproduced in the 1930s by E.W. Tristram [Figure 13]. Here, the slice of skull and associated brain matter are clearly visible on the steps, and Edward Grim stands horrified and wounded behind the altar.
Such is the development of the form in English medieval religious art. Yet the Continental developments of Becket iconography in the later Middle Ages have received far less attention, certainly in Anglophone historiography. An important study in this regard is the collection of essays Thomas Becket in Vlaandren, produced to accompany a 2000 exhibition at Kortrijk Museum of Becket's presence, both in person and in the form of his cult, in and around the Low Countries. There was certainly a marked devotion to Becket in the region. Many major abbeys possessed relics, including parts of Becket's sandal and some of his blood, which was brought to Egmond Abbey before 1215, and the Abbey of St. Nicholas-des-Prés of Saint-Médard, near Doornik, claimed to hold Becket's chasuble as a relic. His blood-spattered surplice was given to the Abbey of Saint-Josse de Dommartin in Tortefontaine by a hermit shortly after the murder, which then became an important Becket pilgrimage destination in the Low Countries. A collection of miracles performed by Becket in association with the surplice relic was compiled at the abbey (Koldeweij 2000, p. 55; Smeyers 2000, p. 78; Verschatse 2000, pp. 108-11). In the later Middle Ages Canterbury was occasionally stipulated as a destination for penitential pilgrimages in archidiaconal courts in the region. While the numbers appear low (12 out of 3000 penitents at Antwerp were instructed to go to Canterbury between the fourteenth and early sixteenth centuries), longer-distance penitential pilgrimages were generally uncommon, and Canterbury appears only slightly more frequently in comparable penances of the same period from York (Koldeweij 2000, p. 56; York Minster Library MS M2(1)f). The records of pilgrimages by the canons of Tournai Cathedral survive for the period 1330 to 1349, and show that, as might be expected, for the most part they engaged in multi-site tours of major French shrines. However, when they crossed the English Channel, as approximately fifteen canons did during that period, the only shrine they visited was Becket's at Canterbury. Numerous pilgrim souvenirs of Becket have been recovered in the Low Countries, and although we should be wary of assigning any bishop's head badge to the cult of Becket at Canterbury, the copious survival of small bells with CAMPANA THOMAS, and of badges identifying the figure as THOMAS OCCISUS (and even with the Dutch spelling of his name SANCTVS THOMAES VAN CANDELBE), point to a significant later medieval devotion in the region (Koldeweij 2000, pp. 64-69; Spencer 1998, p. 113).

Becket was, of course, not the only English saint known to the citizens of later medieval Flanders, but he was by far the most important. The Becket story was embellished to include specifically Flemish legends. According to one, which was recorded in the mid-sixteenth century, Becket's murderers were condemned by the Pope to wander through Europe without their sense of taste or smell, never being allowed to rest until they had recovered both. At Cologne they finally tasted wine, and then in the Flemish city of Mechelen they smelt newly-baked bread. The 'brothers', as three of the knights are called in this account, built huts in the shadow of St Rumold's Cathedral in Mechelen and lived the rest of their lives there. On the outer wall of the church was a medieval inscription naming them and stating 'Thomam martyrium fecere subire beatum' ('those who are under here made Thomas a blessed martyr') (Borenius 1932, p. 186).
Much as the citizens of London emphasised Becket's birth in their city, and invented a fabulous Middle Eastern origin for him to explain the dedication of the Hospital of St Thomas of Acre at the house where he was born, devotees throughout Europe could site the Becket legend in their own locality, or inflect and contextualise it within a regional hagiology (Jenkins 2020). While the presence of Becket's body and shrines at Canterbury formed an obvious focal point for the cult, the memory and form of Becket's murder and sainthood found new expressions wherever he was celebrated in Christendom. Flanders was no exception, and in the fifteenth century produced some of the most innovative developments in Becket iconography.
Most Becket miniatures produced in Flanders in the fifteenth century were intended for an English market. In the late-fourteenth and early-fifteenth centuries these were mostly part of mass-produced Books of Hours of the Sarum Rite shipped over for sale on the open market in England. As the fifteenth century progressed this trade decreased, and Flemish Becket miniatures were increasingly produced as part of commissions for wealthy English or Burgundian patrons. As Elizabeth Morrison has noted, the high demand for Flemish art throughout Europe gave the artists and craftsmen of the Low Countries ample opportunities to experiment and innovate in terms of layout and composition (Morrison 2006, p. 149). Throughout the period there are several Flemish idiosyncrasies to the iconography that depart, often radically, from Becket's early hagiographies, as laid out by Smeyers in an important chapter of the Becket in Vlaandren volume. Most fifteenth-century Flemish miniatures follow the standard iconography as detailed above, of knights on the left-hand side in the act of murdering Becket on the right, often at a quasi-isometric angle. In common with images of Becket's murder from many other countries and times, the miniatures differ from the written hagiography by depicting him in the act of celebrating Mass at a fully-furnished altar, often with a retable showing the Virgin with Child. This emphasised the quasi-Eucharistic nature of Becket's own sacrifice, making direct links between him and Christ and highlighting his devotion to Mary. As noted above, Edward Grim, the clerk who tried to put himself between Becket and the knights, is usually featured in English, French and Italian depictions of Becket's murder, yet is only rarely present in Flemish miniatures. Furthermore, the earliest hagiographies and images are clear that Becket was hacked to death and the top of his skull chopped off with swords or, sometimes, axes, yet Flemish miniatures usually show him being stabbed in the cranium with a dagger. There is usually little blood, and the head wound often seems superficial with the skull not shown as fractured (Smeyers 2000, p. 80; Gameson 2020, pp. 152-62).
The odd choice of murder weapon in Flemish miniatures has not been explicated. Smeyers suggested that it may have owed a debt to earlier, lost, Flemish artworks. Yet, there may have been a textual basis for these idiosyncrasies. Flemish illustrators, notably the Master of the David Scenes who worked between 1490 and 1520, appear to have gone back to the text of Jacobus de Voragine's Golden Legend, the best-known collection of saints' lives in medieval Europe, as inspiration for iconographic innovation in the fifteenth and sixteenth centuries. They were able to break away from the standard iconographic patterns by depicting lesser-known (or not so commonly illustrated) events in the lives of the saints, or closely following the language of the Golden Legend to reimagine well-known scenes (Morrison 2006, pp. 149-60).
The variant Becket iconography found in Flemish art of the fifteenth century can be shown to broadly follow the sparse description of the murder in the Golden Legend. The Legend was available in Middle Dutch translation by the fifteenth century, and the most commonly available versions, including those printed from the fourth quarter of the century, closely followed the Latin text (De Voragine 1490, fo. 220v). Immediately preceding the martyrdom account is a story about how the Virgin had appeared to Becket and through miracles revealed that he should celebrate a Mass in her honour each day, as had a priest whom he had unrightfully suspended. This in itself may have pointed towards setting the martyrdom in the context of a private Mass of the Virgin in later medieval Becket iconography. The martyrdom itself is not introduced with much context, simply with the statement that the king was unable to make Thomas change his stance on the rights of the church and so his knights (des conincs ridderen) arrived at the cathedral and asked where the archbishop was. Becket came to meet them, and they told him 'We come to kill you, you may no longer live' (Wi comen di doot te slaen en du en moghes niet langher leven). After commending himself and the cause of the Church to God, the Virgin, St Denis, and all the saints, he bent his head to the swords of his attackers, and they split his crown and spilt his brains on the pavement of the church (Do hi dit hadde ghesecht doe sloeghen hem die quaden mit sweerden int hoest. En si floeghen die heulighe crone vanden hoefde en sine harlene vielen opten pavimente vander kerken). In this account, as in the miniatures, Grim is not present. While the knights are stated to have swords (sweerden), the statement that they 'struck his holy crown' (si sloeghen die heulighe crone) may have led to the idea of a dagger splitting the top of the skull. The Middle Dutch does not give the same idea of the top of the head being severed as is present in Voragine's Latin praecidere, and the detached crown is almost never shown in Flemish art. Even if the Middle Dutch Golden Legend was not the direct source for some of the choices made in this regional Becket iconography, and the insistence of the text that the knights attacked Becket's crown does not explain why they are shown first attacking his body in scenes from the later fifteenth century, the sparseness of the description of the murder in the most widely available hagiographical account meant that, for artists going back to the text for inspiration, there was much scope for innovation and the filling in of details.
The Iconography of Becket's Murder on the Barber-Surgeon's Instrument Case
Having outlined the peculiarly Flemish elements in later medieval depictions of Becket produced in the Low Countries, we now turn back to the instrument case to examine the highly unusual portrayal of the martyrdom scene engraved on its reverse. The engraving has stylistic parallels with the work of Hans Sebald Beham and the circle around Holbein, as discussed by Schroder (Schroder 2020, p. 177). The depiction and arrangement of figures in contemporary German and Low Countries art, such as Sebald Beham's print 'Strife' and Heinrich Aldegrever's print of the Death of Absalom, are echoed, albeit not directly copied, in the style of the instrument case's engraving. However, it is most likely that the engraver was drawing inspiration from a number of sources, both literary and visual, to compose his dramatic re-interpretation of the Becket story, rather than, as with the St George engraving above, closely following a pattern.
The depiction on the case is quite different from those we have discussed above. Only one of the murderers carries a sword, and is poised to strike on the upper left-hand side, but the initial death blows are now delivered from behind, at range, by men wielding pikes and spears. While a small incision may be present on Becket's crown, his main wounds are stabs in his back. The 'knights' themselves are not in armour but are dressed as common foot soldiers. Edward Grim is present but either cowers or reels behind the altar. The scene takes an entirely different viewpoint, looking directly into the action from the perspective of the onrushing attackers. It is a moment frozen in time, a snapshot in the blink of an eye. This necessarily constrains how the figures can be arranged, and how their interactions can be interpreted. This perspective fundamentally alters the viewer's interaction with the image and its reading. The image is no longer a narrative to be read; it is an instantly arresting moment, with all the emotional immediacy of being the first on the scene. A deliberate narrative device is employed to draw us into the scene: the foot of the soldier on the left breaks the frame, breaching the 'fourth wall' and allowing us entry into the unfolding events.
On an individual devotional level, the growth in affective piety, the increased appetite to imagine oneself as experiencing the events being depicted, not merely as an observer, stimulated the production of materials and tools which could guide the user into a deeper state of connection through the use of narrative detail and careful choice of viewpoint (McNamer 2010). The circulation and recycling of images between books of hours made in different countries and for different patrons, as discussed by Rudy (Rudy 2016, pp. 167-80), suggests that a rich mix of styles and representations was available for consumers to select from; they were not bound to the output of their own locality, and artists were likewise able to draw influences from other countries and ateliers.
This extension out of the picture is a device heavily employed in other contemporary art intended to engage the viewer directly, notably the heavily carved and deeply-recessed scenes in Flemish altarpieces and contemporary retables. The three-dimensional carved figures in these productions draw the viewer into the scene, which may reach fifteen or more centimetres into the caisse, but in some (such as the Brougham altarpiece in Carlisle Cathedral, dated c.1520 and Flemish) some limb or element of the front figures will reach beyond the boundary of the frame of the scene (Jacobs 1998, pp. 257-58; Sadler 2008, pp. 161-209). By reaching out to the viewer and adding a dynamic sense of movement which breaches the bounds of the scene's frame, the 'fourth wall' is broken and we are permitted access into the action. Such novel elements or viewpoints were one of the ways in which Flemish artists defied expectations in the religious and devotional artworks of the Renaissance, and by presenting the viewer with a new interpretation forced them to engage more closely with, and deeply consider, the action and message of the piece (Morrison 2006, pp. 149-60). Elements that might seem to 'defy customary expectations, contravene traditional pictorial conventions, and pose contradictions', such as the unusual murder weapons and death blows at odds with the written Becket historiography, were designed to 'achieve a deepened appreciation of the character of the sacred and the mysteries of the faith' (Marrow 2006, p. 166).
This view of the murder from behind is comparatively rare and emerges in the late fifteenth century in manuscripts made in the Low Countries. A miniature from a manuscript made for William, Lord Hastings in the Low Countries in approximately 1480 has the same viewpoint as the scene on the Barber-Surgeons' case and the same revised weaponry (Turner 1983; Backhouse 1996; Smeyers 2000, p. 91) [Figure 14]. The figure of Grim is omitted, as was common in Flemish art. The four 'knights' are placed two on either side, but significantly the two figures on the left are not dressed or armed as knights at all: they wear Mamluk-style helmets and they carry, respectively, a spear and a battle axe. They are markedly more swarthy in their visage than the two on the opposite side and are the two most aggressively engaged in attacking Becket. The two figures on the right wear some armour: one wears a 'lobster pot' helmet but appears unarmed (or his weapon is held down out of sight), while the other appears to have only the leather headgear which was worn under a helmet and brandishes the sword he has clearly drawn from the empty scabbard on his left thigh. The sword, which it may be surmised is the weapon which has inflicted the small, bloodless cut visible on Becket's head (had it been the raised battle axe to the left the damage would have been catastrophic!), is portrayed awkwardly, angled over the shoulder of the knight. Given the skill with which the rest of the image is depicted, including the very fine portrayal of the drapery, limbs and boots, this awkward angle must be intentional and perhaps references the detail in the original hagiographies that the attacking knight's sword broke on hitting Becket's head. The anachronism of including this detail when the damage done is clearly not the skull-slicing event of the earlier accounts may suggest this detail was an important part of the narrative to this artist or commission, conjuring the idea of divine intervention in the breaking of the blade even if Becket's death was not prevented.
As noted above, Becket was increasingly shown not just at an altar but actively in the process of celebrating the Mass, which emphasised the Eucharistic similarities between his martyrdom for the rights of the Church and Christ's death. In the earliest years of the cult, Becket's blood had been gathered from the floor of Canterbury Cathedral, mixed as a tincture with water, and provided to pilgrims to drink as a curative (Koopmans 2016; Jordan 2009). The overwhelmingly Eucharistic symbolism of this blood-drinking caused some theological unease, and by the fourteenth century the 'Thomas Water', as it was known, had been given a different interpretation by the monks as water from a nearby well that had changed its colour to that of blood, as well as to milk, at the time of the murder (Jenkins 2019, p. 37). In place of the emphasis on Becket's blood, as here, the Christological references seem to have moved to the fully-realised setting of the Mass. In the Hastings Hours and on the instrument case this Christological iconography is reinforced by the weapons in use: the lance or spear recalling the soldier's lance which pierced Christ's side on the cross. This in turn had the potential to change the nature of the attackers themselves. A lance is potentially a knightly weapon if used on horseback, but the barbed pikestaff shown in the Hastings Hours and on the back of the instrument case is more commonly the weapon of a foot soldier. In being shown at Mass, the identity of the blood being spilled is deliberately dualised, and the change of weaponry to include items specified in the crucifixion narrative makes an intentional comparison between the martyrdom of Becket and Christ.
The Mass setting of Becket's martyrdom also recalls depictions of the Mass of St Gregory. The later medieval desire for allegory and involvement had led to an increased interest in the iconography of this miraculous Mass, and it was an important scene for artists developing ways in which the viewer is increasingly drawn into the narrative (Jacobs 1998, p. 67). In this story, St Gregory is kneeling before an altar celebrating the Mass. As he consecrates the wine to transubstantiate it into the very blood of Christ, the figure of Christ on the crucifix above the altar comes to life and steps down before the astonished Gregory to speak and to drop his own blood into the chalice. This is a reflection, a validation even, of the sanctity of Gregory, as well as a vivid reminder of the miracle at each consecration. The story gained enormous currency across Europe and was a popular image for Books of Hours as well as missals. We may draw similarities between the figural arrangement of Becket's martyrdom in the Hastings Hours and the instrument engraving, and depictions of the Mass of St Gregory based on those of Israhel van Meckenem of c. 1480-1485 [Figure 15]. In the Hastings Hours, Becket has a prominent cross on the back of his otherwise plain vestment, similar to the vestment shown on this engraving and its many imitators.

The change of viewpoint for the depiction of Becket's murder on the instrument case, as in the Hastings Hours, fundamentally alters our relationship with it, from observer to participant. The viewpoint employed in this Becket image changes the role of the viewer: we are no longer a detached observer watching from the sidelines; we are part of the scene, we are running in with the knights. It may be that by making some of the figures clearly non-aristocratic the user/viewer of the case is subliminally invited to become one of the four knights/attackers, or to imagine how they in their own lives might undermine and attack the Church and Christ's sacrifice. The whole scene is simplified by presenting the attackers as the hired soldiers of an angry king, turning on the actions of the individual rather than on the struggle between the complex power structures of state and Church, as live an issue in the sixteenth century as in the twelfth.
In the same way as fifteenth- and early-sixteenth-century worshippers were encouraged towards a metaphysical engagement with the wounds and torture of Christ on the cross through close meditation on graphic and hyper-real images of his wounds and suffering, in this image we are positioned so as to become one of those on the scene, with the eternal possibility of participating in the murder or attempting to prevent it. On the instrument case Edward Grim is still physically present, but awkwardly placed and an ambiguous figure. Is he cowering behind a curtain at the far side of the altar, or is he reeling wounded, having attempted to interpose himself between Becket and the attackers? In having such an image engraved on the side of the case worn next to the body, the wearer is at one with the scene, forever on the point of saving the saint or being party to his death, forever invoking the saint in his work.
The Becket engraving is placed on the side of the case which would be worn against the wearer's body: the other side of the case is heavily decorated with coloured enamels, raised and embossed elements as well as the arms of the company and the royal arms, so must have been intended as the outward, visible side. By having the image against the body with the viewpoint of the wearer looking into the scene from the perspective of the agents there is an immediate connection between the wearer and the narrative. The duality of the wearer's role within the scene could be intended to remind them as a surgeon how thin the line was between healing and harm and how heavily they relied on the intervention of God and the saints to influence the outcome. The choice of St Thomas as the saint whose physical representation could act as aid and intercessor is an interesting one, suggesting (as was so often the motive for choosing) that the manner of his death, particularly the violent trepanning, may relate directly to the surgical uses of the instruments contained in the case. Of all the surgeon's interventions, the cutting of flesh and bone was the most dangerous with the ever-present risk of fatal infection. The complex correlation of iconographic details between the depiction of Becket's martyrdom and elements of Christ's sacrifice all centring around spilled and healing blood and the use of blades make this an unusually pertinent image for such an object. The rearrangement of the scene and the alteration of the viewpoint invest the wearer with a much more potent connection to the events depicted, placing him in the scene not parallel to it. Perhaps in this way the power of the martyrdom event and the power of the saint could be more directly felt and experienced, be more 'potent' and therefore more efficacious? The wearer becomes an agent in the event, immediately involved as soon as their eyes fall upon the scene, not glancing past from the side-lines.
The Becket iconography would have been unacceptable within a very few years of its creation. The order for Becket's image to be expunged across the realm was issued by Henry VIII in 1538, probably less than a decade after the engraving was made. Some compliance with the order may be discernible in the somewhat half-hearted cross hatching which covers the Becket figure. A regular series of crossing diagonal lines extends from his head to the bottom of his chasuble, and as far as the attackers on either side, but is not seen on any other part of the box. This points to a deliberate attempt to deface the figure of the saint, but as the figure is still plainly visible it hardly fulfils the royal proclamation's demand that his images should be 'plucked down' and 'razed'. The very limited efforts at defacement may indicate a concern not to damage this useful and important object, as there would be a danger of piercing or buckling the thin silver with any concerted pressure. However, it may also speak of a desire by the owner (and perhaps even subsequent owners) to continue to access the benefits of association with a saint whose healing powers were famous across Europe and whose demise had been at the edge of a blade. The image could feasibly have been enamelled over, or otherwise occluded, but it may also have been saved by its position on the back of the case, known only to the wearer. The plainer leather carrying-case was a further level of concealment. Finally, the radical reimagining of the scene, while still clearly depicting the murder of Becket, may have meant that at a casual glance it could easily be overlooked as such, given the visual cues to both Christ's crucifixion and the Mass of St Gregory. The beauty of the piece, its continued functionality as a piece of precision-made medical kit, and the integral nature of each of its decorative components to the whole, together may have ensured its survival.
Conclusions
In the course of this article, we have argued that the instrument case needs to be understood in terms of both its form and its function. The contents and use of the case are not only reflected in its external decorative schema but explicated and given significance by it. To understand the unusual Becket iconography, it was necessary to view the manufacture and artistry of the case in the context of the highly innovative milieu of late medieval and Renaissance Flemish art. This in turn helps us to 'de-centre' Becket's cult from the traditional focal point of the shrine at Canterbury. The iconography and cult of England's premier medieval saint did not simply radiate out from England, but took on a life of its own across Europe in ways which, as this instrument case shows, were fed back and adopted at the highest levels of English society. In terms of thinking about other Becket objects and material culture, from 'pilgrim souvenirs' to the highest-status items such as these, historians and art historians should be encouraged to widen their focus beyond the shores of England.
This functional yet beautiful object was created to serve a very specific purpose and the needs of its user. Getting away from the idea of patron and artist, and thinking instead about the artistic and iconographic context of its creation and, just as importantly, its use, we should not see this unusual depiction of Becket's martyrdom as simply a late-developing iconographical variant, but in terms of the 'pictorial syntax' of how the image was designed and received in potentially new ways (Marrow 2006, p. 174). Radical changes in the viewpoint, the framing, the depictions of the murderers and the murder weapons were not innovations for their own sake. As noted, they were part of a more affective piety drawing the user/viewer into the scene, but they also linked directly to, and nuanced the understanding of, the function of the case which bore the image. Becket was particularly associated with healing. Yet, by placing particular emphasis on the Mass setting and the blood sacrifice of his martyrdom, the image makes connections with the contents of the box: lancets, probes, and other cutting tools which would themselves spill blood. Rather than show Becket with a head wound, his death is reimagined along Christological lines with long stabbing instruments inflicting precise wounds, again linking to the lancets and 'spaters' also shown on the front of the case. The viewer is invited into the scene by the breaking of the boundary by one of the attackers' feet: is the surgeon invited to think about how he might save Becket from these potentially non-fatal wounds to the back, rather than being shown him with an irreparably shattered cranium? The artist sought to re-orient the image of the martyrdom in a way that would force a new engagement with its meaning on the part of the viewer. Following this cue, we have shown how the image itself sheds new light on the dynamism and regionality of Becket's cult on the eve of the Reformation.
Conflicts of Interest:
The authors declare no conflict of interest.
1 The reference in the Oxford English Dictionary corpus, cited by Schroder, to a 'plaster box... and the cysars therin' from a late 15th century English will is misdated, and is actually from 1591 (Young 1890, p. 530).
The Strengthening Government Policies on Mineral and Coal Mining to Achieve Environmental Sustainability in Indonesia, Africa and Germany
Introduction
Development policy in the management of natural resources is based on the obligation to preserve the environment and achieve the goals of sustainable development. 1 This is especially important in the era of local autonomy, where regions have the authority to develop natural resources to attract foreign exchange and local income. 2 Therefore, natural resources should be managed properly to have a long-term local and national economic impact. To achieve this objective, it is necessary to implement the mandate outlined in Article 33, paragraph (3) of the 1945 Constitution of Indonesia, which stipulates that the earth, water, and the natural wealth contained therein are controlled by the state and used for the people's prosperity. In practice, however, mining imposes heavy burdens on surrounding communities, such as the damage caused by heavy equipment crossing rural roads. 9 Furthermore, many accidents are caused by road damage resulting from mining activities and by the threat of landslides around post-mining areas excavated into unsustainable dead land. Health problems also arise due to the effects of dust, fueling complaints among residents who oppose the mining activities. 10 Mineral and coal mining activities have also contravened the mining management laws, with violations initiated by business actors, while supervision by the communities around mining areas remains weak. 11
The ideal approach to natural resource management policy is based on the principles of sustainable development, encompassing the economic, ecological, and social integration pillars. 12 Previously, the 2014 Local Government Law recognized local governments as the authorized entities for mining management. However, corruption in business licensing, involving both local political leaders and private actors, has become a pervasive issue. 13 Currently, the supervision of mineral and coal mining is delegated to the Provincial Government. However, due to variations in the resources owned by each province and in the extent of their supervision, the management of mineral and coal mining activities remains suboptimal. 14 As reported by the East Kalimantan Mining Advocacy Network (JATAM), the continued operation of 151 illegal mines in four regions of East Kalimantan is a cause for concern, leading to growing indifference among local governments as the issuance of mining permits is centralized by the Central Government. 15 Erina Pane and Adam Muhammad Yanis, in their previous research on the reconstruction of fair mining policies in Lampung Province, revealed that the management of mineral and coal mining must be supported by policies that are environmentally sound, because mining activities carry the possibility of uncontrolled environmental damage; the efforts that local governments can make are to pay attention to permits for managers and to provide guidance and supervision so that mining activities can be controlled and do not damage the environment. 16 Mohammad Jamin and colleagues, in their research on the impact of Indonesian mining industry regulations on the protection of indigenous peoples, revealed that mining regulations reflecting the recognition and protection of indigenous peoples' rights are needed, standardizing reclamation and post-mining management to provide implementation guidelines that suit the needs of the community. 17 This study aims to analyze regulations for the supervision of mining activities in Indonesia in comparison with African countries and Germany. Mining management must be carried out with layered supervision at the national, provincial, district, and even village levels, so that loopholes for the abuse of mining management are reduced; ultimately, mining management must prioritize environmental sustainability in accordance with the mandate of the Indonesian constitution. Based on the description above, the authors analyze and reconstruct government policies in supervising mineral and coal mining to achieve environmental sustainability.
Research Method
This research is based on doctrinal legal research on legal theory (concepts, rules, and principles) regarding mining governance in South Africa, Germany, and Indonesia. The research is explanatory (explaining the law), hermeneutic (interpretation and argumentation), and evaluative (analyzing whether rules work in certain situations, or whether they are in accordance with the desired moral framework, legal principles, and societal goals). Part of the analysis addresses the research question using the supporting disciplines of law and the environment. The authors provide a comparison of rules, cases, principles, and the conceptual framework of legal doctrine between South Africa, Germany, and Indonesia. The research elaborates the research problem within a theoretical framework using relevant legal data, especially normative and authoritative sources. Normative sources include the texts of laws, agreements, general principles of mining and environmental law, and the like. Authoritative sources take the form of case law and scientific legal writings (literature). The research was conducted using a problem-based approach: gathering facts, identifying legal issues, analyzing problems to find potential solutions, and arriving at tentative conclusions.
Government Policy in Supervision of Mineral and Coal Mining
There are two conceptions of the law-based state. The first is the rechtsstaat, known in Continental European states and developed by Immanuel Kant, Paul Laband, Julius Stahl, and Fichte. The second is the rule of law, pioneered in Anglo-Saxon states by A.V. Dicey in England. 18 Julius Stahl stated that the rechtsstaat should have four foundations: the protection of human rights, the division of power, government based on legislation, and a state administrative court. A state under the rule of law should have at least three characteristics: the upholding of the supremacy of law, equality before the law, and guarantees and protection mechanisms for due process of law rights. 19 According to William G. Andrews, the pillars of constitutionalism as a rule-of-law state include agreement on common goals or ideals, the rule of law as the foundation of government or state administration, the form of institutions, and state administrative procedures. 20 Article 28H paragraph (1) of the 1945 Constitution states that everyone has the right to live in physical and spiritual prosperity, to have a place to live, to enjoy a good and healthy environment, and to obtain health services. Article 33 paragraph (2) provides that production sectors which are important for the state and affect the livelihood of the people are controlled by the state; paragraph (3) provides that the earth, water, and the natural wealth contained therein are controlled by the state and used for the people's prosperity; and paragraph (4) provides that the national economy is organized based on economic democracy with the principles of togetherness, fair efficiency, sustainability, environmental perspective, and independence, while ensuring a balance of progress and national economic unity.
Law Number 4 of 2009 concerning Mineral and Coal Mining regulates supervisory authority in Article 6, including: the granting of Mining Business Permits (IUP), guidance, the resolution of community conflicts, and the supervision of mining businesses in cross-provincial areas and in sea areas more than 12 miles from the coastline; the granting of IUP, guidance, the resolution of community conflicts, and the supervision of production operation mining businesses with a direct cross-provincial environmental impact and in sea areas more than 12 miles from the coastline; and guidance and supervision of post-mining land reclamation. Article 74 paragraph (1) of Law Number 32 of 2009 concerning Environmental Protection and Management provides that environmental supervisory officials are authorized to: a) monitor; b) ask for information; c) make copies of documents and necessary notes; d) enter certain places; e) take photographs; f) make audio-visual recordings; g) take samples; h) check equipment; i) check installations and means of transportation; and j) stop certain violations. Paragraph (2) provides that environmental supervisory officials coordinate with civil servant investigators, and paragraph (3) provides that the person in charge of a business or activity may not obstruct an environmental supervisory official in carrying out their duties. In the regulation on the delegation of mining business licensing, the granting of business permits is defined as allowing business actors to start and run their businesses or activities, accompanied by guidance and supervision in mineral and coal mining. According to Article 2 paragraph (1), the delegation includes, among others, supervising the delegated business licensing. Article 2 paragraph (6) states that the supervision referred to in paragraph (5) letter b is implemented with regard to a) good mining engineering principles and b) mining business governance. Moreover, Article 2 paragraph (9) states that where the results of the supervision referred to in paragraph (8) show violations of the good mining engineering and governance principles referred to in paragraph (7), the Governor must follow up through a) coaching or b) administrative sanctions. Based on Article 2 paragraph (11), the delegation referred to in paragraph (1) also covers the administrative sanctions stated in the preceding paragraphs, including a) written warnings, b) temporary suspension of business activities, and c) license revocation. Based on paragraph (9), the administrative sanctions referred to in paragraph (8) are imposed by the Minister or the Governor according to their authority. 28
Regulation of the Minister of Energy and Mineral Resources Number 7 of 2020 concerning Procedures for Granting Areas, Licensing, and Reporting in Mineral and Coal Mining Business Activities regulates supervision in Article 68 paragraph (1). This article states that holders of IUP for Production Operations specifically for processing and refining must: a) prepare and submit the Annual Work Plan and Budget to the Minister or Governor according to their authority in order to obtain approval; b) submit periodic written reports on the Annual Work Plan and Budget and on the implementation of the mining business activities conducted; c) obtain approval to use foreign workers from the agency in charge of manpower affairs; and d) obtain approval, as part of the approval of the Annual Work Plan and Budget, for changes in investment and financing sources, including changes in paid-up and issued capital. 26 Supervision of mining business activities further covers, among others: c) production and marketing; d) finance; e) mineral and coal resource conservation, environmental management, reclamation, and post-mining; f) occupational safety and health and mining operations; g) the use of goods and services, the mastery, development, and application of technology, and domestic engineering and design capabilities; h) mineral and coal data management; i) mining technical manpower development; j) local community development and empowerment; k) other mining business activities that concern the public interest; and l) the quantity, type, and quality of mining business results. Paragraph (3) states that supervision is performed by the local government agencies responsible for mineral and coal mining affairs, while paragraph (4) provides that technical supervision is implemented by the Mining Inspector and mine supervisors appointed by the Governor. Paragraph (5) provides that further provisions regarding the supervision and control referred to in paragraph (1) are regulated in a Governor's Regulation. 32 Article 74 paragraph (1) of Jambi Province Local Regulation Number 11 of 2019 concerning Mineral and Coal Mining Management states that the management of mineral and coal mining is supervised with regard to the resulting business activities. Paragraph (2) provides that the supervision referred to in paragraph (1) is implemented by supervisory officials appointed by the Governor and by Mining Inspector functional officials. Paragraph (3) states that the supervision conducted by the supervisory officials referred to in paragraph (2) includes: a) marketing; b) finance; c) mineral and coal data management; d) mining technical manpower development; e) local community development and empowerment; f) other mining business activities that concern the public interest; g) IUP management; h) the quantity, type, and quality of mining business results; and i) the management of post-mining reclamation in accordance with the provisions agreed by the initiator in the post-mining management plan document. Furthermore, Article 76 paragraph (1) states that the Governor, in carrying out affairs under the authority of the provincial region, may assign tasks to district or city and village local governments based on co-administration. Paragraph (2) provides that such assignments to district or city and village local governments are regulated by a Governor Regulation. 33
Article 148 paragraph (1) of Yogyakarta Regulation Number 39 of 2022 concerning the Implementation of Mining Business Activities for Metal Minerals, Non-Metal Minerals, Certain Types of Non-Metal Minerals, and Rocks states that the Governor may form a Coordinating Team for the Supervision of Risk-Based Business Licensing to supervise the mining sector delegated by the central government. The team comprises elements of: a) technical recommendation providers; b) permit issuers; c) mining supervision; d) law enforcement; e) local government infrastructure policy; and f) governance and the Civil Service Police Unit. Paragraph (2) states that this Coordinating Team must coordinate with the Coordinating Team for the Supervision of Risk-Based Business Licensing at the central level. Additionally, paragraph (3) provides that the establishment of the team referred to in paragraph (1) is stipulated by Governor Decree. 34
Strengthening Government Policies in Germany to Achieve Environmental Sustainability
The history of mining in Germany is very long, dating back to the 13th century. 35 Coal mining has been a major economic driver, supporting German industrialization for centuries, and has played an important role in Germany's socio-economic history. 36 It not only boosted industrialization but also supported the country's recovery after World War II. 37 Mineral resources have played an important role in the European economy, and their sustainable supply was considered so important that various policies were made, for example on mining, development and trade, environmental protection, and securing land and mineral wealth for the sustainability of future generations. 38 The energy transition, decarbonization, and the future of manufacturing will not end the import dependence of the German national economy, but will most likely shift that dependence towards a rapidly increasing need for new raw materials, while domestic resources can contribute to, but not fully meet, the growing demand. 39 Germany implements policy instruments suitable for ending coal-fired power generation at minimum cost in order to achieve national climate targets; climate change mitigation fuels the complexity demonstrated by the controversial issue of phasing out coal power. 40 Greenhouse gas emissions have stagnated in Germany despite the increasing use of renewable energy, which makes the government's energy transition seem inconsistent and triggers discussions about phasing out coal. 41 Germany, Finland, Britain, Portugal, and Greece were the first European countries to introduce and develop their own mineral policies. Germany is one of the countries to have set a goal of climate neutrality by or before 2050 in its national legislation, with a particular focus on reducing greenhouse gas emissions in energy, buildings, transport, industry, and agriculture, positioning it as a pioneer in climate change policy. 42 Regarding the phasing of coal out of the energy sector, the German Bundestag created the legal basis in 2020 for a coal phase-out by 2038, a deadline revised to 2030 by the new coalition government in September 2021. 43 Germany's Federal Constitutional Court ruled that the 2019 Federal Climate Change Act, which has served as the central climate policy reference point for phasing out coal, should be amended and tightened. Studies framing European climate politics and providing critical discourse analysis of the European Green Deal show that the rapid transition to low-carbon development around the world has been contested by discourses that aim to recognize the inseparability of social and ecological concerns. 44 Environmental factors are described as the main cause of forest destruction, while nature conservationists also blame the forestry sector. Forest management practices are identified as key instruments that contribute to solutions, while social responsibility for, and the consequences of, forest destruction are ignored. 45
The cessation of coal use as a whole is no exception, owing to clean-performance discourses and their success in delegitimizing coal as a climate-damaging energy source. 46 For thousands of years, mining has been a source not only of great economic wealth but also of social and environmental concerns. Mining innovations that are in use, or that will be used in the future, can contribute to achieving the SDGs in Europe; the concept of innovation describes not only the synergies between the SDGs but also the trade-offs or imbalances between them. 47 The movement of raw materials can be one of the most challenging tasks in open-pit mining, with truck haulage being the largest factor in mining costs and a generator of large greenhouse gas emissions. Continuous conveyor installations are a real alternative to trucks, as they reduce dead loads, reduce greenhouse gas emissions, and in many cases even reduce costs. For in-pit transport, continuous conveying remains a substitute technology that has not yet been adopted by the German quarrying industry. 48 In Germany, the shutdown of this industry poses major challenges, which have been addressed through a process of regional structural transformation that continues to date; Germany's plan to remove coal from its energy matrix also poses major challenges, which have increased due to the current energy crisis in Europe. In the case of Colombia, the global trend of reducing coal consumption will certainly affect its national finances in the medium term. 49 Communities wanted transparent and understandable information, felt positive about mine water treatment, and generally opposed mine flooding. 50
Implementation of the SDGs can link sustainable mining to a green recovery, drive better environmental performance, improve the circular economy, inform decision-making, and drive innovation and capacity growth. 51 Sustainability in the mining and raw materials sector is a key target on the EU Green Deal agenda, with the objective of providing a tool to evaluate and rank global risk factors that may affect the development of sustainable mining. 52 Innovative and sustainable development based on the Triple Helix Model (THM), as well as the concepts of Open Innovation (OI) and Environmental, Social and Governance (ESG) principles, have been identified as opportunities for the sector's sustainable development. The combination of these solutions should enable the sustainable development of the industry, safeguarding its economic and social interests and reducing its negative impact on the environment. The use of new clean technologies in the operation and burning of coal should reduce the emission of harmful substances into the environment. 53
Sustainable mining in Germany can be linked to legal system theory, namely legal structure, legal substance, and legal culture. The legal structure responsible for supervising and regulating sustainable mining activities in Germany includes the following institutions. The Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety (BMUB) is responsible for the development and implementation of environmental, nature conservation, and nuclear safety policies in Germany; BMUB also pays attention to sustainable mining and promotes the development of environmentally friendly technologies in the mining sector. The Federal Institute for Geosciences and Natural Resources is an independent research institute under the auspices of BMUB and is responsible for research and development of mineral and energy resources in Germany. The Federal Mining Commission is responsible for issuing mining permits and overseeing mining activities in Germany. The Federal Environment Agency (UBA) is responsible for research and development related to the environment and nature conservation in Germany and provides advice to the government regarding sustainable environmental and nature conservation policies. Local government institutions are responsible for supervising mining activities and the management of mineral resources in their respective areas. The German Mining Industry Association is the organization representing the mining industry in Germany, including mining companies, mining equipment manufacturers, and related service providers; it plays a role in promoting sustainable mining activities in Germany and contributes to the development of technology and innovation to ensure responsible and sustainable mining operations. 54
The following are some of the regulations related to sustainable mining in Germany. The German Federal Mining Act is the basic mining law in Germany; it provides the legal framework to regulate mining activities and to protect the environment and public health. The German Environmental Protection Act provides the legal framework to prevent and mitigate the negative impacts of industrial activities on the environment and human health; it covers the regulation of pollutant emissions and waste, the management of hazardous and toxic materials, and the control of noise and vibrations. The German Nature Protection Act protects and conserves nature and wildlife in Germany, including the protection of biodiversity, flora, and fauna. German government regulations on underground mining govern the technical rules for underground mining activities, while the corresponding regulations on open-pit mining govern the technical rules for open-pit mining activities. The Mining Transparency Initiative is a global framework that gives communities open access to the contracts and payments between mining companies and governments, in order to prevent corruption and ensure fair and sustainable resource management. 55 The legal culture in Germany greatly influences the way mining regulations are implemented. The government and people in Germany are very aware of the importance of the environment and sustainability, so mining regulations are very strict in order to protect the environment and public health. This is reflected in the implementation of strict environmental standards and stringent environmental monitoring. 56 The analysis of sustainable mining in Germany shows that there is a close relationship between legal system theory and the success of regulating sustainable mining. An effective legal structure that pays attention to environmental and public health perspectives, together with strict legal substance to regulate mining activities, can create a better legal culture and strengthen public awareness of the importance of sustainable mining.
Strengthening Government Policies in Africa to Achieve Environmental Sustainability
Based on historical documentation and pictorial representations, mining in African countries has been going on for centuries, as in the case of diamond mining in southern Africa. 57 Large-scale extractive industries contribute significantly to revenues in mineral-rich African countries, yet little is known about their effectiveness, or about the approaches that could better align their contributions with the sustainable development of local communities and the environment around their mining sites. 58 Despite being blessed with abundant mineral wealth, the continent continues to be plagued by rampant poverty and struggles to industrialize. 59 Mining companies around the world have been major drivers in the creation of mining towns, as has been the case in Australia, Canada, and South Africa. 60
Minerals contribute to the economy and social welfare of the people of African countries, but the impact of mining can also create a dire situation for the economy and security of a country, including the social welfare of its people. The positive benefits of mining for the community are increased direct employment and an improved quality of life, while the negative impacts are increased migration, inadequate infrastructure, and poor services; there is a need to build a better foundation and understanding of the positive and negative impacts of mining. 61 Uncritical government policies in Africa towards mining and development require a more careful approach, followed by alternative approaches for safe mining activities. 62 Technically, mining companies must switch to technology that can produce sustainable products, so that technology confirms that companies are ready to switch to sustainable business models. 63 The Department of Mineral Resources and Energy (DMRE) in South Africa has reformed the Mineral and Petroleum Resources Development Act with the aim of pro-environmental management of mining, thereby creating opportunities for historically disadvantaged people to benefit from South Africa's mineral resources. 64 The Sustainable Development Goals (SDGs) have been adopted by countries and corporations around the world, including large mining companies. The SDGs can be operationalized in the mining industry by defining and measuring a series of SDG indicators for mining host communities. This new SDG barometer for the mining industry focuses on communities at mining locations; the opportunity exists to increase monitoring of the mineral industry to promote inclusive socio-economic development. 65 Mining management reform needs national and regional political support so that ongoing reforms fit the existing agenda. 66 Sustainable mining reform can be linked to broader development initiatives, such as achieving the SDGs. 67 State and peacebuilding strategies increasingly feature natural resource governance reforms that seek to ensure that natural resource management is legal, transparent, and beneficial for lasting peace and development. 68 Mineral-rich African countries should optimize the benefits derived from emerging Earth observation technologies and related spatial data to measure the mining sector's contribution to the SDGs. 69
The legal structure for the implementation of mining activities in Africa includes the following institutions: the Ministry of Mines, responsible for monitoring and regulating mining activities at the national level; the Environmental Management Agency, responsible for monitoring and regulating mining activities that have an impact on the environment; the Occupational Health and Safety Supervisory Agency, responsible for monitoring and regulating occupational health and safety at mine sites; the Mining Council, which provides advice and recommendations to the government in making decisions related to policies and the regulation of mining activities; and the Local Community Committee, which plays a role in involving local communities in decision-making related to mining activities and in strengthening corporate social responsibility. 70 The legal substance regulating mining activities in Africa includes: the National Environmental Management Act, 1998 (South Africa), which regulates environmental management and the protection of natural resources related to mining activities; the Mining Act, 2010 (Tanzania), which governs the management of mineral resources and exploration rights, as well as the requirements for mining permits; the Mining and Quarrying Safety and Health Regulation, 2017 (Nigeria), which provides guidelines and requirements relating to occupational health and safety in mining activities; the Code Minier, 2015 (Mali), which regulates requirements and obligations for mining companies, including environmental protection and occupational health and safety; and the Mining and Minerals Policy, 2013 (Zambia), which establishes a policy framework for the management of mineral resources in Zambia, including environmental and occupational health and safety requirements. These regulations are examples of how governments in Africa are strengthening the regulations and requirements relating to mining activities, with the aim of ensuring that these activities are carried out in a sustainable manner that respects the environment and the rights of local communities. 71
Environmentally friendly practices are possible in mining activities, even in small companies. These practices can enhance the image of mining companies and help them achieve a better environmental balance. Even though companies that have adopted environmental practices are satisfied with the results, there is still a long way to go; therefore, support from government and professional associations seems necessary to encourage the adoption of these practices 72. The analysis of sustainable mining in African countries shows that there is a close relationship between legal system theory and success in managing sustainable mining. An effective legal structure attentive to environmental and public health perspectives, together with strict legal substance regulating mining activities, has not yet been able to create a better legal culture or strengthen public awareness of the importance of sustainable mining.
The Strengthening Government Policies in Indonesia to Achieve Environmental Sustainability
Laws and regulations should be formed based on public needs. Nonet and Selznick demonstrate this approach through examples of good due process. This concept allows the procedural regularity of decision-making to follow from established legal rules. The ideal responsive law demands a more flexible interpretation that views the rule of law as bound to specific problems and contexts. Therefore, responsive law can be interpreted as a concept initiated to meet certain demands. In this regard, the law is made more responsive to urgent social circumstances and needs, as well as to social justice issues 73. This means that responsive law is a theoretical model offering something beyond the procedural process. The law is a facilitator that recognizes public desires and secures public commitment to realizing substantive justice. The characteristic of responsive law is the shift of emphasis from rules to principles and goals; furthermore, it prioritizes the will of the public as a legal objective 74. Lawrence M. Friedman stated that every legal system comprises three sub-systems: legal substance, structure, and culture. Legal substance includes the material as outlined in statutory regulations. The legal structure concerns implementing institutions, their authority, and personnel or law enforcement officials. Meanwhile, legal culture concerns the behavior of society 75. These three elements influence law enforcement in a society or state and synergize to achieve justice. Sustainable development is a conscious and planned effort that integrates environmental, social, and economic aspects into a development strategy 76. The goal is to ensure environmental integrity as well as the safety, capability, welfare, and quality of life of present and future generations. Preservation of environmental functions encompasses efforts to maintain the continuity of the environment's carrying capacity. In this regard, natural resource management should meet the ideal prerequisites. This requires protecting natural resources and the surrounding community's welfare. Additionally, the social community must support natural resource utilization activities 77.
Government policies in the supervision of mineral and coal mining can now be dissected with the legal system analysis of Lawrence M. Friedman, with the following results. Legal substance: in the legislation related to government policy in the supervision of mineral and coal mining, since Law Number 23 of 2014 concerning Regional Government came into force, the authority for permits, guidance and supervision of districts/cities has been handed over to the province; this authority must still be given because some provincial areas are very large, making the supervision that needs to be carried out by the province difficult, and therefore changes are needed to Presidential Regulation Number 55 of 2022 concerning the Delegation of Granting Business Permits in the Mineral and Coal Mining Sector, which would authorize districts/cities to supervise the implementation of mining activities as part of the co-administration mandated in Article 91 of Law Number 23 of 2014 concerning Regional Government. The legal structure related to government policy in the supervision of mineral and coal mining currently consists of only 2 (two) institutions, namely the Directorate General of Mineral and Coal of the Ministry of Energy and Mineral Resources and the Mining and Energy Services of the Provincial Government, while Indonesia is the largest archipelagic country after the United States, with a total of 13,465 islands, a land area of 1,922,570 km2 and a water area of 3,257,483 km2 with different geographical locations 78. Therefore, it is necessary to have district/city governments that can assist in supervising the management of mineral and coal mining throughout Indonesia. As for the legal culture, the supervision of mining activities has been regulated in Law Number 4 of 2009 concerning Mineral and Coal Mining, Article 113 paragraph (4): the temporary suspension as referred to in paragraph (1) letter c can be carried out by a mine inspector or carried out based on community requests to the Minister, governor or regent/mayor in accordance with their authority. In the elucidation of Article 113 paragraph (4), the community's request contains an explanation of the condition of the environmental carrying capacity of the area associated with mining activities. The community, through the Mining Advocacy Network (JATAM), has carried out a supervisory function based on its duties and functions as an institution that has contributed to increasing legal awareness related to monitoring mining management.
The analysis shows an imbalance in the supervision of mineral and coal mining. The first imbalance relates to legal substance. Since the enactment of Law Number 23 of 2014 concerning Local Government, the authority for permits, guidance, and supervision for districts or cities has been handed over to the province. This authority should be given because some provincial areas are very large, hampering supervision by the provinces. Therefore, it is necessary to amend Presidential Regulation Number 55 of 2022 concerning the Delegation of Granting Business Permits in the Mineral and Coal Mining Sector. This amendment would authorize districts or cities to supervise mining activities as part of the assistance task mandated in Article 91 of Law Number 23 of 2014 concerning Local Government. The legal structure in supervising mineral and coal mining only comprises the Directorate General of Mineral and Coal, the Ministry of Energy and Mineral Resources, as well as the Mining and Energy Service of the Provincial Government. This happens despite Indonesia being the largest archipelagic state after the United States. Indonesia has 13,465 islands, a land area of 1,922,570 km2, and a water area of 3,257,483 km2 with different geographic locations 79. Therefore, district and city governments should help supervise mineral and coal mining management throughout Indonesia. The community's legal culture has been formed with the existence of the JATAM institution. This institution has executed a supervisory function based on its duties and functions to help increase legal awareness related to mining management supervision. The nature and purpose of supervision can be preventive or repressive. Preventive supervision entails preventing inappropriate mining activities through monitoring by ministries, provinces, districts, and cities to avoid environmental damage. Meanwhile, repressive supervision is implemented by law enforcement officials regarding unpermitted activities. There is an urgency to strengthen government policies regarding the supervision of mineral and coal mining. The goal is to realize environmental sustainability through amendments to Presidential Regulation Number 55 of 2022 concerning the Delegation of Granting Business Permits in the Mineral and Coal Mining Sector. This necessitates granting authority to districts and cities to supervise mineral and coal mining activities. Therefore, the role of district or city governments in realizing environmental sustainability should be consistent with local services as agencies overseeing the environment.
Conclusion
The management of natural resources should be conducted in a sustainable and environmentally responsible manner, as prescribed in the Constitution of Indonesia. As mineral and coal mining involve non-renewable resources found in the earth, it is imperative that efforts are made to minimize the negative environmental impacts associated with such exploitation. This necessitates the implementation of comprehensive and collaborative supervision among the central, provincial, and district or city governments. However, the current government policies regarding the supervision of mineral and coal mining fall short of optimal standards. The authority of district or city governments in supervising mineral and coal mining activities was revoked by Local Government Law Number 23 of 2014, leading to a gap in supervisory institutions at the district or city level. In light of this, this research recommends the improvement of policies through amendments to Presidential Regulation Number 55 of 2022 regarding the delegation of granting business permits in the mineral and coal mining sector. This would allow districts and cities to supervise mineral and coal mining activities, ensuring that their role in promoting environmental sustainability aligns with regional services considered as environmental oversight agencies. Meanwhile, mining management consistent with environmental sustainability is very advanced in Germany and can serve as an example, while in African countries mining management is moving towards approaches that pay attention to the environment.
A mining business encompasses investigation, exploration, feasibility research, construction, mining, processing, refining, transportation, sales, and post-mining activities. In the old Law of 2009 concerning Mineral and Coal Mining, Article 1(29) defined a mining area (WP) as a region with mineral or coal potential. The area is not bound by government administrative restrictions and is part of the national spatial planning. Article 1(32) of Law Number 4 of 2009 concerning Minerals and Coal states that the WPR is the part of the WP where mining business activities are carried out. Moreover, Article 1(6) explains that a mining business is a mineral or coal exploitation activity encompassing investigation, exploration, feasibility research, construction, mining, processing, refining, transportation, sales, and post-mining. Law Number 3 of 2020 on amendments to Law Number 4 of 2009 concerning Mineral and Coal Mining divides the mining business into mineral mining and coal mining. Mineral mining is classified into 1) radioactive minerals, including radium, thorium, and uranium; 2) metals, such as gold and copper; 3) non-metals, such as bars and bentonite; and 4) rock, including andesite, clay, urug soil, excavated gravel, and urug sand 24. Law Number 11 of 2020 concerning Job Creation states in Article 162 that people obstructing the activities of holders of IUP, Special, People's, or Rock Mining Permits that meet the requirements in Article 86F (b) and Article 136 paragraph (2) shall be imprisoned for one year or fined IDR 100,000,000.00. This means that Article 162 could reduce community participation and concern about mining in the surrounding environment. Government Regulation Number 55 of 2010 concerning Guidance and Supervision of the Implementation of Mineral and Coal Mining Business Management regulates supervision in Article 13 (1). This article states that the Minister shall supervise the mining business management conducted by provincial and regency or city governments. Furthermore, Article 13 (2) states that Ministers, Governors, Regents, or Mayors shall supervise the implementation of mining business activities performed by IUP, IPR, or IUPK holders. Article 36 (1) reads that the Mining Inspector conducts supervision through a) evaluating periodic or occasional reports, b) periodic inspection or inspection at any time, and c) assessing the implementation of programs and activities. The authority for supervision by the district or city government was withdrawn from the local to the Provincial Government 25. Government Regulation of Indonesia Number 22 of 2021 concerning the Implementation of Environmental Protection and Management regulates the control of environmental damage in Article 272. Letter (h) relates to land resulting from mining businesses and activities. Paragraph (5) states that the standard criteria for environmental damage referred to in paragraph (2) letters (f) to (i) are stipulated in a Ministerial Regulation. Government Regulation Number 96 of 2021 concerning the Implementation of Mineral and Coal Mining Business Activities regulates supervision in Article 150 paragraph (4). This article states that in case of the circumstances referred to in paragraph (2), the suspension is granted based on a) the results of supervision carried out by the Minister, and b) a request from the community. Article 192 reads that the Minister can delegate the authority to appoint officials responsible for supervising mining business activities to Governors as Central Government representatives 26; this authority cannot be sub-delegated to district or city local governments 27.
Article 2 of the Minister of Energy and Mineral Resources Regulation Number 26 of 2018 concerning the Implementation of Good Mining Principles and Supervision of Mineral and Coal Mining regulates a) the implementation of good mining principles, b) supervision of mining business management, and c) supervision of the implementation of mining business activities. In Article 45, the Minister and Governors are authorized to supervise the good mining engineering principles stated in Article 3 paragraph (2) (a), the processing and purification technical principles in Article 4 paragraph (2) (a), and the good mining service business technical principles in Article 5 paragraph (2) (a). Furthermore, under Article 50 paragraph (2), the approval of the Annual Work Plan and Budget referred to in paragraph (1) letters (a), (b), and (d) is granted after evaluating the mining business activity monitoring results from the previous year 29. Local Regulation of Yogyakarta Number 1 of 2018 concerning the Management of Metal Mineral, Non-Metal Mineral, and Rock Mining Businesses regulates supervision in Article 104 (1). The article reads that mining business activities are supervised in an integrated manner by a) the River Basin Office, b) the Mine Inspector, c) the Organization of Local Energy and Mineral Resources, d) the Environmental Local Equipment Organization, and e) the Organization of local spatial layout. Local Regulation of Central Sulawesi Province Number 2 of 2018 concerning the Management of Mineral and Coal Mining regulates supervision in Article 5. According to this article, the authority of the Provincial Government in managing mineral and coal mining covers coordinating permits, supervising the use of explosives in mining areas, and guiding and overseeing reclamation and post-mining 30. Local Regulation of West Kalimantan Province Number 3 of 2018 concerning Mineral Mining states in Article 67 (1) that the Mining Business is managed by the Governor and technically conducted by the Service. Paragraph (2) reads that organizing the Mining Business management referred to in (1) requires a) reporting on the organization and execution of Mining Business activities under their authority at least once in six months to the Minister, b) managing Mineral and Coal Mining Business data, and c) preparing and determining a blueprint for community development and empowerment based on the Director General's consideration 31.
Article 111 paragraph (1) of Local Regulation of South Kalimantan Province Number 5 of 2019 concerning Mineral and Coal Mining Management reads that mineral and coal mining management is supervised and controlled with regard to business activities. According to paragraph (2), the supervision and control referred to in paragraph (1) include a) the administration of permits related to the mining business and b) mining technical matters. Law Number 23 of 2014 concerning Local Government states that government affairs are implemented by the Local Government and the Local People's Representative Council. This is based on the principle of autonomy and co-administration of the Unitary State of Indonesia as referred to in the Law of Indonesia. Government supervision of mineral and coal mining is regulated in the Constitution, Laws, Government, Ministerial, Local, and Governor Regulations. Article 20A (1) of the 1945 Constitution states that the People's Representative Council has legislative, budgetary, and supervisory functions. Moreover, under Article 22D (3), the Local Representatives Council supervises the laws regarding local autonomy as well as regional formation, expansion, and merging. It also supervises central-local relations, the management of natural and economic resources, the state revenue and expenditure budget, taxes, education, and religion. The supervision results are conveyed to the People's Representative Council for consideration. Article 28H (1) guarantees every person the right to a good and healthy environment. Among the supervisory authorities formerly granted to district and city governments were supervising the use of explosives in mining areas according to their authority and (m) fostering and supervising post-mining land reclamation. Furthermore, the supervisory authority possessed by the District and City Governments under Article 8 of Law Number 4 of 2009 concerning Mineral and Coal Mining was canceled based on Law Number 23 of 2014 concerning Local Government.
Supervision of the implementation of government affairs by districts/cities is conducted by the Governor representing the Central Government. Article 373 (1) states that the Central Government shall guide and supervise the implementation of provincial Local Government. Paragraph (2) states that the Governor, as the representative of the Central Government, guides and supervises the implementation of district and city Local Government. Furthermore, the Appendix to Law Number 23 of 2014
concerning Local Government, on the Concurrent Division of Governmental Affairs between the Central and Provincial Governments and Local Regencies or Cities, in the Distribution of Government Affairs in the Energy and Mineral Resources Sector, Mineral and Coal Sub-Sector, states that no supervision authority is exercised by the Regency or City Government 23. Presidential Regulation Number 55 of 2022 concerning the Delegation of Granting Business Permits in the Mineral and Coal Mining Sector regulates supervision in Article 1. | 2023-05-10T15:09:28.521Z | 2023-05-02T00:00:00.000 | {
"year": 2023,
"sha1": "55baf8fa5974212db376ebd9457b7d33bed9e0cf",
"oa_license": "CCBY",
"oa_url": "https://jurnal.uns.ac.id/bestuur/article/download/71279/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "11d6b63e871c3357eada2970c6c898a60a42fbe8",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
248506013 | pes2o/s2orc | v3-fos-license | Exotic heavy hadrons with a three-body nature
In this talk we present a summary of our latest results on the investigation of three-body systems with explicit/hidden charm and with explicit/hidden strangeness. To be more concrete, in the case of the explicit strangeness quantum number, we pay attention to the $K D D$ and $K D\bar D^*$ systems, where, in the former, a charm $+2$, isospin $1/2$ and strangeness $+1$ state is obtained with a mass around 4140 MeV, while in the latter, a $K^*$ state, with hidden charm, and a mass close to 4307 MeV is found. In the case of hidden strangeness, the $DK\bar K$ system is studied, while in the sector with hidden charm and no strangeness, the $ND\bar D^*$ system is investigated. In the former, a $D$ meson with mass around 2900 MeV is found to be generated, while in the latter several $N^*$ and $\Delta^*$ states with hidden charm are obtained with masses about $4600$ MeV. All these states constitute predictions of our model, and future experimental confirmation of them would be relevant to elucidate the properties of the strong interaction in the presence of heavy quarks.
Introduction
Within the present theory of Quantum Chromodynamics, nature allows the existence of exotic mesons and baryons, i.e., those whose quantum numbers can be reproduced with more than a $q\bar q$ seed for mesons and $qqq$ for baryons. Indeed, the formation of such hadrons has been confirmed experimentally, with some recent findings being those of the LHCb and BESIII collaborations related to the formation of multi-quark states with hidden charm, with and without strangeness [1][2][3]. Inside the category of exotic hadrons, particularly interesting are those with several units of the charm and bottom quantum numbers. Such research would definitely be crucial in shedding light on the working of the strong interaction in the presence of heavy quarks. However, it is still not clear whether the properties of the exotic mesons and baryons found can be understood in terms of compact tetraquarks or as molecules of hadrons, and a lot of theoretical and experimental effort is presently being put into clarifying this issue.
In this talk we summarize the results we found for several three-body systems involving heavy mesons, producing explicit and hidden charm as well as explicit and hidden strangeness. In particular, we report our findings for the $KDD$, $KD\bar D^*$, $DK\bar K$ and $ND\bar D^*$ systems.
Formalism and Results
The solution of the Faddeev equations [4] to study three-body systems turns out to be a quite useful method to investigate the formation of exotic mesons and baryons with a molecular nature. Considering as kernel for the Faddeev equations the two-body $t$-matrices obtained from the resolution of the Bethe-Salpeter equation in its on-shell factorization form [5], several three-body systems have been studied in the last years and the generation of mesons and baryons with a molecular nature has been claimed [6][7][8].
The advantage of this procedure is that not only the Bethe-Salpeter equation

$t = v + v\,g\,t = [1 - v\,g]^{-1}\,v, \quad (1)$

where $v$ is the kernel and $g$ is a loop function of two hadrons, becomes an algebraic equation. The Faddeev equations

$T^i = t^i + t^i\,G\,[T^j + T^k], \quad i \neq j \neq k = 1, 2, 3, \quad (2)$

where $t^i$, $i = 1, 2, 3$, is the two-body $t$-matrix describing the interaction of the pair of particles $(jk)$, $i \neq j \neq k$, and $G$ is a three-body loop function, become algebraic equations too under certain approximations [6][7][8]. In this way, once the kernel $v$ in Eq. (1) is obtained, the calculation of the three-body $T$-matrix for the system, $T = T^1 + T^2 + T^3$, gets tremendously simplified, even if a large number of coupled channels need to be considered.
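Because the on-shell factorization turns Eq. (1) into a matrix relation, solving it numerically amounts to a single matrix inversion per energy. The following minimal sketch in Python illustrates this for a toy two-channel case; the kernel entries and loop-function values are placeholders, not the actual chiral amplitudes or regularized loop functions of the references.

```python
import numpy as np

def t_matrix(v, g):
    """Solve the on-shell Bethe-Salpeter equation t = v + v g t,
    i.e. t = (1 - v g)^(-1) v, for n coupled channels at a fixed energy.

    v : (n, n) kernel matrix
    g : (n,)   diagonal two-hadron loop function
    """
    n = len(g)
    return np.linalg.solve(np.eye(n) - v @ np.diag(g), v)

# Toy two-channel example with placeholder numbers.
v = np.array([[-2.0, 0.5],
              [ 0.5, -1.0]])
g = np.array([-0.01 + 0.002j, -0.008 + 0.0j])

t = t_matrix(v, g)
print(t)
# Dynamically generated states (e.g. D*_s0(2317), X(3872)) would show up
# as poles of t when this is scanned over the energy sqrt(s).
```

In practice one scans the energy, recomputing $v$ and $g$ at each point, and looks for poles of $t$ in the complex plane; the same inversion structure carries over to the algebraic form of the Faddeev equations (2).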
In this approach, the kernel $v$ in Eq. (1) is determined by using effective Lagrangians based on the relevant symmetries for the problem under investigation, like the chiral and heavy quark symmetries of the strong interaction [9,10]. The solution of Eq. (1) within coupled channels reveals the generation of two-hadron molecular states. In particular, the $DK$, $D_s\eta$ interactions form the state $D^*_{s0}(2317)$ [11][12][13][14]. The $D\bar D^*$ system in isospin 0 generates $X(3872)$ while in isospin 1 the $Z_c(3900)$ is formed [15][16][17][18][19]. The $K\bar K$ interaction gives rise to $f_0(980)$ [5], while the $ND/ND^*$ system originates $\Lambda_c(2595)$ [20][21][22]. Considering these ingredients, the solution of Eq. (2) for the $KDD$ system and its coupled channels shows the generation of a state with isospin 1/2, charm +2, strangeness +1 and mass around 4140 MeV [23]. Interestingly, this state arises when $D^*_{s0}(2317)$ is formed in one of the subsystems. As a consequence of this, the three-body state obtained can decay to two-body channels like $DD_s$, $D^*D_s$ and $DD^*_s$ through triangular loops, producing a small width. These decay mechanisms were investigated in Ref. [24], finding a total width for the state of 2-3 MeV. A preliminary search for a state with charm +2 and mass around 4100 MeV has recently been conducted by the Belle collaboration [25] and no clear signal was found; however, more precise data are necessary to obtain better information about the existence of such a state. In the case of the $KD\bar D^*$ system, a $K^*$ meson with mass around 4307 MeV is obtained from the resolution of Eq. (2). In this case, the three-body state is generated when the $DK$ system in isospin 0 forms $D^*_{s0}(2317)$ and the $D\bar D^*$ system produces $X(3872)$ in isospin 0 and $Z_c(3900)$ in isospin 1 [26]. Since in our formalism $Z_c(3900)$ can be considered as a state generated from the $D\bar D^*$ and $J/\psi\pi$ coupled channels, it can decay to $J/\psi\pi$ [27], producing a width for $Z_c(3900)$ of around 30 MeV. When implementing this width into our three-body calculation, the $K^*(4307)$ state obtains a width of around 18 MeV. In view of the nature found for this three-body state, $K^*(4307)$ can decay to two-body channels too, like $J/\psi K^*(892)$, $\bar D D_s$, $\bar D^* D_s$ and $\bar D^* D^*_s$, through triangular loops. These decay widths were determined in Ref. [28], finding $\sim 7$ MeV for $J/\psi K^*(892)$, $\sim 0.5$ MeV for $\bar D^* D_s$, $\bar D^* D^*_s$ and $\sim 1$ MeV for $\bar D D_s$. The fact of having an important $Z_c(3900)$ molecular component in the wave function of $K^*(4307)$ indicates that the state can also easily decay to a channel like $J/\psi\pi K$. In this way, the reconstruction of the $J/\psi\pi$ invariant mass in processes involving these particles in their final states can be a promising way of observing this $K^*(4307)$ [29]. For the $DK\bar K$ system, by solving Eq. (2), a $D$ meson with mass around 2900 MeV and width of 55 MeV is found [33] when the $K\bar K$ subsystem generates $f_0(980)$. A state with such an internal structure can decay to two-body channels like $D^*\pi$, $D^*_s\bar K$ and $D^*_{s0}\bar K$, out of which the largest decay width comes from $D(2900) \to D^*_{s0}(2317)\bar K$ [34]. Structures around 3000 MeV have been observed by the LHCb collaboration in the $D^*\pi$ and $D\pi$ invariant masses [35] and could correspond to the $D(2900)$ predicted [34].
In the hidden charm and null strangeness sector, the study of the $ND\bar D^*$ system reveals the formation of several narrow $N^*$ and $\Delta^*$ states with masses around 4400-4600 MeV and positive parity. In this case, the states are obtained when the $D\bar D^*$ system generates $X(3872)$ and $Z_c(3900)$ and the $ND/ND^*$ system clusters as $\Lambda_c(2595)$ [36]. The investigation of two-body decay channels like $J/\psi N$ and $\bar D^{(*)}\Sigma_c$, which is a consequence of the nature found for these three-body states, can be useful in finding signals for such exotic hadrons.
Conclusions
In this talk we have presented our latest results for the generation of exotic mesons and baryons as a consequence of the three-body dynamics involved in several systems constituted by heavy mesons, like $KDD$, $KD\bar D^*$, $DK\bar K$ and $ND\bar D^*$. In particular, in the $KDD$ system we find a state around 4140 MeV with charm +2 and strangeness +1. The study of the $KD\bar D^*$ and $ND\bar D^*$ systems reveals the formation of Kaon and Nucleon/Delta resonances with hidden charm, respectively, with masses in the region of 4000-4500 MeV. In the hidden strangeness sector, a $D$ meson with mass about 2900 MeV is obtained as a consequence of the dynamics involved in the $DK\bar K$ system. | 2022-05-04T01:15:51.049Z | 2022-05-02T00:00:00.000 | {
"year": 2022,
"sha1": "8da8c9264dd4ad3be687b5996f661b3eda9b4918",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8da8c9264dd4ad3be687b5996f661b3eda9b4918",
"s2fieldsofstudy": [
"Physics",
"Education"
],
"extfieldsofstudy": [
"Physics"
]
} |
259750492 | pes2o/s2orc | v3-fos-license | PEDAGOGICAL CONDITIONS FOR THE FORMATION OF AN EFFECTIVE INFORMATION AND LEARNING ENVIRONMENT IN HIGHER EDUCATION INSTITUTIONS
The article deals with the information and learning environment as a component of the integral educational system in higher education institutions. The main function of the information and learning environment, which is ensuring the coordinated functioning of the subjects of educational activity, has been established. The concept of «effective information and learning environment» has been introduced. It is considered as a subject of educational interaction, taking into
consideration its hierarchical links to other components of the educational process in higher education institutions. On the basis of the conducted analysis of scientific works, it has been established that the process of the effective use of the ILE in higher education institutions needs the improvement of all its components (motivational, content, didactic, technological and managerial).
The research goal. The goal of our study is to identify the pedagogical conditions for the formation of an effective information and learning environment in higher education institutions, and to develop and experimentally verify a methodological system for improving the IT competence of research and teaching staff.
THEORETICAL BACKGROUND
Theoretical and methodological justification of the organization and conducting of distance learning in the conditions of continuous pedagogical education was carried out in modern regulatory documents [10], [11] and research of some scientists [13] - [24].
The following approaches are considered to be dominant in our study: holistic, competence-futurological, andragogical and narrative-digital.
Interpretation of the information and learning environment as a subject of educational interaction in the multi-subjective educational space, that is, as a network subject of the educational process, is a conceptual basis for developing and theoretically justifying the singled-out pedagogical conditions. At the same time, we keep to the statement regarding the multi-subjective educational paradigm, which is considered as open, self-developing and self-organizing, leading to a radical change in the behaviour and relationships of participants in the educational process. The teaching principle concerning the inseparability of the organism from its environment is also decisive: «An organism is impossible without an external environment that supports its existence; therefore, the scientific definition of an organism should also include the environment that affects it. Since the existence of an organism is impossible without the latter, disputes about what is more important in life, either the environment or the body itself, do not make any sense» [27, p. 58]. The statement of synergetics about the non-additivity of the whole to the sum of its parts was also taken into account. The combined use of the mentioned approaches will ensure the integral functioning of the educational system in higher education institutions and yield an increased positive result through its emergent manifestation.
The position of foreign scientists [17] was also taken into consideration. According to this position the process of forming the students' information and communication competence should be built in two directions: 1) «learning-to-use» which refers to the acquisition of skills in the use of ICT and digital technologies for personal needs and professional activity, 2) «using-to-learn» which focuses on ways of integrating infocommunications and digital technologies into the educational process, improving the effectiveness of acquiring basic competences through the use of these technologies in future professional activities.
METHODS
To achieve the abovementioned objectives, a number of methods have been used. Theoretical methods: comparative analysis, to find out different views on the issue, identify areas of study, and identify the pedagogical conditions for the effective functioning of the ILE; modelling, to develop a model of a methodological system for improving the IT competence of research and teaching staff in higher education institutions; constructing, to develop the content component of the methodological system and the criteria apparatus for the research; systematization and generalization, to formulate conclusions and recommendations for improving the educational process with the aim of improving the functioning of the ILE in higher education institutions.
Empirical methods: generalization of pedagogical experience, scientific observation, interviews, content analysis, and questionnaires, in order to determine the state of implementation of the issue in practice and to develop the content of the experimental teaching methodology; a pedagogical experiment, which provided verification of the effectiveness of the proposed methodology; and methods of expert evaluation, to identify the didactic quality of the developed experimental materials.
Experimental research has been carried out on the basis of Ternopil V. Hnatiuk National Pedagogical University (TNPU) and Sumy A. Makarenko State Pedagogical University.
The effectiveness of the proposed methodology was checked during the forming experiment, which lasted for two years (the 2019-2020 and 2020-2021 academic years) in the process of future teachers' professional training. The number of scientists and teachers of higher education institutions who participated in the experiment was 292, and the number of students was 433.
FINDINGS
We understand an effective ILE as the mutually coordinated functioning of all subjects of educational activity (students, the ILE, and research and teaching staff), and the formation of an effective ILE as the ensuring of the mutually coordinated functioning of all subjects of educational activity with the aim of increasing the quality of education. We interpret the pedagogical conditions for the effective functioning of the ILE as the complex of motivational, content, didactic, managerial and technological resources and initial principles, the creation and realization of which will contribute to the improvement of the process of future specialists' training.
We conducted a survey of both students and research and teaching staff with the aim of identifying the pedagogical conditions for the effective functioning of the ILE in higher education institutions (questionnaires «Implementation of distance learning from the student's point of view» [25] and «Implementation of distance learning from the teacher's point of view» [26]). Students' answers to the question «Do you know the themes for your individual work?» showed that, out of 440 respondents, 35.5% are familiar with them, 45.5% are partially acquainted with them, and for 19.1% they are unknown. The quality of the methodological support developed by teachers for independent work was rated by 438 students: 50.9% consider it excellent, 17.6% good, 22.8% satisfactory, and 8.7% unsatisfactory. The level of equipment of the classrooms with the tools necessary for independent work was rated by 437 students: 20.8% evaluated it as excellent, 46.7% as good, 22.7% as satisfactory, and 9.8% as unsatisfactory. Herewith, out of 432 respondents, 12.7% do their independent work in the library, 18.1% in the classroom, 4.6% in an Internet cafe, 90.3% at home, and 1.4% in the dormitory.
The results of the conducted questionnaire showed that, out of 432 students, 9.3% evaluated the readiness of electronic courses to support independent work as excellent, 48.1% as good, 25.5% as satisfactory, and 9.3% as unsatisfactory. Herewith, 36.8% of respondents give preference to printed resources and 63.2% prefer electronic resources. Internet search services for independent work are often used by 86.5% of respondents, 11.6% use them sometimes, and 1.9% use them rarely (430 answers).
Answering the question «How do you assess the level of readiness of the university for the introduction of distance learning?», out of 434 respondents, 34.3% consider it to be high, 52.3% medium, and 13.4% low. Out of 431 respondents, 50.6% have a positive attitude to distance learning, 29.7% a neutral one, and 19.7% a negative one. Herewith, 29.9% of students rated their own readiness for distance learning (432 responses) as high, 56.3% as medium, and 13.9% as low. The following things hinder the introduction of distance learning at the university: the unsatisfactory state of the IT structure at the university (19.0%), teachers' non-readiness to work remotely (online) (44.4%), students' non-readiness to work remotely (online) (39.8%), an insufficient number of educational resources (46.9%), and low-quality methodological support (29.4%) (405 responses).
The results of teachers' survey showed that they use the following services for online classes: Google Meet (42.4%), BigBlueButton (16.9%), Zoom (79.7%), Viber (1.7%) (59 respondents). Herewith, 49.2% of teachers use the Moodle mobile application during their work with the electronic server, and 50.8% do not use it.
Answering the question «What digital tools do you use when conducting online classes?» 317 teachers said as follows: Google documents -62.1%, Google questionnaires -17.4%, Interactive whiteboards -15.5%, presentations -91.2%. According to the teachers, the use of such tools ensures the effective functioning of the ILE in higher education institutions. The level of use of educational resources of the electronic course server (Moodle) by students (160 respondents) was assessed as high by 43.8% of them, satisfactory -49.4%, unsatisfactory -6.8%.
The conducted analysis of the information support of the educational process shows that teachers, first of all, use educational and methodological complexes of disciplines (EMCD) in electronic format (text files, presentation files for lectures, methodological recommendations for practical and laboratory work, and for the independent work of students). Students are often offered electronic textbooks, but, as the analysis of their format and content shows, they are in the main electronic analogues of printed editions (mostly in *pdf format), though they can be found on the web sites of departments, in cloud storages and university repositories, where they are placed as electronic textbooks. Conversations with teachers prove that quite often users refer to digitized versions of paper books as electronic textbooks. Such an attitude cannot be regarded as correct, since an electronic textbook is not only an ordinary text given in a definite succession, but also includes additional determinants (for example, hypertext links) which are not available in a paper textbook.
Thus, the analysis of the practice of organization and support of the educational process in the training of students, of normative requirements, of scientific sources on the outlined problem, and of our own pedagogical experience allowed us to single out the following interconnected pedagogical conditions for the effective functioning of the ILE: the existence of a vividly structured system of the information and learning environment as a subject of educational interaction, taking into consideration its hierarchical links to other components of the educational process in higher education institutions; purposeful training of students for the fluent operation of information and communication tools; and improvement of the IT competence of research and teaching staff in the implementation of teaching methods based on modern information and computer technologies.
We will reveal the essence of the identified pedagogical conditions for the formation of an effective ILE in higher education institutions.
The first pedagogical condition is the existence of a vividly structured system of the information and learning environment as a subject of educational interaction, taking into consideration its hierarchical links to other components of the educational process in higher education institutions. We will consider it using the example of the structure of the ILE of distance (online) learning at TNPU (Figure 1). It is a full-fledged e-learning portal that combines administrative resources such as an information portal, a service for assessing the quality of training and educational programs, a service for the free choice of disciplines, and resources for final certification and for conducting entrance examinations in a foreign language for the master's study program.
Integration with online communication services ensures the transparent creation and joining of planned online events. The backup service preserves the data of the previous day, which allows work to be resumed quickly in case of unpredicted situations. The services deployed in the domain elr.tnpu.edu.ua are fully under the control of the university and the centre of distance learning, while the external services that are used only provide flexibility and new opportunities in the implementation of electronic (distance) learning. The basis of the educational environment is the LMS Moodle educational resource management system with additional modules installed for system-level integration of services for conducting online meetings, in particular Google Meet, Zoom, and BigBlueButton.
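As an aside, a Moodle-based portal of this kind can also be monitored programmatically: Moodle exposes a standard REST web-service endpoint, so objective indicators such as the number of electronic courses (used later in the experiment) can be collected automatically. The sketch below is illustrative only: the host URL and token are placeholders, and the web-services feature together with the core_course_get_courses function must be enabled by the site administrator.

```python
import requests

MOODLE_URL = "https://moodle.example.edu"  # placeholder host, not the actual TNPU portal
TOKEN = "YOUR_WEBSERVICE_TOKEN"            # issued by the Moodle administrator

def moodle_call(wsfunction, **params):
    """Call a function of Moodle's REST web-service API and return parsed JSON."""
    payload = {
        "wstoken": TOKEN,
        "wsfunction": wsfunction,
        "moodlewsrestformat": "json",
        **params,
    }
    response = requests.post(f"{MOODLE_URL}/webservice/rest/server.php", data=payload)
    response.raise_for_status()
    return response.json()

# Count the electronic courses deployed on the server.
courses = moodle_call("core_course_get_courses")
print(f"Total electronic courses: {len(courses)}")
```

Keeping such monitoring inside the university-controlled domain matches the design choice described above: core data stays on the institutional server, while external conferencing services remain optional add-ons.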
The second pedagogical condition is the purposeful training of students for the fluent operation of information and communication tools. It is realized by developing and implementing students' individual educational trajectories, taking into account the study of mandatory and elective educational components. The purpose of their inclusion in the curriculum is the formation of IT competence. In addition, a combination of formal, non-formal and informal education is desirable.
Taking into consideration the activity of students in the network environment, in particular in social networks, and their use of various Internet resource services, it can be stated that students are ready to use ICT. This conclusion is confirmed by the results of the survey of students regarding their readiness for online learning.
Since the first and second conditions have been more or less formed in modern higher education institutions, we will focus in more detail on the implementation of the third pedagogical condition for the effective functioning of the ILE in higher education institutions, namely: improvement of the IT competence of research and teaching staff in the implementation of teaching methods based on modern information and computer technologies. For this purpose, we have developed a methodological system for improving the IT competence of research and teaching staff.
The conducted analysis of literature sources and of the practical work of modern higher education institutions proved that teachers can improve their qualifications in this direction in formal, non-formal and informal education: for example, by attending trainings, workshops, webinars, and interactive courses on the use of ICT and digital technologies in educational activities, or by engaging in self-education (active independent research and study of Internet resources, participation in conferences, preparation of scientific papers, etc.). However, foreign scientists [28] believe that the IT competence of a teacher, as a basic competence, evolves most effectively in the process of developing and implementing a holistic model of further training while studying modern ICT and digital technologies in order to use them in the organization of the educational process (planning, didactic tools, knowledge control, and diagnostics of the levels of formation of the ICC of those who study). This prompted us to develop the methodological system for improving the IT competence of research and teaching staff.
The following approaches are considered dominant in its construction: holistic, competence-futurological, andragogical and narrative-digital. The competence-futurological approach is based on the idea of the expediency of modelling the methodological system on the combination of the key competences of the 21st century, «4 C» (critical thinking, creativity, collaboration, communication), with digital competence and the ability to predict the possibilities of developing a modern learning environment in the future. The necessity of this very approach is caused by the entry of modern life into the so-called «regime with exacerbations», which demands projecting all the components of the model taking into consideration the possible specifics of their modification in the future educational process according to the probable needs of professional activity.
Andragogical approach to methodological system modelling emphasizes that it is based on the andragogical principles of learning: independent learning priority; cooperative activity principle; the principle of relying on life experience; individualization of education; systematic learning; contextuality of learning; the principle of updating learning results; the principle of elective education; the principle of development of educational needs; the principle of reflectivity.
The narrative-digital approach makes it possible to apply in the methodological system digital narratives, which represent an integrated combination of narrative (narration) and information and communication technologies. The development and use of digital narratives in the practice of teachers' professional activity takes an important place in the realization of this approach.
In the research we heeded the need to follow the most important methodological principles of cognition, namely a holistic and systematic approach to the object of study. They demand that the problem be considered not in isolation, but in the context of the realization of the holistic educational process in modern higher education institutions. That is why we checked the possibility of realizing the defined third condition through the development and implementation of the methodological system for improving the IT competence of research and teaching staff. It consists of goal-oriented, operational and content, activity-reflective and result-oriented components. Its system-creating factor is the aim to increase the quality of the mutually coordinated functioning of the subjects of educational activity. The basis of the methodological system is the introduction of a scientific and methodological seminar on the topic «The Use of Activity Components and Online Communications of the Moodle System» [29], which is considered an element of the non-formal education of teachers. The final result is the further training of research and teaching staff from different fields of knowledge in studying modern information and computer technologies in order to be able to use them in their professional activity.
The main intentions and ideas of the authors, which were to be materialized during the construction of the methodological system for improving the IT competence of research and teaching staff, were previously discussed with students and scientific and pedagogical workers at the International Scientific and Practical Conference «Training of Physics, Chemistry, Biology and Natural Sciences Teachers in the Context of the Requirements of the New Ukrainian School» (2020-2022), at round tables and webinars, and in individual conversations. Thus, a preliminary adaptation of the conceptual ideas of the implementation of the third pedagogical condition to the real educational process was carried out.
Based on the theoretical analysis of the essence of the problem and the educational needs of students and teachers, we identified eight topics, the study of which should be included in the cognitive component of the suggested system. To assess the quality of the proposed training classes, an integrated criterion of «didactic quality» was used, determined by the method of expert assessments. We were prompted to choose it by the following concepts of the modern theory of the formation of the content of education: 1) it is necessary to evaluate the effectiveness and correctness of new ideas, methods and principles, first of all, theoretically; 2) the age-old experience of constructing the content of the basics of science shows that the expert method, namely the opinions of scientists-specialists, is the main method for selecting material [30, p. 326]. A group of experts was formed to conduct the research, which included scientists and lecturers of pedagogical higher education institutions from different regions of Ukraine who agreed to participate in the examination. We deliberately chose an expert group non-homogeneous in composition, which allows taking fuller account of existing opinions on the compliance of the proposed content with the needs and real conditions of teaching practice and the current state of ICT development. The quality of the experts was high, as all of them were characterized by such important features as competence, that is, a stock of necessary knowledge that allowed them to create their own model of the issue under consideration based on the received information and to synthesize extraordinary conclusions. Their field of activity, specialization, and scientific interests border on the field to which the issue under analysis belongs. Out of the total number of experts, a group of specialists especially competent in the field of the studied issue (21 people) was selected. It comprised Informatics lecturers who hold a scientific degree and have more than 10 years of teaching experience, as well as those who participated in defining the main items of the scientific and methodological seminar.
Indicators according to which the main topics of classes had to be assessed were agreed with this group of experts. As a result of collective discussion, the «weight» (K) of each of the six selected indicators was determined. The results are presented in Table 1.

Table 1. The weight of indicators of the didactic quality of classes (K, %):
1. Possibility to reveal and apply the effective information and learning environment in higher education institutions based on the existing material and technical support: 10.
2. Significance for the organization of the holistic educational process: 25.
3. Significance for the organization of interactive pedagogical interaction of the participants in the educational process: 25.
4. Accessibility for perception: 10.
5. Expediency of use during future teachers' professional training: 20.
6. Correspondence to the life experience of research and teaching staff and students: 10.

The examination was conducted in May 2019. The core content of classes was evaluated according to the integrated criterion «didactic quality» and also on the basis of «multi-factor ranking». The criterion of «didactic quality» was defined as the degree of correspondence of each class submitted to the examination to the totality of the mentioned indicators.
Invited experts were informed about the objective of the experiment and the rules of its conduct. They were given information concerning general approaches to solving the problem. After that, each expert individually filled in the questionnaire, which included a list of the factors to be assessed. The questionnaires were studied and analyzed. Processing of the grades given by the experts was carried out using statistical methods based on the principle that an expert can be considered a measuring device whose indications have random and systematic errors.
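A minimal sketch of this kind of processing, assuming a 1-10 scoring scale and illustrative numbers rather than the actual questionnaire data: each expert scores a class against the six indicators of Table 1, the scores are folded with the indicator weights into the integrated «didactic quality» value, and the agreement of the expert rankings is checked with Kendall's coefficient of concordance W, one common statistical treatment of expert grades.

```python
import numpy as np

WEIGHTS = np.array([10, 25, 25, 10, 20, 10]) / 100.0  # indicator weights from Table 1

def didactic_quality(scores):
    """Integrated 'didactic quality' of one class: the weighted sum of the six
    indicator scores, averaged over experts. scores: (n_experts, 6) array."""
    return float(np.mean(scores @ WEIGHTS))

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for m experts ranking n items
    (no ties): W = 12*S / (m^2 * (n^3 - n))."""
    m, n = ranks.shape
    column_sums = ranks.sum(axis=0)
    s = float(np.sum((column_sums - column_sums.mean()) ** 2))
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Illustrative data: three experts score one class on a 1-10 scale.
scores = np.array([[8, 9, 7, 9, 8, 7],
                   [7, 8, 8, 8, 9, 8],
                   [9, 9, 8, 9, 8, 8]])
print("didactic quality:", round(didactic_quality(scores), 2))

# Illustrative data: three experts rank four classes (1 = best).
ranks = np.array([[1, 2, 3, 4],
                  [1, 3, 2, 4],
                  [2, 1, 3, 4]])
print("Kendall's W:", round(kendalls_w(ranks), 3))
```

A W value close to 1 indicates strong agreement among the experts, while a value close to 0 would suggest that the expert group's rankings are essentially random and the examination should be repeated or the group revised.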
The results of the expert assessment convincingly showed the possibility and expediency of including the constructed content of the training classes in the cognitive component of the suggested methodological system. According to the experts, the classes are, on the whole, accessible for perception by research and teaching staff and are important for improving the quality of their IT competence.
We will characterize the stages of the experimental implementation of the methodological system for the IT competence of research and teaching staff of higher education institutions: preparatory, organizational and methodological, procedural, reflexive and analytical (Figure 2).
Figure 2. Stages of the implementation of the methodological system
The preparatory stage involves studying the needs of the subjects of educational interaction (students, teachers, and the network subject of the educational process) in relation to ensuring the effective functioning of the ILE, by means of questionnaires and the analysis of educational information products (electronic courses).
To meet the revealed needs, we concentrated on identifying an effective tool for improving the IT competence of research and teaching staff. The criterion apparatus of the research was also defined.
The organizational and methodological stage involved defining priorities in educational activity and conducting organizational actions on studying the suggested course as a means of continuing education for the formation of the IT competence of research and teaching staff.
The procedural stage involved conducting an online scientific and methodological seminar with teachers of higher education institutions. To enable independent study of the content of the classes and practice of practical skills, video recordings of all classes were placed in the bank of video lectures at TNPU.
The reflexive and analytical stage of our research involved the analysis of the results of the experimental training in terms of objective indicators (the number of developed electronic courses, the level of course content, and the activity of use, i.e., working time in Moodle) and subjective indicators (the diversity of the platforms used, the types and kinds of tasks, research and teaching staff's self-analysis of their level of readiness to conduct distance learning, and students' analysis of the teachers' level of readiness to organize distance learning).
A comparative analysis of the results of the introduction of the suggested methodological system for improving the IT competence of research and teaching staff of higher education institutions, according to the determined objective and subjective indicators, convincingly proves its effectiveness. As of December 2019 (before the experimental training), 2085 electronic courses had been developed in all, and after the completion of the experimental work there were 2889 electronic courses, that is, the number increased by 804. The number of active courses grew from 1322 before the experiment to 1567 after the experimental training, an increase of 245.
In general, the teachers noted the following difficulties and shortcomings that arose during the experimental activity: technical problems related to an unstable Internet connection; students' weak technical equipment with communication devices that reproduce their video presence, which reduces the level of communication; electricity cuts and technical problems during online classes; the limited effectiveness, under distance learning, of studying certain topics concerning the teaching methods of particular disciplines; students' lack of strong personal motivation and insufficient level of knowledge to prepare presentations; difficulty in establishing feedback with students during discussions; and the limited or impossible interference of the teacher in the process of performing laboratory work.
The results of checking the efficiency of the suggested methodological system for improving the IT competence of research and teaching staff according to subjective indicators are shown in Table 2.

Table 2. The level of readiness of the subjects of educational activity for distance learning before (examination I) and after (examination II) the forming experiment (160 respondents).

Data from Table 2 demonstrate that the teachers assessed their readiness for distance learning considerably higher after the forming experiment: the number of respondents with a high level of readiness increased by 2.5%, those with a satisfactory level by 4.0%, and the number of teachers who assessed their level of readiness as low decreased by 6.5%.
Interestingly, students assess the teachers' readiness for distance learning considerably lower than the teachers themselves do. The contrast between the readiness levels obtained from the teachers' self-analysis and from the students' assessment during the first examination is as follows: −29.2% for level I, +25.1% for level II, and +1.9% for level III. During the second examination the contrast is −27.5% for level I, +26.5% for level II, and +1.9% for level III. Nevertheless, the tendency toward improvement of teachers' readiness to conduct distance learning, as assessed by students, is maintained: the number of respondents who assessed the teachers' readiness as high increased by 2.2%, as satisfactory by 4.5%, while the number of teachers whose readiness students assessed as low decreased by as much as 8.7%.
The indicator of students' activity during online classes also increased: the high level by 2.5% and the satisfactory level by 2.3%, while the low level decreased by 3.8%. The introduction of the suggested methodological system for improving teachers' IT competence was also reflected in the students' use of the educational resources of the electronic course server (Moodle): the number of students at the low level decreased by 21.4%, while those at the high and satisfactory levels increased by 17.2% and 14.4%, respectively.
These results allow us to conclude that the suggested methodological system for improving teachers' IT competence is efficient and, more generally, that constructing the ILE in a higher education institution with due regard to the singled-out pedagogical conditions is expedient.
CONCLUSIONS AND PROSPECTS FOR FURTHER RESEARCH
To solve the problem of forming an effective information and learning environment qualitatively, it is reasonable to consider the environment as a component of a holistic educational system in higher education institutions. The main function of the information and learning environment is to ensure the mutually coordinated functioning of the subjects of educational activity (students, teachers, and the ILE) as a network subject of the educational process.
Qualitative accomplishment of this main function is possible when a complex of pedagogical conditions is implemented, namely: the existence of a clearly structured system of the information and learning environment as a subject of educational interaction, taking into account its hierarchical links to the other components of the educational process in higher education institutions; purposeful training of students in the fluent operation of information and communication tools; and improvement of the IT competence of research and teaching staff in implementing teaching methods based on modern information and computer technologies.
The mechanism for implementing the third pedagogical condition is represented by the methodological system for improving the IT competence of research and teaching staff of higher education institutions. Its application ensures more effective, mutually coordinated functioning of the subjects of educational activity within the holistic system of a higher education institution.
It has been established that, to assess the content quality of the methodological system, it is appropriate to use the «didactic quality» criterion with the following indicators: the possibility to implement and apply an effective information and learning environment in higher education institutions on the basis of the existing material and technical support; significance for organizing the holistic educational process; significance for organizing interactive pedagogical interaction of the participants in the educational process; accessibility for perception; expediency of use in future teachers' professional training; and correspondence to the life experience of research and teaching staff and students. To assess the quality of teachers' IT competence formation within non-formal education, it is appropriate to use subjective and objective indicators.
Organizing the educational process within non-formal education for the formation and improvement of teachers' IT competence considerably raises the level of that competence and increases the quality of the mutually coordinated functioning of the subjects of educational activity.
Forming a high-quality information and learning environment in higher education institutions, taking into account all the pedagogical conditions identified in the research, increases the quality of the educational services provided to students.
The prospects for further study consist in modelling the content and activity components of continuous pedagogical education based on combining the key 21st-century «4C» competencies with digital competence, and in predicting the possibilities for developing the modern learning environment in the future. | 2023-07-12T06:35:16.735Z | 2023-06-30T00:00:00.000 | {
"year": 2023,
"sha1": "d5692fb17dc83209b439d1e13da73d05e37bee64",
"oa_license": "CCBYNCSA",
"oa_url": "https://journal.iitta.gov.ua/index.php/itlt/article/download/5153/2138",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bff80d038ecfa033f9a2f6b0060dcabfe41777f5",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
1509258 | pes2o/s2orc | v3-fos-license | Physics, Cosmology and Experimental Signatures of a Possible New Class of Superluminal Particles
The apparent Lorentz invariance of the laws of physics does not imply that space-time is indeed minkowskian. We consider a scenario where Lorentz invariance is only an approximate property of equations of matter above a certain distance scale, and superluminal sectors of matter exist related to new degrees of freedom not yet discovered experimentally. The new particles would not be tachyons: they may feel different minkowskian space-times with critical speeds much higher than c (speed of light) and behave kinematically like ordinary particles apart from the difference in critical speed. Superluminal particles may provide most of the matter at cosmic scale, and be mainly dark. We present a discussion of possible theoretical, cosmological and experimental consequences of such a scenario, with particular emphasis on problems related to the identification of dark matter.
Relativity and sine-Gordon solitons
In textbook special relativity, minkowskian geometry is an intrinsic property of space and time. However, a look at various dynamical systems would suggest a more flexible approach to the relation between matter and space-time. Lorentz invariance can be viewed as a symmetry of the equations of motion, in which case no reference to absolute properties of space and time is required. In a two-dimensional galilean space-time, the equation

$$\alpha \, \frac{\partial^2 \phi}{\partial t^2} \; - \; \frac{\partial^2 \phi}{\partial x^2} \; + \; F(\phi) \; = \; 0 \qquad (1)$$

with $\alpha = 1/c_o^2$ and $c_o$ = critical speed, remains unchanged under "Lorentz transformations" leaving invariant the squared interval $ds^2 = dx^2 - c_o^2 \, dt^2$, so that matter made of solutions of equation (1) would feel a relativistic space-time even if the real space-time is actually galilean and an absolute frame exists in the underlying dynamics beyond the wave equation. A well-known example is provided by the solitons of the sine-Gordon equation, obtained by taking in (1): $F(\phi) = (\omega/c_o)^2 \sin\phi$. A two-dimensional universe made of sine-Gordon solitons plunged in a galilean world would feel a two-dimensional minkowskian space-time with the laws of special relativity. Information on any absolute rest frame would be lost by the solitons.
1-soliton solutions of the sine-Gordon equation are known to exhibit "relativistic" particle properties, e.g. an energy

$$E \;=\; E_o \left(1 - \frac{v^2}{c_o^2}\right)^{-1/2}$$

where $v$ is the soliton speed and $E_o$ its rest energy, so that everything looks perfectly "minkowskian" even if the basic equation derives from a galilean world with an absolute rest frame. Similarly, in the real world, the speed of light c could be just the sectorial critical speed of a part of matter (the "ordinary" particles), instead of a universal critical speed deriving from absolute geometric properties of space and time, as usually stated in relativity theory.
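As a quick numerical illustration of this kinematic behaviour (an addition for this edition, not part of the original paper), the script below evaluates the soliton energy formula for several speeds; the rest energy is set to an arbitrary unit value:

```python
import math

def soliton_energy(v, c_o=1.0, E_o=1.0):
    """'Relativistic' energy of a 1-soliton moving at speed v (< c_o)."""
    return E_o / math.sqrt(1.0 - (v / c_o) ** 2)

for v in (0.0, 0.5, 0.9, 0.99):
    print(f"v = {v:4.2f} c_o  ->  E = {soliton_energy(v):6.3f} E_o")
```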
Superluminal particles
If Lorentz invariance is only an approximate property of the equations describing a sector of matter above a given distance scale, an absolute frame (the "vacuum rest frame") can exist without contradicting the minkowskian structure of the space-time felt by "ordinary" particles (those with critical speed equal to c). Then c will not necessarily be the only critical speed in vacuum: for instance, superluminal sectors of matter may exist, related to new degrees of freedom not yet discovered experimentally. Such particles would not be tachyons: they may feel different minkowskian space-times with critical speeds c_i ≫ c (the subscript i stands for the i-th superluminal sector) and behave kinematically like "ordinary" particles apart from the difference in critical speed. A superluminal sector of matter can be built as follows.
Ordinary free particles in vacuum usually satisfy a dalembertian equation, such as the Klein-Gordon equation for scalar particles:

$$\frac{1}{c^2} \, \frac{\partial^2 \phi}{\partial t^2} \; - \; \nabla^2 \phi \; + \; \left(\frac{2\pi m c}{h}\right)^2 \phi \; = \; 0$$

where the coefficient of the second time derivative sets c, the critical speed in vacuum (speed of light). Given c and the Planck constant h, the coefficient of the linear term in φ sets m, the mass. To study solutions of the wave equation, we consider the conserved observables (energy and momentum) and obtain plane-wave solutions, from which we can build position and speed operators [1]. In the non-relativistic limit, it can be checked that m is indeed the inertial mass [2]. With the conservative choice of leaving the Planck constant unchanged, superluminal sectors of matter can be generated by replacing, in the above construction, the speed of light c by a new critical speed c_i for the i-th superluminal sector. All previous concepts remain valid, leading to particles with positive mass and energy which are not tachyons. For inertial mass m and critical speed c_i, the new particles will have rest energies [2]

$$E_{i,\,rest} \; = \; m \, c_i^2$$

which, for a given inertial mass, are much higher than the rest energies of "ordinary" particles. This generalization of the Einstein equation implies in particular that: a) in accelerator experiments, very high energies can be required to produce superluminal particles; b) cosmic-ray events originating from superluminal particles can release very high energies.
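To make the scale of this generalized rest energy concrete (an illustration added here; the critical-speed ratios are arbitrary), the sketch below evaluates E_i = m·c_i² relative to an ordinary rest energy m·c² of 1 GeV:

```python
# Rest energy in a superluminal sector: E_i = m * c_i**2.
# Relative to the ordinary rest energy m*c**2, the enhancement
# factor is simply (c_i / c)**2.

mc2_GeV = 1.0                    # ordinary rest energy m*c^2 (~proton scale)
for ratio in (1e3, 1e6, 1e9):    # hypothetical values of c_i / c
    E_i = mc2_GeV * ratio ** 2
    print(f"c_i/c = {ratio:.0e}  ->  E_i = {E_i:.0e} GeV")
```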
A scenario with several critical speeds in vacuum
In what follows, we shall consider a scenario with several sectors of matter: a) the "ordinary" sector, made of "ordinary" particles with a critical speed equal to the speed of light c; b) one or more superluminal sectors, where particles have critical speeds c_i ≫ c in vacuum, each sector being assumed to have its own Lorentz invariance with c_i defining the metric. If the standard minkowskian space-time is not a compulsory framework, we can conceive fundamentally different descriptions of space and time [3]. Space-time can, for instance, be galilean, or minkowskian with an absolute critical speed C ≫ c. Another possibility, requiring an absolute origin, would be to consider a SU(2) spinorial space-time where time would correspond to the spinor modulus (a SU(2) scalar, positive definite and therefore setting an arrow of time), and the three space dimensions would originate from the tangent hyperplane to the S³ hypersphere of constant modulus in the C² (topologically equivalent to R⁴) spinor space. In this tangent hyperplane, the three independent directions correspond to the three SU(2) generators and therefore define a vector representation of SU(2).
Even if each sector has its own "Lorentz invariance" involving as the basic parameter the critical speed in vacuum of its own particles, interaction between two different sectors will break both Lorentz invariances. The concept of mass, as a relativistic invariant, will become approximate and sectorial. In our approach, the vacuum is a material medium as suggested by recent results in particle physics, and the Michelson-Morley result is not incompatible with the existence of some "ether" defining an absolute local rest frame (the "vacuum rest frame"). If superluminal particles couple weakly to ordinary matter, their effect on the ordinary sector will occur at very high energy and short distance, far from the domain of successful conventional tests of Lorentz invariance. The actual structure of space and time will be found only by going beyond the above wave equations to deeper levels of resolution, similar to the way high-energy accelerators explore the inner structure of "elementary" particles.
Our scenario is far from being the first case in which several critical speeds coexist in a medium. In a perfectly transparent crystal close to zero temperature, two critical speeds exist: the speed of sound and the speed of light.
Dynamics and cosmology
Mass mixing between particles from different sectors may occur and, although very weak, be more significant for very light particles (e.g. photons, neutrinos...). Since the graviton is an "ordinary" gauge boson, associated to ordinary Lorentz invariance, it is not expected to play a universal role in the presence of superluminal particles. Assuming that each superluminal sector has its own Lorentz metric $g_{[i]\mu\nu}$ ([i] for the i-th sector), with $c_i$ setting the speed scale, we may expect each sector to generate its own gravity with a coupling constant $\kappa_i$ and a sectorial graviton traveling at speed $c_i$. "Gravitational" interactions between different sectors will be weak, and concepts so far considered as very fundamental (e.g. the universality of the exact equivalence between inertial and gravitational mass) will become approximate sectorial properties.
If superluminal sectors couple to ordinary matter, superluminal particles are expected to release "Cherenkov" radiation in vacuum (i.e., spontaneous emission of particles whose critical speed is lower than the speed of the emitting particle) when they move at a speed v > c. Thus, superluminal particles will eventually be decelerated to a speed v ≤ c. The nature and rate of "Cherenkov" radiation in vacuum will depend on the superluminal particle and can be very weak in some cases. In accelerator experiments, this "Cherenkov" radiation may provide a clean signature allowing the identification of some of the produced superluminal particles.
If each sectorial Lorentz invariance is expected to break down below a critical distance scale $k_i^{-1}$ ($k_o^{-1}$ for the ordinary sector), where the $k_i$ and $k_o$ are critical wave-vector scales, we can expect [4] the appearance of critical temperatures $T_o$ and $T_i$, defined by

$$k_B \, T_o \; \approx \; \hbar \, c \, k_o \ , \qquad k_B \, T_i \; \approx \; \hbar \, c_i \, k_i \ ,$$

defining phase transitions in field theories, as well as in the very early Universe. These singularities seem to prevent conventional extrapolations to a Big Bang limit. Above T_o, the Universe may have contained only superluminal particles, and dynamical correlations would have been able to propagate much faster than light. This invalidates standard arguments leading to the so-called "horizon problem" and "monopole problem". Conventional Friedmann equations will not hold in the new scenario, and the need for inflation is far from obvious. In the above-considered spinorial space-time, the Big Bang limit can possibly be related to the absolute origin in the spinor space. In this approach, it seems impossible to set a "natural time scale" based on extrapolations (e.g. to Planck time) from our knowledge of the low-energy sector. Arguments leading to the "flatness" and "naturalness" problems, as well as the concept of the cosmological constant and the relation between critical density and Hubble's "constant" (one of the basic arguments for ordinary dark matter at cosmic scale), should be reconsidered. Superluminal particles may have played a cosmological role leading to substantial changes in the "Big Bang" theory. They can be very abundant and even provide nowadays most of the (dark) matter at cosmic scale, therefore leading the present evolution of the Universe.
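For orientation only (this numerical estimate is an addition, not part of the paper, and it presupposes the dimensional relation reconstructed above), the script below evaluates T_o = ħ c k_o / k_B under the illustrative assumption that k_o⁻¹ equals the Planck length, yielding roughly the Planck temperature:

```python
# Order-of-magnitude estimate of T_o = hbar * c * k_o / k_B,
# assuming (purely for illustration) k_o**-1 = Planck length.
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
k_B  = 1.380649e-23      # J/K
l_P  = 1.616255e-35      # Planck length, m

T_o = hbar * c / (l_P * k_B)
print(f"T_o ~ {T_o:.2e} K")   # ~1.4e32 K, the Planck temperature
```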
Superluminal particles and dark matter identification
If superluminal particles are very abundant, they can, in spite of their expected weak coupling to "ordinary" gravitation, produce some observable gravitational effects. It is not obvious how to identify the superluminal origin of a collective gravitational phenomenon, but signatures may exist, e.g. in gravitational collapses or if it were possible to detect superluminal gravitational waves. We do not in general expect concentrations of superluminal matter to follow those of ordinary matter, but this is not excluded, e.g. in the presence of coupled gravitational singularities involving several sectors. If astrophysical concentrations of superluminal particles produce high-energy particles, cosmic rays may provide a unique way to detect such objects [5]. Direct detection of particles from superluminal matter around us, e.g. in underground and underwater detectors, should not be discarded [3]. At very high energy, they can escape the Greisen-Zatsepin-Kuzmin cutoff [6] and be at the origin of the highest-energy events. At lower energies, they can produce detectable signals.
Superluminal primaries
High-energy superluminal particles can be produced by acceleration, decays, explosions... in astrophysical objects made of superluminal matter, or from "Cherenkov" emission in vacuum by particles with higher critical speed. They can reach the earth and undergo collisions inside the atmosphere, producing many secondaries like ordinary cosmic rays. They can interact with the rock or with water near some underground or underwater detector, coming from the atmosphere or after having crossed the earth. Contrary to neutrinos, whose flux is attenuated by the earth at energies above $10^6$ GeV, superluminal particles will in principle not be stopped by the earth at these energies. Such primaries can release most of their energy in inelastic collisions, and rather high energies (with momentum transfer of the order of the incoming momentum) in elastic scattering. Low-energy superluminal particles can also produce detectable events. At $v \simeq c$ (after "Cherenkov" deceleration in vacuum), superluminal primaries can produce recoil protons and neutrons in the GeV range and inelastic events of higher energies. Such events would be detectable, e.g. in Cherenkov detectors for neutrino astronomy, even at very small rates. In cryogenic detectors, unconventional recoil spectra (e.g. indicating an escape velocity much above $10^{-3} c$) can be a signature for superluminal dark matter.
Ordinary primaries
Annihilation of pairs of slow superluminal particles, releasing very high kinetic energies from the relation $E = m c_i^2$, can be a source of high-energy ordinary and superluminal cosmic rays. Decays and "Cherenkov" radiation in vacuum can produce similar effects. Thus, ordinary cosmic rays can be produced anywhere and not just near astrophysical objects made of ordinary matter. | 2014-10-01T00:00:00.000Z | 1996-10-12T00:00:00.000 | {
"year": 1996,
"sha1": "5e6ee3bdb39b64dcae43ed2c0a33c55e186e5f80",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "003d4a7aa98feffeb713b18b003373cb1013df0c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
115141667 | pes2o/s2orc | v3-fos-license | Numerical Study of Simultaneous Multiple Fracture Propagation in Changning Shale Gas Field
Recently, the Changning shale gas field has been one of the most outstanding shale plays in China for unconventional gas exploitation. With growing practical experience of hydraulic fracturing, the economic gas production from this field can be optimized and gradually improved. However, further optimization of the fracture design requires a deeper understanding of the effects of engineering parameters on simultaneous multiple fracture propagation, which can increase the effective fracture number and improve well performance. In this paper, based on the Changning field data, a complex fracture propagation model was established. A series of case studies were investigated to analyze the effects of engineering parameters on simultaneous multiple fracture propagation. The fracture spacing, perforating number, injection rate, fluid viscosity and number of fractures within one stage were considered. The simulation results show that smaller fracture spacing implies stronger stress shadow effects, which significantly reduce the perforating efficiency. The perforating number is a critical parameter that has a strong impact on the cluster efficiency; in addition, a cluster with a smaller perforating number can more easily generate a uniform fracture geometry. A higher injection rate is better for promoting uniform fluid volume distribution, with each cluster growing more evenly. An increasing fluid viscosity increases the variation of fluid distribution between perforation clusters, resulting in an increasing gap between the interior fracture and the outer fractures. An increasing number of fractures within the stage increases the stress shadow among fractures, resulting in a larger total fracture length and a smaller average fracture width. This work provides key guidelines for improving the effectiveness of hydraulic fracture treatments.
Introduction
The Changning shale gas field in Sichuan Basin is well known as the main shale gas production area in China, and has been in commercial production since 2012. With less than a decade of production, there is still much to learn about the most efficient way to produce shale gas. With the increasing practical experience of hydraulic fracturing, the economic shale gas production from this field can be optimized and gradually improved [1]. The production of single shale-gas wells has been continuously improved, and the average daily production has increased from 11.1 × 10⁴ m³ to 28 × 10⁴ m³. However, the average well production rate and estimated ultimate recovery (EUR) are significantly lower than those of shale gas production in North America [2][3][4].
The technique of multi-stage hydraulic fracturing is the key to developing unconventional gas reservoirs [5][6][7][8]. Production in the Haynesville shale demonstrates that one of the most effective ways to increase production is to maximize the number of fracture initiation points along the lateral. Because of the limited drainage radius of the created fractures, well production increases as the spacing between perforated cluster intervals decreases. Recent completions in the Haynesville shale show that many operators are completing wells with tighter cluster spacing than previously attempted, and this trend has continued [9]. However, when completions with an increased number of clusters per stage were used in the Changning field, well production was not significantly increased.
Complex fracture geometry, rather than simple planar fractures, is often generated and predicted in shale gas reservoirs, as shown by advanced fracture diagnostics and microseismic monitoring results [10]. One usually assumes that increasing the number of perforation clusters in one stage generates a similar number of fractures after hydraulic fracturing. However, production logging and tracer detection have demonstrated that not all fractures along the horizontal wellbore propagate effectively [11][12][13][14][15]. Fracturing fluids and proppants do not enter each cluster evenly: some clusters take a large proportion of the intended liquid and proppant and generate "super" fractures, leaving other clusters with very little fluid to grow. As a result, the difference in shale gas production between clusters along the lateral is very large. Production logs from wells in the Sichuan Basin with 4-5 perforation clusters per stage indicated that some clusters may be ineffective and do not contribute to production. However, further optimization of the fracture design requires a better understanding of the effects of engineering parameters on simultaneous multiple fracture propagation. Fracture propagation models have been widely used in unconventional reservoirs for completion design, for example in the Permian Basin and Eagle Ford Shale [16][17][18]. Through such modeling research, optimization strategies have been developed that support the improvement of single-well production. An optimal fracture design can materially increase the effective fracture number and enhance well productivity. However, the rock mechanics parameters used in the present simulations are actual Changning data, which differ significantly from US fields: the minimum horizontal stress gradient of Changning is 0.0249 MPa/m versus 0.0199 MPa/m for US fields, and the Young's modulus is about twice that of US fields. Consequently, the multistage fracturing completion of the Changning field should be optimized to achieve a high cluster efficiency and increase the opportunities to distribute fluid and proppant evenly across all targeted clusters. In this paper, based on the complex fracture propagation model (XFRAC) and the Changning field data, a series of case studies were performed to investigate the effects of multiple engineering parameters on multiple fracture propagation. The fracture spacing, perforating number, injection rate, fluid viscosity and number of fractures within one stage were studied. For a deeper understanding of the complex physics related to simultaneous multiple fracture propagation, and to evaluate the uniformity of the fracture length, three perforation clusters in a stage were simulated and the deviation of the normalized fracture length was calculated. The description of the model is presented in the following section.
Methodology
A complex fracture propagation model, developed by Wu [19], was used to simulate simultaneous multiple fracture propagation in a shale gas formation. Rock deformation and fluid flow are iteratively coupled in the model. Rock deformation is modeled by a simplified 3D displacement discontinuity method [20]. The shear and normal displacement discontinuities are calculated for each fracture element: the normal displacement discontinuity is the fracture opening, and the shear displacement discontinuity is used to predict the fracture propagation path at each time step. A non-planar fracture geometry is induced if the shear displacement discontinuity is nonzero. To improve computational efficiency, the simplified displacement discontinuity method eliminates the discretization in the vertical (fracture height) direction. The solution of the method can be made explicit as follows:

$$\sigma^i_{sL} \;=\; \sum_{j=1}^{N} A^{ij}_{sL,sL}\,D^j_{sL} \;+\; \sum_{j=1}^{N} A^{ij}_{sL,nn}\,D^j_{n}$$

$$\sigma^i_{nn} \;=\; \sum_{j=1}^{N} A^{ij}_{nn,sL}\,D^j_{sL} \;+\; \sum_{j=1}^{N} A^{ij}_{nn,nn}\,D^j_{n}$$

where i and j represent elements i and j, N is the total element number, $D^j_n$ is the normal displacement discontinuity on element j, and $D^j_{sL}$ is the shear displacement discontinuity on element j. $\sigma^i_{sL}$ and $\sigma^i_{nn}$ are the given traction boundary conditions. The distribution of pressure along the fracture path is computed by the fluid flow model, which provides these tractions. The constitutive model is based on the assumption of plane-strain, elastic deformation. $A^{ij}_{nn,sL}$ is the coefficient that gives the normal stress at element i due to a shear displacement discontinuity at element j, and $A^{ij}_{nn,nn}$ represents the normal stress at element i induced by an opening displacement discontinuity at element j; analogous meanings can be attributed to $A^{ij}_{sL,sL}$ and $A^{ij}_{sL,nn}$. The detailed derivation of the model can be found in the work by Wu [19].
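As a generic illustration of how such an explicit displacement-discontinuity system is solved numerically (this sketch is not the authors' XFRAC code; the influence coefficients and traction values below are placeholders), one assembles the coefficient matrix and solves a dense linear system for the unknown discontinuities:

```python
import numpy as np

# Generic displacement-discontinuity solve: for N fracture elements,
# the unknowns are [D_sL, D_n] and the right-hand side holds the given
# shear/normal tractions on each element.
N = 4
rng = np.random.default_rng(0)

# Placeholder influence coefficients; a real model computes these from
# the elastic solution for each element pair (i, j).
A = rng.normal(size=(2 * N, 2 * N)) + 10.0 * np.eye(2 * N)

# Given traction boundary conditions (shear stacked over normal), Pa.
sigma = np.concatenate([np.zeros(N), np.full(N, 1.0e6)])

D = np.linalg.solve(A, sigma)
D_sL, D_n = D[:N], D[N:]
print("shear discontinuities :", D_sL)
print("opening (normal)      :", D_n)
```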
The fluid flow in the shale gas wellbore and in each fracture are fully coupled, similarly to an electric circuit network: the flow rate of every fracture is analogous to the current, and the pressure to the electric potential. We applied Kirchoff's first and second laws to compute the flow rate distribution among the fractures within a stage. The total volumetric injection rate, Q_T, is given, and the injection rates into each fracture, Q_i, are dynamically calculated by the model; the wellbore storage effect is ignored. The sum of the injection rates of all the fractures is equal to the total injection rate:

$$Q_T \;=\; \sum_{i=1}^{N} Q_i$$

Kirchoff's second law describes the continuity of the pressure along the horizontal wellbore, considering the pressure drops due to wellbore friction and perforation friction [21]. The sum of the pressure in the first element of a fracture branch, the perforation friction pressure drop, and the wellbore friction pressure drop is equal to the pressure at the wellbore heel:

$$p_o \;=\; p_{w,i} \;+\; p_{pf,i} \;+\; p_{cf,i}$$

where $p_o$ is the total pressure at the wellbore heel, $p_{w,i}$ is the pressure in the first element of fracture i, $p_{pf,i}$ is the perforation friction pressure loss, and $p_{cf,i}$ is the pressure loss along the horizontal wellbore; i is the identification number of the fracture branch. The perforation friction pressure drop is calculated as a function of the square of the flow rate and the perforation parameters. Lubrication theory was applied to describe the fluid flow in the fracture and the associated pressure drop, assuming the fracture is a slot between parallel plates. Multiple fracture propagation has been simulated by the model and compared with a numerical model [22] to benchmark the accuracy of capturing the physical process of stress shadow effects.
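The following is a simplified sketch of the flow-partitioning idea (not the paper's fully coupled model): assuming fixed fracture-entry pressures p_w,i, neglecting wellbore friction, and taking a quadratic perforation friction p_pf,i = k_i·Q_i², the per-cluster rates satisfying both Kirchhoff conditions can be found by solving for the heel pressure p_o; all numerical values are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

Q_T = 14.0                            # total injection rate, m^3/min
p_w = np.array([52.0, 53.5, 52.5])    # assumed fracture entry pressures, MPa
k   = np.array([0.05, 0.05, 0.05])    # perforation friction coefficients

def total_rate(p_o):
    """Sum of branch rates implied by p_o = p_w_i + k_i * Q_i**2."""
    dp = np.clip(p_o - p_w, 0.0, None)
    return np.sqrt(dp / k).sum()

# Find the heel pressure that distributes exactly Q_T over the clusters.
p_o = brentq(lambda p: total_rate(p) - Q_T, p_w.max(), p_w.max() + 100.0)
Q = np.sqrt(np.clip(p_o - p_w, 0.0, None) / k)
print("heel pressure:", round(p_o, 2), "MPa")
print("per-cluster rates:", Q.round(2), "-> shares:", (Q / Q_T).round(3))
```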
Base Case
In this section, we demonstrate the phenomenon of uneven fracture growth and how to facilitate more uniform fracture propagation. The base case has three fractures propagating simultaneously in a single stage (Figure 1), with a uniform cluster spacing of 23.3 m. All parameters were selected from the Longmaxi formation of the Changning shale gas field in China and are listed in Table 1. We assume that one perforation cluster induces only one hydraulic fracture; hence, the perforation-cluster spacing is the same as the initial-fracture spacing. The effects of natural fractures and near-wellbore tortuosity are not taken into account, and the reservoir is assumed to be homogeneous, neglecting slight differences in the in-situ stress state and rock mechanical properties. The final fracture geometry and flow volume distribution of the base case are shown in Figures 2 and 3, respectively. Because of the strong stress shadow effects, the middle fracture is much shorter, while the two exterior fractures are much longer. The intended average percentage of the flow rate into every cluster is 33%; the middle fracture received only 19.6%, much less than the intended percentage, while the exterior fractures each received about 40.2% of the total fluid. The stress shadow effects and the friction pressure drop along the wellbore cause the curves of the interior and exterior fractures to diverge. Based on the base case, we modified the values of the fracture spacing, perforating number, injection rate, fluid viscosity and number of fractures within the stage to analyze how these factors affect the effectiveness in promoting uniform fracture growth. These parameters were changed one at a time from the base case.
Effect of Fracture Spacing
With ultralow matrix permeability, one of the most effective ways to increase shale gas production is to optimize the number of fracture initiation points along the lateral. However, stress shadow effects can result from overly closely spaced fractures, leading to an inefficient completion. Hence, we investigated three additional fracture spacings and compared them with the base case. Each stage consists of three clusters, and the fracture spacings are 10 m, 15 m, 23.3 m, and 30 m (Figure 4), respectively. The simulation results show that the stress shadow effects increase with decreasing fracture spacing, resulting in two longer outer fractures and a shorter middle fracture, as shown in Figure 5 and Table 2. The non-uniform fracture growth significantly reduces the perforation efficiency. This is because larger stress shadow effects increase the flow resistance of the middle fracture, so less fluid enters the middle fracture.
Effect of Perforating Number
Perforation friction is a function of the perforation density. The base case has a uniform perforation design with 16 perforations for each cluster. In this subsection, three additional cases were investigated: two of them increase the count to 20 and 24 perforations per cluster, respectively, while the third uses only 12 perforations per cluster. Figure 6 and Table 3 illustrate that fractures grow more non-uniformly with increasing perforation density per cluster: the larger the perforation density, the shorter the middle fracture and the longer the two outer fractures. In addition, it can be found that 12 perforations per cluster, for three clusters in one stage, is the optimal design in the Changning shale gas field. As the perforation density per cluster increases from 12 to 24, the length of the middle fracture is reduced by 150% and the width by 25%, which significantly decreases the cluster efficiency.
Effect of Injection Rate
The injection rate is another important factor affecting hydraulic fracturing treatments. Keeping the other parameters the same as in the base case, we investigated the effects of different injection rates on the fracture geometry (Figure 7 and Table 4). Four injection rates were considered: 10 m³/min, 12 m³/min, 14 m³/min and 16 m³/min. Since the injection time was the same for all cases, more fluid volume was injected at the larger injection rates. Figure 7 shows that a more uniform fracture geometry was achieved at the larger injection rates. The simulation results demonstrate that a higher injection rate is better for promoting a uniform fluid volume distribution and even growth of each cluster in the Changning shale gas field. This is because a larger injection rate mitigates stress shadow effects and generates a higher perforation friction pressure drop. Consequently, the injection rate in the Changning shale gas field should be increased to improve the cluster efficiency.
Effect of Fluid Viscosity
We studied the effects of different fluid viscosities on the fracture geometry, and the simulation results are shown in Figure 8. The other parameters are the same as those of the base case. The three different viscosities of the injection fluid are, respectively, 2.0 mPa·s, 10 mPa·s and 24 mPa·s. An injection fluid with a larger viscosity created a higher fluid pressure within the fracture and a wider fracture width (Figure 8 and Table 5). A higher fluid pressure generated stronger stress shadow effects, resulting in a larger variation of fluid distribution between perforation clusters. Figure 8 illustrates that the length of the middle fracture is reduced by 40.6%, and its width increases by 76%, when the fluid viscosity increases from 2.0 mPa·s to 24 mPa·s. For that reason, the viscosity of the injection fluid should be decreased to 2.0 mPa·s in the Changning shale gas field.
Effect of Number of Fractures Within the Stage
The number of fractures within the stage is an important factor of hydraulic fracturing treatments, and increasing it is the most effective way to increase production in the Haynesville [14]. Therefore, we studied the impacts of different cluster numbers on the fracture geometry in a single stage. Under the condition of a fixed stage length of 70 m, the cluster numbers of the four cases are, respectively, 2, 3, 4 and 5, while the other parameters are the same as in the base case. The simulation results are shown in Figure 9 and Table 6. They illustrate that as the cluster number within the stage increases, the cluster spacing decreases and the stress shadow effects increase, leading to a longer total fracture length and a shorter average fracture width. The optimal number of clusters in a single stage needs to be determined in combination with production simulation and economic evaluation. However, according to the simulation results, if more than 4 clusters within the stage are used, one needs to utilize the intrastage diversion techniques [23,24] to enhance cluster efficiency in the Changning shale gas field.
Discussions
In order to evaluate the effects of the fracture spacing, perforating number, injection rate, fluid viscosity, and number of fractures within the stage on the fracture geometry in the Changning shale gas field, we defined a deviation of the normalized fracture length [25]. This measure can indicate the main controlling factors for the effectiveness of fracture treatments. First, according to the basic input parameters, the average fracture length of the base case is calculated. Then, we calculated the deviation of the three fractures. In the same way, we calculated the maximum and minimum deviations corresponding to the maximum and minimum values of each uncertain parameter. Finally, we sorted the uncertain parameters according to the maximum and minimum deviation values. Based on the sorting result and the deviation of the normalized fracture length, the Tornado plot (Figure 10) was obtained. The x-axis is the calculated deviation of the normalized fracture length and represents the effects of uncertain parameters on the uniformity of the fracture growth. The order of the uncertain parameters on the y-axis was determined by the absolute difference between the maximum and minimum deviations of the normalized fracture length. The green bar represents a positive effect and the black bar a negative effect; the middle mark represents the deviation of the normalized fracture length of the base case. The Tornado plot shows that the number of fractures within the stage is the most important parameter affecting the fracture geometry in the Changning shale gas field. A larger variation of fracture geometry will be created with either an increasing number of fractures, a decreasing flow rate, an increasing perforating number, or an increasing fluid viscosity. The fracture spacing has a relatively smaller impact on the fracture geometry. It should be mentioned that the spatial variations of the stress state, natural fractures, and near-wellbore tortuosity are not considered in this study, but will be examined in our future work. Therefore, we should increase the number of fractures in the stage with the intrastage diversion techniques. In addition, a flow rate of 16 m³/min, 12 perforations per cluster, and an injection fluid of 2.0 mPa·s are better for improving the effectiveness of the stimulation treatments in the Changning shale gas field.
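A minimal sketch of this bookkeeping (the three-cluster fracture lengths below are placeholders, not the paper's simulation output): normalize the lengths by their stage average, take the standard deviation as the uniformity measure, and rank each parameter by the spread between its extreme cases, which is the tornado ordering:

```python
import numpy as np

def length_deviation(lengths):
    """Std. deviation of fracture lengths normalized by their mean."""
    lengths = np.asarray(lengths, dtype=float)
    return np.std(lengths / lengths.mean())

# Placeholder three-cluster lengths (m) at each parameter's min/max case.
cases = {
    "number of fractures": ([150, 90, 150], [180, 40, 180]),
    "injection rate":      ([170, 70, 170], [140, 120, 140]),
    "perforating number":  ([150, 110, 150], [175, 60, 175]),
    "fluid viscosity":     ([145, 115, 145], [170, 75, 170]),
    "fracture spacing":    ([160, 95, 160], [150, 105, 150]),
}

rows = []
for name, (lo, hi) in cases.items():
    d_lo, d_hi = length_deviation(lo), length_deviation(hi)
    rows.append((abs(d_hi - d_lo), name, d_lo, d_hi))

# Tornado ordering: largest spread between extreme cases first.
for spread, name, d_lo, d_hi in sorted(rows, reverse=True):
    print(f"{name:20s} dev(min)={d_lo:.3f} dev(max)={d_hi:.3f} spread={spread:.3f}")
```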
Conclusions
We applied a complex fracture propagation model to simulate multiple fracture propagation in the Changning shale gas field. The effects of the fracture spacing, perforating number, injection rate, fluid viscosity, and number of fractures within the stage on the fracture geometry were investigated based on field data from the Longmaxi shale formation in the Changning shale gas reservoir. The following conclusions can be drawn from this study: (1) The main factors controlling the cluster efficiency in the Changning shale gas field are the cluster number, the perforation density, the injection rate, and the liquid viscosity.
(2) Hydraulic fracture treatments with more than four clusters per stage, a lower injection rate, a larger perforating number, a higher-viscosity fluid, and closer fracture spacing can result in an increasing gap between the inner fracture and the outer fractures, and will likely exhibit poor production performance.
(3) This study provides a better understanding of the way to appropriately optimize a hydraulic fracturing treatment design which can increase the effective fracture number and promote the shale gas well performance in Changning.
Figure 1. Three transverse fractures with a uniform spacing of 23.3 m in a single stage.
Figure 2. Three transverse fractures propagating simultaneously in a single stage.
Figure 3. Percentage of total flow volume entering into each perforation cluster.
Figure 5. Effects of the different values of fracture spacing on the fracture geometry: (a) 10 m; (b) 15 m; (c) 23.3 m; and (d) 30 m.
Figure 10. Rank of five uncertain parameters on the deviation of the normalized fracture length.
Table 1. Input parameters for simulation cases in this study.
Table 2. The results of fracture length affected by different cluster spacings.
Table 3. The results of fracture length affected by different perforating numbers.
Table 4. The results of fracture length affected by different injection rates.
Table 5. The results of fracture length affected by different fluid viscosities.
Table 6. The results of fracture length affected by different numbers of fractures within the stage. | 2019-04-13T04:59:44.458Z | 2019-04-08T00:00:00.000 | {
"year": 2019,
"sha1": "0a58ad7e49357e5d55186851ab93da06651a0f7a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/12/7/1335/pdf?version=1554716828",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "0a58ad7e49357e5d55186851ab93da06651a0f7a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
211550014 | pes2o/s2orc | v3-fos-license | Rational design of novel coumarins: A potential trend for antioxidants in cosmetics
Coumarins are well known for their antioxidant effect and aromatic properties; thus, they are among the ingredients commonly added to cosmetics and personal care products. Quantitative structure-activity relationship (QSAR) modeling is an in silico method widely used to facilitate the rational design and structural optimization of novel drugs. Herein, QSAR modeling was used to elucidate key properties governing the antioxidant activity of a series of reported coumarin-based antioxidant agents (1-28). Several types of descriptors (calculated with 4 software programs, i.e., Gaussian 09, Dragon, PaDEL, and Mold2) were used to generate three multiple linear regression (MLR) models with preferable predictive performance (Q2(LOO-CV) = 0.813-0.908; RMSE(LOO-CV) = 0.150-0.210; Q2(Ext) = 0.875-0.952; RMSE(Ext) = 0.104-0.166). QSAR analysis indicated that the number of secondary amines (nArNHR), polarizability (G2p), electronegativity (D467, D580, SpMin2_Bhe, and MATS8e), van der Waals volume (D491 and D461), and H-bond potential (SHBint4) are important properties governing antioxidant activity. The constructed models were also applied to guide the in silico rational design of an additional set of 69 structurally modified coumarins with improved antioxidant activity. Finally, a set of 9 promising newly designed compounds was highlighted for further development. Structure-activity analysis also revealed key features required for potent activity, which would be useful for guiding future rational design. Overall, our findings demonstrate that QSAR modeling could be a facilitating tool to enhance the successful development of bioactive compounds for health and cosmetic applications.
INTRODUCTION
Free radicals (or oxidants) are highly reactive molecules containing an unpaired electron, which are generated as by-products of physiological processes and intracellular pathways (Valko et al., 2007; Winyard et al., 2005). These oxidants are well known for their harmful potential and deleterious effects on cellular components (i.e., DNA, proteins, and lipids). Under normal conditions, these radicals are scavenged/neutralized by the antioxidant defense mechanism (i.e., endogenous antioxidant molecules and antioxidant enzymes) to prevent cellular oxidative damage. However, a shift in the oxidative balance occurs when radicals are overproduced or the antioxidant defense mechanism is depleted. This situation leads to excessive accumulation of free radicals and oxidative stress. Oxidative damage is involved in the pathogenesis and progression of many chronic and aging diseases (i.e., cancer, diabetes mellitus, neurodegenerative diseases, and cardiovascular diseases) (Valko et al., 2007; Winyard et al., 2005). Furthermore, free radicals have been recognized as one of the factors contributing to skin aging (Bogdan Allemann and Baumann, 2008). Antioxidant compounds have been well recognized for their wide-ranging health applications, especially in the cosmeceutical area. Currently, the addition of antioxidants as active ingredients in cosmetics and personal care products has been widely documented (Kusumawati and Indrayanto, 2013; Lupo, 2001). Therefore, the discovery of novel potent antioxidant compounds, both from chemical synthesis (Prachayasittikul et al., 2009a; Subramanyam et al., 2017; Worachartcheewan et al., 2012) and natural-derived sources (Elansary et al., 2018; Krishnaiah et al., 2011; Prachayasittikul et al., 2008, 2009b, 2013; Wongsawatkul et al., 2008), has been noted as an attractive research area, especially for cosmetic applications (Kusumawati and Indrayanto, 2013; Lupo, 2001).
Coumarins, known as benzopyrones, are natural secondary metabolites bearing fused benzene and α-pyrone rings (Witaicenis et al., 2014). Natural-derived coumarins are found in a wide range of plants (Lee et al., 2007; Rodríguez-Hernández et al., 2019; Saleem et al., 2019; Venditti et al., 2019). Coumarins display a variety of biological activities, including antimicrobial (Arshad et al., 2011), antioxidant (Erzincan et al., 2015), anticancer (Nasr et al., 2014), and anti-inflammatory (Witaicenis et al., 2014) activities. Although synthetic coumarins were banned from oral products due to their potential toxicities, they are attractive for topical use due to their high skin-penetrating property (Stiefel et al., 2017). Additionally, coumarins are widely used as fragrance ingredients in cosmetics and personal care products because of their sweet herbaceous scent (Ma et al., 2015; Stiefel et al., 2017). The antioxidant property and protective effects of coumarins against skin photo-aging have also been remarked upon in the cosmetic area (Kostova et al., 2011; Lee et al., 2007). Previously, a set of synthesized coumarin derivatives containing 2-methylbenzothiazolines, sulphonamides, and amides was reported to exhibit antioxidant activity with IC50 values in the range of 0.024-2.888 mM (Khoobi et al., 2011; Saeedi et al., 2014). However, a deeper understanding of structure-activity relationships (SAR) and the mechanism of action is still necessary for the effective rational design of coumarin-based antioxidant agents (Kostova et al., 2011).
Computational approaches have been widely recognized to facilitate and increase the success rate of drug development (Nantasenamat and Prachayasittikul, 2015; Prachayasittikul et al., 2015a). Quantitative structure-activity relationship (QSAR) modeling is an in silico method for revealing the relationship between the chemical structures of compounds and their biological activities. QSAR modeling provides useful findings, such as the key features, properties, or moieties required for potent activity, which benefit the further rational design of related compounds. Currently, success stories of QSAR-driven rational design of several classes of promising lead compounds have been documented for anticancer agents (Prachayasittikul et al., 2015b), aromatase inhibitors (Prachayasittikul et al., 2017), and sirtuin-1 activators (Pratiwi et al., 2019). In the cosmetic area, QSAR modeling has been employed to improve understanding of the SAR of tyrosinase inhibitors (Gao, 2018; Khan, 2012).
Accordingly, this study aims to construct QSAR models to elucidate the SAR of a set of antioxidant coumarin derivatives (1-28, Figure 1) originally reported by Khoobi et al. (2011) and Saeedi et al. (2014). Herein, QSAR models were constructed using the multiple linear regression (MLR) algorithm to clearly demonstrate the linear relationship along with insightful SAR analysis. In an attempt to obtain robust and valid QSAR models, chemical descriptors were generated using four different software programs (i.e., Gaussian 09, Dragon, PaDEL, and Mold2) to increase the variety of represented physicochemical properties. Consequently, an additional set of structurally modified compounds was rationally designed based on the key findings of the constructed models, and their antioxidant activities were predicted to reveal promising candidates for further synthesis and development.
Data set
A data set of twenty-eight coumarin-based antioxidants (1-28, Figure 1) was retrieved from the literature (Khoobi et al., 2011; Saeedi et al., 2014), in which their antioxidant activities are presented in Table 1. All tested compounds were evaluated by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay (detailed methodology is provided in the original literature (Khoobi et al., 2011; Saeedi et al., 2014)). The activity was denoted as an IC50 value (mM), which indicates the concentration of the compound that inhibits 50% of the generated DPPH radicals in the experimental setting. As a part of data pre-processing, the unit of the IC50 values was converted from mM to M, and the IC50 values were further transformed into pIC50 (−log IC50) by taking the negative logarithm to base 10, as shown in Table 1. A compound with a high pIC50 (low IC50) represents high antioxidant activity. A schematic workflow of QSAR model development is provided in Figure 2.
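To make this transformation concrete, a minimal Python sketch of the unit conversion and pIC50 calculation is shown below (an illustration only, not code from the original study); the example values reproduce the reported activity range of 0.024-2.888 mM.

```python
import math

def pic50_from_ic50_mM(ic50_mM: float) -> float:
    """Convert an IC50 in mM to pIC50 = -log10(IC50 in M)."""
    ic50_M = ic50_mM * 1e-3  # mM -> M
    return -math.log10(ic50_M)

# The most and least potent activities reported in the data set:
print(round(pic50_from_ic50_mM(0.024), 3))  # ~4.62; higher pIC50 = more potent
print(round(pic50_from_ic50_mM(2.888), 3))  # ~2.54
```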
Molecular structure optimization
Molecular structures of the coumarin derivatives were constructed with GaussView (Dennington et al., 2003) and subjected to geometrical optimization in Gaussian 09 (Revision A.02) (Frisch et al., 2009) at the semi-empirical level using Austin Model 1 (AM1), followed by density functional theory (DFT) calculation using Becke's three-parameter hybrid method with the Lee-Yang-Parr correlation functional (B3LYP) together with the 6-31G(d) basis set.
Descriptor calculation and feature selection
The physicochemical properties (i.e., quantum chemical and molecular descriptors) were generated by different calculation software programs, including Gaussian 09; Dragon, version 5.5 (Talete, 2007); PaDEL, version 2.20 (Yap, 2011); and Mold2, version 2.0 (Hong et al., 2008). The calculated descriptors, as numerical values, represent properties of the compounds and were further used as predictors (X variables) for QSAR model construction. The lists of calculated descriptors are shown as follows.
An additional set of molecular descriptors was calculated with PaDEL to give 1,444 1D and 2D descriptors, and with Mold2 to generate 777 descriptors encoding 2D chemical structure information. Before the calculation, the molecular structures were saved as *.smi files and then converted to *.mol files using OpenBabel version 2.3.2 (The Open Babel Package, 2015). The *.mol files were used as the input data for calculation by the PaDEL and Mold2 programs.
Descriptor selection was performed to filter a set of important, informative descriptors from the whole set of descriptors. Feature selection was initially performed by stepwise multiple linear regression (MLR) using SPSS Statistics 18.0 (SPSS Inc., USA), followed by determination of intercorrelation using Pearson's correlation coefficient with a cutoff value of |r| ≥ 0.9. Any pair of descriptors with |r| ≥ 0.9 was defined as highly correlated predictors, and one of them was excluded.
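As a rough illustration of this correlation filter (a sketch, not the paper's SPSS workflow), the following Python function drops one descriptor from every pair whose absolute Pearson correlation meets the stated cutoff:

```python
import pandas as pd

def drop_intercorrelated(X: pd.DataFrame, cutoff: float = 0.9) -> pd.DataFrame:
    """Drop one descriptor from every pair with |Pearson r| >= cutoff."""
    corr = X.corr(method="pearson").abs()
    cols = list(corr.columns)
    to_drop: set[str] = set()
    for i in range(len(cols)):
        if cols[i] in to_drop:
            continue  # already removed; its correlations no longer matter
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] >= cutoff:
                to_drop.add(cols[j])  # keep the first member of each pair
    return X.drop(columns=sorted(to_drop))
```

Which member of a correlated pair to keep is a design choice; the stepwise MLR step in SPSS may resolve such ties differently.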
Data splitting
The data set of coumarin derivatives (1-28) was randomly split: 85% of the original data set (23 compounds) was used as the training and leave-one-out cross-validation (LOO-CV) sets, and 15% (5 compounds) was used as the external set. The training set was employed to generate the QSAR models, whereas the LOO-CV and external sets were used to evaluate them. The LOO-CV method was performed for internal validation by leaving one sample out of the whole data set to be used as the testing set while the remaining N−1 samples were used as the training set (Prachayasittikul et al., 2014). This sampling process was repeated iteratively until every sample in the data set had been used as the testing set. The external set was used to validate the models.
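A minimal Python sketch of this LOO-CV procedure, assuming an MLR model as in the paper (scikit-learn here stands in for the Weka implementation actually used):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_cv_predictions(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """LOO-CV: predict each sample with a model trained on the other N-1."""
    preds = np.empty_like(y, dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    return preds
```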
Multivariate analysis
QSAR models were generated using MLR according to Equation 1:

Y = B0 + B1X1 + B2X2 + ... + BnXn (1)

where Y is the antioxidant activity (pIC50), B0 is the intercept, and Bn are the regression coefficients of the descriptors Xn. The MLR method was performed using the Waikato Environment for Knowledge Analysis (Weka), version 3.4.5 (Witten et al., 2011).
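To show Equation 1 in action, the toy sketch below fits an MLR model and reads off B0 and Bn. The data here are synthetic placeholders, not the study's descriptor values, and Python's scikit-learn stands in for the Weka tool actually used:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: 23 "training compounds" x 4 "descriptors" (Xn).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(23, 4))
true_B = np.array([0.8, -0.3, 0.5, 0.1])
y_train = 2.5 + X_train @ true_B + rng.normal(0.0, 0.1, size=23)  # pIC50-like Y

mlr = LinearRegression().fit(X_train, y_train)
print("B0 (intercept):", round(mlr.intercept_, 3))
print("Bn (coefficients):", np.round(mlr.coef_, 3))
print("predicted Y for first 3 compounds:", np.round(mlr.predict(X_train[:3]), 3))
```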
Molecular descriptors selection
Chemical structures of the compounds and their antioxidant activities (Table 1) were used for the construction of predictive models. The compounds were geometrically optimized with the semi-empirical method AM1 followed by DFT/B3LYP/6-31G(d) using Gaussian 09 to obtain lower-energy conformers. The optimized compounds were used to extract 13 quantum chemical descriptors. These compounds were subsequently used as input files for calculating an additional set of 3,224 molecular descriptors (0D-3D) using Dragon software. The calculated descriptors with constant values and multi-collinearity were determined and removed to give a final set of 1,489 descriptors. In addition, the original molecular structures were saved in *.smi file format and converted into *.mol files using OpenBabel version 2.3.2; these *.mol files were then used as input for descriptor calculation with Mold2 and PaDEL, yielding 777 Mold2 2D descriptors and 1,444 PaDEL 0D-2D descriptors, respectively. Consequently, feature selection was performed to select a set of informative descriptors from the whole calculated set; the selected descriptors are listed in Tables 2 and 3, respectively.
[Table 2 excerpt, as recoverable from the extracted text: SpMin2_Bhe, smallest absolute eigenvalue of Burden modified matrix, n = 2, weighted by relative Sanderson electronegativities (2D, Burden modified eigenvalues); MATS8e, Moran autocorrelation, lag 8, weighted by Sanderson electronegativities (2D, autocorrelation); SssCH2, sum of atom-type E-state for -CH2- (2D, atom-type electrotopological state).]
Furthermore, the intercorrelation matrix between pairs of molecular descriptors was computed using Pearson's correlation coefficient (r) (Supplementary Tables 1-3). A cutoff value of |r| ≥ 0.9 was used to determine intercorrelation. The results showed no intercorrelation within the set of selected descriptors, as displayed by |r| values below 0.9, suggesting that each descriptor was independent of the others. Finally, a set of 14 selected descriptors was further employed to construct 3 QSAR models (one per software program used to calculate descriptor values) for predicting the antioxidant activity of the coumarin derivatives.
QSAR models
Descriptors obtained from these software programs have demonstrated successful QSAR modeling of activities such as antioxidant (Alisi et al., 2018; Rastija et al., 2018), antimicrobial (Alyar et al., 2009; Basic et al., 2014; Podunavac-Kuzmanović et al., 2009), anticancer (Sławiński et al., 2017; Suvannang et al., 2018), and antiviral (Saavedra et al., 2018; Worachartcheewan et al., 2019) activities. Herein, three models were separately constructed based on the types of key descriptors (model 1, Dragon descriptors; model 2, Mold2 descriptors; and model 3, PaDEL descriptors). A set of 14 selected informative descriptors (as independent variables, Table 2) and the antioxidant activities (pIC50 values, as dependent variables) of the studied compounds were included in the data sets for the construction of QSAR models using Eq. (1). Before building the models, the data set of coumarin derivatives (1-28) was split into training, LOO-CV, and external sets. The training set was used to construct the model using the MLR algorithm, whereas the LOO-CV and external sets were utilized for validating the constructed models. Compounds 1, 6, 15, 21, and 27 were randomly selected for the external set, while the remaining 23 compounds (i.e., 2-5, 7-14, 16-20, 22-26, and 28) were employed as the training set. As a result, three QSAR models (models 1-3) were successfully constructed for predicting the antioxidant activities (pIC50 values) of the studied coumarin analogs. (In the model equations, NTr, NLOO-CV, and NExt denote the numbers of compounds in the training, LOO-CV, and external sets, respectively, and R2Adj is the adjusted R2.)
Four molecular descriptors calculated with Dragon were used as predictors to construct QSAR model 1, as shown in Eq. (2). Statistical parameters indicating the predictive performance of the model are summarized in Table 4. In overview, the three constructed models provided satisfactory results, as indicated by their statistical parameters such as R2, Q2, RMSE, F ratio, and PRESS values. The R2 and Q2 of the obtained QSAR models are considered acceptable when R2 > 0.6 and Q2 > 0.5 (Golbraikh and Tropsha, 2002; Nantasenamat et al., 2010). These parameters for all constructed models were in the acceptable range (Frimayanti et al., 2011; Rastija et al., 2018). The statistical (Table 4) and graphical (Figure 3) results showed that the QSAR models (models 1-3) gave reliable agreement between the experimental and predicted antioxidant values. Furthermore, the plots of experimental activity against residual values (Figures 3b, 3d, and 3f) displayed a distribution of residuals on both sides of zero, indicating that there is no systematic error in the models (Jalali-Heravi and Kyani, 2004). Therefore, QSAR models 1-3 could reliably be used for predicting the antioxidant activity of coumarin derivatives. Considering the correlation coefficient (Q2) of the external set, the Dragon descriptors gave the highest prediction quality for the external test set (model 1: Q2Ext = 0.952), followed by the PaDEL descriptors (model 3: Q2Ext = 0.885) and the Mold2 descriptors (model 2: Q2Ext = 0.875).
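For reference, a small Python sketch of how such Q2 and RMSE values can be computed from experimental and predicted pIC50 vectors. Definitions of Q2 vary slightly across the QSAR literature; the form below, 1 − PRESS/TSS, is one common convention and an assumption here:

```python
import numpy as np

def q2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Predictive squared correlation: 1 - PRESS / total sum of squares."""
    press = np.sum((y_true - y_pred) ** 2)       # predictive residual SS
    tss = np.sum((y_true - y_true.mean()) ** 2)  # total SS around the mean
    return float(1.0 - press / tss)

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Acceptability screen used in the text: R2 > 0.6 and Q2 > 0.5.
```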
Structure-activity relationship (SAR)
Regression coefficient values of the key descriptors (as independent variables) in the QSAR models define the degree or weight of their influence on the dependent variable. To gain insights into SAR, the coumarin derivatives (1-28, Figure 1) are categorized into 3 groups according to their core structures (thiazoles, group I: 1-9 and 18; sulfonamides, group II: 10-17; and amides, group III: 19-28) for effective SAR analysis. Thiazole group I (1-9 and 18) showed antioxidant activity (Table 1) with a pIC50 range of 2.539-4.612. The most potent and least potent compounds of benzothiazole group I were 5 (pIC50 = 4.612) and 6 (pIC50 = 2.539), respectively. Among group II compounds (10-17), compound 11 was the most active (pIC50 = 3.180), and 14 was the least active compound. For group III amides 19-28, compound 21 displayed the most potent activity (pIC50 = 3.027) and compound 19 exhibited the lowest activity with a pIC50 of 2.640.
According to the significant descriptors in models 1-3, secondary (sec-) amine, polarizability, electronegativity, and H-bond properties displayed a positive effect on the antioxidant activity. This is noted in the most potent coumarin 5, bearing a sec-amine (part of the aromatic thiazole) and a 7-OH group (on the coumarin ring) with H-bond and polarizability properties. On the other hand, tertiary (tert-) amine 6 without the 7-OH group exerted the lowest activity among the coumarin derivatives 1-28. This implies that the sec-amine (-NH-) and the OH group, as H-bonding and polarizing groups, are important for better activity.
It should be noted that the most potent modified compounds had higher values of the H-bonding descriptor (SHBint4 = 2.048-16.903, Supplementary Table 7) compared with their parent compounds (SHBint4 = 0.000-7.5875, Table 3). Thus, SHBint4 might be an important descriptor governing potent antioxidant activity.
CONCLUSION
Understanding SAR is important for improving bioactivities and pharmacokinetic properties in the development of potent and safe cosmetic products. Herein, a set of coumarin derivatives (1-28) with antioxidant activity was used to construct three QSAR models (1-3) using three different descriptor types and the MLR method. Statistical evaluation showed that the three generated QSAR models provide good reliability and comparable predictive performance (Q2(LOO-CV) = 0.813-0.908; RMSE(LOO-CV) = 0.150-0.210; Q2(Ext) = 0.875-0.952; RMSE(Ext) = 0.104-0.166). In addition, the good correlation obtained from model prediction suggests that the selected significant descriptors are good representatives for revealing the correlation between the chemical structures of the compounds (i.e., nArNHR, H-bonding, polarizability, van der Waals volume, and electronegativity properties) and their antioxidant activities. An application of the constructed models was demonstrated by rationally designing an additional set of 69 structurally modified coumarins based on the key descriptors, whose antioxidant activities were predicted using the obtained QSAR models (1-3). Most of the rationally designed compounds displayed improved antioxidant activity compared with their parents. In particular, the top three newly designed compounds (5h, 4g, and 3n) showed high H-bonding (SHBint4) descriptor values, which may play a part in governing their markedly improved antioxidant activity. Finally, a set of newly designed promising coumarin analogs was highlighted for their potential to be further developed as potent antioxidants. These SAR insights also provide beneficial guidelines for the rational design of novel coumarin-based compounds with potent antioxidant effects for cosmetic applications.
Supplementary information
Supplementary information is available on the EXCLI Journal website. | 2020-02-28T03:47:02.935Z | 2020-02-26T00:00:00.000 | {
"year": 2020,
"sha1": "a7986b181578da96dacd50838396bd6eb7869f75",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b0398a4d6ded42b9b7d523215b27bfd773c3d0ad",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
233843604 | pes2o/s2orc | v3-fos-license | The Effectiveness of Software As Learning Media To Detect And Reduce Misconception In Stoichiometry Material
In chemistry, abstract material can be perceived through three levels of representation: macroscopic, sub-microscopic, and symbolic. However, many students develop misconceptions due to the difficulty of shifting between the three levels of representation. Misconceptions should be reduced or prevented as early as possible because they are resistant and difficult to change. Stoichiometry involves abstract concepts that are challenging for some students and often lead to misconceptions. In this study, these misconceptions were detected using a four-tier diagnostic instrument and reduced using a conceptual change text strategy. The instrument and strategy were presented in the form of software named Stoichiometry Reconstruction, built with the PHP programming language supported by the XAMPP application, because software serves two functions, as an assessment tool and a learning tool. Software must be shown to be effective before students use it. This study aims to determine the effectiveness of the software in detecting and reducing misconceptions in stoichiometry material. This study used the Research and Development method. The results show that the software is effective in detecting and reducing misconceptions, as proved by the average percentage of shifts from misconception to understanding the concept of about 80.13%, which is categorized as effective.
INTRODUCTION
Chemistry represents abstract material because it deals with reactions and the atomic constituents of compounds, which cannot be directly observed [1]. It involves three representation levels, namely macroscopic, sub-microscopic, and symbolic. Moving between the macroscopic, sub-microscopic, and symbolic levels is very important in teaching chemistry. However, many students develop misconceptions due to difficulty in understanding the movement between the three representation levels [2]. A misconception is a view of a concept that differs from what experts believe. It should be reduced or prevented as early as possible because it is resistant and difficult to change [3]. Several concepts in chemistry material have a high percentage of misconceptions, and most of them come from stoichiometry material. Existing experimental results support this: among 73 Grade XI pupils of SMA Negeri 1 Sukoharjo, misconceptions in stoichiometry material were found with percentages of about 40.46% for the chemical equation concept, 38.36% for the relative atomic or molecular mass concept, and 53.77% for the mole concept [4]. Misconceptions about stoichiometry concepts must be reduced because stoichiometry is an important basic concept of analytical chemistry.
Misconceptions can be detected by diagnostic and non-diagnostic tests. One example of a non-diagnostic test is the essay, which is less effective because it is time-consuming [5]. Therefore, the diagnostic test is believed to be an effective way to detect misconceptions. There are several examples of diagnostic tests, such as the two-tier [6], three-tier [7], and four-tier [8] diagnostic tests. The four-tier diagnostic test represents the best instrument for detecting misconceptions because it retains all the strengths of a three-tier diagnostic test and can truly assess misconceptions, distinguishing them from errors and lack of knowledge [9]. This instrument is a modification of the three-tier diagnostic test, with the modifications located in the second and fourth tiers, namely the confidence levels [10]. The test contains four tiers: a question with several options, the confidence level of the answer to the question, the reason for the answer in the first tier, and the confidence level of the reason. Based on the results of all tiers, a student's understanding of a concept is classified as understanding, not understanding, or misconception [11]. In this way, students can immediately learn about their understanding of a concept and become dissatisfied with it if it is classified as a misconception or as not understanding the concept. This condition is suitable for use in changing misconceptions.
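The decision rule that maps the four tiers onto these three categories is given in the paper's Table 1, which is not reproduced in this text; the sketch below therefore assumes the scheme commonly used in the four-tier literature (confident and fully correct leads to understanding; confident but wrong in the answer or the reason leads to misconception; low confidence on either tier leads to not understanding):

```python
def classify_four_tier(answer_correct: bool, answer_confident: bool,
                       reason_correct: bool, reason_confident: bool) -> str:
    """Classify one four-tier response as U, M, or DU.
    NOTE: this rule is an assumption based on common four-tier schemes,
    since the paper's Table 1 is not reproduced here."""
    if not (answer_confident and reason_confident):
        return "DU"  # low confidence on any tier -> did not understand
    if answer_correct and reason_correct:
        return "U"   # confident and fully correct -> understands the concept
    return "M"       # confident but wrong answer and/or reason -> misconception
```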
According to Posner, four conditions have to be created to change misconceptions: people must be dissatisfied with their existing conceptions, a new concept must be intelligible, a new concept must appear initially plausible, and a new concept should suggest the possibility of a fruitful research program. A strategy appropriate for creating these conditions is the conceptual change text (CCT) strategy. CCT is a strategy for reducing misconceptions that uses text to show the differences between the scientific conception and the reader's conception [12]. Under these conditions, cognitive conflict occurs, which helps reconstruct new concepts in the reader's mind.
A new concept is easier to explain if the information is presented both visually and verbally [13]. This statement is supported by the dual coding theory of Paivio: information presented both visually and verbally increases the use of working memory, and people process new information more easily when it is presented in both forms. Basic computer multimedia such as software can present information in both visual and verbal forms simultaneously. Software can present pictures to help teachers explain the abstract concepts of chemistry material, and it has two functions, as an assessment tool and a learning tool. Therefore, software is suitable for detecting misconceptions using four-tier diagnostic instruments and reducing misconceptions based on the CCT strategy.
Based on the background explained above, this study aims to determine the effectiveness of software in detecting and reducing misconceptions in stoichiometry material. This aim can be achieved by answering the research question: "How effective is the software in detecting and reducing misconceptions in stoichiometry material?"
MATERIALS
This study used the Research and Development (R&D) method described by Sugiono [14]. There are 10 stages in this method, namely 1) potentials and problems, 2) data collection, 3) product design, 4) design validation, 5) design revision, 6) product trial, 7) product revision, 8) trial use, 9) product revision, and 10) wide production. The software, named Stoichiometry Reconstruction, was built with the PHP programming language supported by the XAMPP application. It was validated by experts and revised based on their comments, and it was categorized as valid with a content validity percentage of 85.37% and a construct validity percentage of 76.67%. Thus, this study only discusses the results of the product trial at the sixth stage to determine the effectiveness of the software, while the seventh to tenth stages were not carried out.
The effectiveness of the software was analyzed from the shifts in students' conceptions when working on the diagnostic test. The test used a four-tier diagnostic instrument consisting of four tiers: a concept question, confidence in the answer to the question, a reason, and confidence in the reason. The questions covered the definition of the molar mass concept, the definition and application of the percent composition by mass concept, and the definition and characteristics of the limiting reactant concept. Each student's answer was classified as a misconception (M), understanding the concept (U), or not understanding the concept (DU) based on Table 1 [11]. Data were collected using the four-tier diagnostic test, administered twice as a pretest and a posttest. The pretest was given before students passed through the reduction part of the software, while the posttest was given afterward. The pretest and posttest results were classified based on Table 1 to obtain students' initial and final conceptions, and the two were compared to identify conception shifts. Both the initial conceptions and the conception shifts were analyzed. There are several possible conception shifts: from misconception to understanding the concept (M-U), from misconception to not understanding the concept (M-DU), from not understanding the concept to misconception (DU-M), and from not understanding the concept to understanding the concept (DU-U). M-U and DU-U represent positive shifts, while M-DU and DU-M represent negative shifts. The conception shift data were supported by cognitive conflict data from students' answers during the second stage of the reduction part; in this part, students are given three questions about the cognitive conflict that may have occurred in their minds. The effectiveness of the software was analyzed by counting the number of M-U shifts and converting it to a percentage using the formula below. The percentage was then interpreted based on Table 2. The software is considered effective if its effectiveness percentage is ≥ 61% [16].
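The formula itself did not survive extraction in this text; based on the description (count M-U shifts, then convert to percent), it is presumably the proportion of M-U shifts among the tested students, as in this hedged sketch:

```python
def effectiveness_percent(shifts: list[str]) -> float:
    """Percentage of misconception-to-understanding (M-U) shifts.
    Assumed form: (number of M-U shifts / number of students) * 100,
    since the paper's printed formula is missing from this text."""
    n_mu = sum(1 for s in shifts if s == "M-U")
    return 100.0 * n_mu / len(shifts)

# 12 of 15 students shifting M-U would give 80%, close to the reported 80.13%.
print(effectiveness_percent(["M-U"] * 12 + ["M-DU", "DU-U", "DU-M"]))  # 80.0
```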
RESULTS AND DISCUSSION
This study was conducted from September 2019 to February 2020 at SMA Negeri 1 Gedangan. The subjects of this study were 15 students, selected based on the results of a diagnostic test administered beforehand using the four-tier diagnostic instrument. The results were expressed as percentages, and the students with the highest percentages of misconceptions were selected as the subjects of this study.
Students used the software on personal computers connected to a server via the school WiFi. Based on the four-tier concept classification in Table 1, the software detected their initial conceptions after they completed the pretest. The pretest results on stoichiometry material can be seen in Figures 1 to 3.
FIGURE 1. Pretest Results of Molar Mass Concept
There are 5 questions about the molar mass concept that students had to answer in the pretest. Figure 1 shows that all of the students held misconceptions (M) in answering questions 1, 3, and 5. For question 2, 12 students held misconceptions (M), 2 students understood the concept (U), and 1 student did not understand the concept (DU). For question 4, 13 students held misconceptions (M), 1 student understood the concept (U), and 1 student did not understand the concept (DU).
FIGURE 2. Pretest Results of Percent Composition by Mass Concept
There are also 5 questions about the percent composition by mass concept, in the same four-tier diagnostic test format as the molar mass concept. Figure 2 shows the results for each question; for question 5, 12 students held misconceptions (M), 1 student understood the concept (U), and 2 students did not understand the concept (DU). Generally, the number of students who understood this concept was larger than for the first concept, while the number of students who held misconceptions was smaller.
FIGURE 3. Pretest Results of Limiting Reactant Concept
There are also 5 questions about the limiting reactant concept in the same four-tier diagnostic test format. Figure 3 shows that, for question 1, 13 students held misconceptions (M), no one understood the concept (U), and 2 students did not understand the concept (DU). All of the students held misconceptions (M) in answering question 2. For question 3, 11 students held misconceptions (M), no one understood the concept (U), and 4 students did not understand the concept (DU). For question 4, 11 students held misconceptions (M), 1 student understood the concept (U), and 3 students did not understand the concept (DU). For question 5, 14 students held misconceptions (M), no one understood the concept (U), and 1 student did not understand the concept (DU). Based on these data, the number of students who understood this concept was smaller than for the two previous concepts, meaning that many students held misconceptions about or did not understand this concept.
Students identified as holding misconceptions or not understanding the concept had to pass through the reduction part, which is based on the CCT strategy. There are four stages in the CCT strategy: first, showing the initial conception; second, creating cognitive conflicts; third, creating an equilibration condition; and fourth, reconstructing the new concept [15]. In the first stage, students are presented with their diagnostic test results. In the second stage, students are presented with statements that may match their misconceptions; if students believe a statement is true, they are then shown the correct explanation of that statement. The third stage consists of a complete explanation of the molar mass concept, while the fourth stage consists of questions based on the explanation in the third stage to help students construct their new concept. After passing all of the stages, students were asked to take the posttest to determine their conception changes. These conception shift data were used to determine the effectiveness of the software and can be seen in Tables 3 to 5. Among the conception shifts, the average percentage of M-U shifts for the limiting reactant concept was the largest. This is attributed to the animation presented in the third stage of the CCT part, which helps students understand the abstract concept; animation is used to explain abstract material [17]. In the molar mass and percent composition concepts, by contrast, most of the information is explained in text. Information is better presented in both pictures and text, because this makes students understand the concept more easily than text alone. Overall, the average percentage of M-U shifts for stoichiometry material is 80.13%, which lies in the range of 61%-81%, the effective category. This means the software is categorized as effective for detecting and reducing misconceptions in stoichiometry material.
CONCLUSION
Based on the results of this study, the software named Stoichiometry Reconstruction is categorized as effective for detecting and reducing misconceptions in stoichiometry material. This is proved by the average percentage of M-U shifts of about 80.13%, which is in the range of 61%-81%, the effective category. This percentage is obtained from the average M-U shift percentage of each concept, namely 77.59% for the molar mass concept, 71.12% for the percent composition by mass concept, and 90.67% for the limiting reactant concept.
"year": 2021,
"sha1": "55b0e57f9a0128bbc80f2b6ba9e34285a1f5a57e",
"oa_license": "CCBYSA",
"oa_url": "https://journal.uii.ac.id/IJCER/article/download/15967/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "446bb8ac4b3358ea7bcfc24bde52e65623aec7ec",
"s2fieldsofstudy": [
"Chemistry",
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
252093193 | pes2o/s2orc | v3-fos-license | Research trends and hotspots of exercise for Alzheimer’s disease: A bibliometric analysis
Objective Alzheimer’s disease (AD) is a socially significant neurodegenerative disorder among the elderly worldwide. An increasing number of studies have revealed that as a non-pharmacological intervention, exercise can prevent and treat AD. However, information regarding the research status of this field remains minimal. Therefore, this study aimed to analyze trends and topics in exercise and AD research by using a bibliometric method. Methods We systematically searched the Web of Science Core Collection for published papers on exercise and AD. The retrieved data regarding institutions, journals, countries, authors, journal distribution, and keywords were analyzed using CiteSpace software. Meanwhile, the co-occurrence of keywords was constructed. Results A total of 1,104 papers were ultimately included in accordance with our specified inclusion criteria. The data showed that the number of published papers on exercise and AD is increasing each year, with papers published in 64 countries/regions and 396 academic journals. The Journal of Alzheimer’s Disease published the most papers (73 publications). Journals are concentrated in the fields of neuroscience and geriatrics gerontology. The University of Kansas and the United States are the major institution and country, respectively. The cited keywords show that oxidative stress, amyloid beta, and physical exercise are the research hotspots in recent years. After analysis, the neuroprotective effect of exercise was identified as the development trend in this field. Conclusions Based on a bibliometric analysis, the number of publications on exercise and AD has been increasing rapidly, especially in the past 10 years. “Amyloid beta,” “oxidative stress,” and “exercise program” trigger the most interest among researchers in this field. The study of exercise program and mechanism of exercise in AD is still the focus of future research.
Introduction
Alzheimer's disease (AD) is an age-related neurodegenerative disorder caused by damage to brain neurons (Alzheimer's Association, 2022). Hallmark pathological changes include extracellular neuritic amyloid plaques and neurofibrillary tangles (Barthelemy et al., 2020). AD is characterized by cognitive decline and behavioral changes, with memory, language, and thinking problems as the initial symptoms (Dubois et al., 2016, 2018). Progressive symptoms increasingly affect activities of daily living, and the rate at which AD progresses varies from person to person. An estimated 55 million people were reported by Alzheimer's Disease International in 2021 to be living with dementia worldwide, and patients with AD are expected to increase to 78 million by 2030 (Alzheimer's Disease International, 2021). AD accounts for the largest proportion of patients with dementia, and the proportion among those over 60 years old is about 65% (Jia et al., 2020). AD is a fatal illness, and the average survival time has been reported to vary from 4 to 8 years for patients aged 65 years and older (Larson et al., 2004). Globally, given the large population of patients with AD and the harmfulness of this disease, AD has become a considerable health and economic burden, calling for more effective measures to control it.
To date, AD is treatable but not curable. Current interventions mostly aim to slow the progression of AD, reduce its symptoms, and improve quality of life. Exercise is considered an important lifestyle modification that can help delay the onset of cognitive deterioration and improve the quality of life of patients with AD (Lautenschlager et al., 2008; Petersen et al., 2018). Numerous studies have found that aerobic exercise, resistance exercise, and cognitive-physical exercise can help patients with AD in many aspects, such as improving cardiovascular fitness, attenuating neuroinflammation, and supporting the brain's clearance of Aβ peptides (Karssemeijer et al., 2017; Morris et al., 2017; Sun et al., 2018; Ribaric, 2022). Given the increase in the number of publications about the use of exercise in treating AD, identifying research trends and hotspots is highly significant. However, a quantitative analysis of this research theme has not yet been conducted.
Bibliometric analysis is a scientific method for constructing a co-occurrence network of research themes by using quantitative statistics (Hicks et al., 2015). The software tool, CiteSpace, is used to show a visual map of bibliometric results, such as journals, authors, institutes, keywords, citations, popular topics, and frontiers. Some reviews on the use of exercise to treat AD with different emphases have been published; however, a comprehensive and visualized analysis remains lacking (Karssemeijer et al., 2017;Ribaric, 2022). Therefore, we conduct a bibliometric analysis of exercise and AD research to reveal the dynamic development in this field. This study helps provide a comprehensive understanding of this topic and guide future research direction.
Search strategy
The data used in this study were collected from the Science Citation Index Expanded (SCI-E) of the Web of Science Core Collection database. The search strategy was as follows: TS = (exercise OR sport OR "physical activity" OR training OR running OR swimming OR dance OR walking OR yoga OR "tai chi" OR pilates OR qigong OR liuzijue OR wuqinxi OR yijinjing OR baduanjin) AND TS = Alzheimer*. This strategy identified papers with these words mentioned in their title, abstract, author keywords, or Keywords Plus. Only articles and reviews were included as document types. The time span was from inception to June 30, 2022. All papers from the search were preliminarily included; we then screened all the papers by reading the title, abstract, and author keywords and excluded irrelevant literature (e.g., "computer running," "speech training," and "common training library"). Discrepancies were resolved via discussion. Figure 1 shows the flow diagram of the publication screening process.
Analytical tool
CiteSpace (Chaomei, 2006) is a visual analysis application developed by Dr. Chen Chaomei of Drexel University. It is based on the theory of citation analysis, which has been applied by many scholars worldwide. CiteSpace is well recognized for transforming quantitative literature data into visual maps and networks that provide key information, including research trends, popular topics, and the distribution of countries. Cluster and time-zone views are included in CiteSpace's visualization, and such visual networks have been confirmed to be significant for identifying research trends and key points. Different nodes and links appear in the various CiteSpace visualization knowledge maps. Nodes represent key points, countries, institutions, and journals; the larger the node, the greater the number of occurrences or citations in this field. Different colors represent different years: cold-hued nodes represent relatively early times, while warm-hued nodes represent relatively late times. The centrality of a node indicates the importance of the node's status in the network. In CiteSpace, a node with a purple ring is considered a pivotal point with high centrality (Chen, 2017). Microsoft Excel (2019) was used to generate a graph of the trends in annual publications and citations.
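To illustrate the centrality idea, here is a small Python sketch using networkx with a toy network. The institution names and edges are made up for illustration, and the sketch assumes betweenness centrality, the measure CiteSpace is generally described as using, with the purple-ring threshold of 0.1 from the text:

```python
import networkx as nx

# Toy co-occurrence network standing in for a CiteSpace map; edges link items
# (e.g., institutions or keywords) that appear together in the same papers.
G = nx.Graph()
G.add_edges_from([
    ("Univ Kansas", "Univ Minnesota"),
    ("Univ Melbourne", "Harvard Univ"),
    ("Harvard Univ", "Univ Pittsburgh"),
    ("Univ Pittsburgh", "Univ Kansas"),
    ("Univ Pittsburgh", "Univ Melbourne"),
])

# Nodes with betweenness centrality >= 0.1 would get a purple ring in CiteSpace.
centrality = nx.betweenness_centrality(G)
pivotal = {n: round(c, 2) for n, c in centrality.items() if c >= 0.1}
print(pivotal)  # e.g., {'Univ Kansas': 0.5, 'Univ Pittsburgh': 0.67}
```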
Publication trends
A total of 1,104 publications met our inclusion criteria. Researchers are paying more and more attention to AD each year, and consistent with this, the number of studies on exercise intervention in AD shows an overall upward trend despite some minor fluctuations (Figure 2). The volume of published literature can be broadly divided into three periods: 1987-2000, 2000-2006, and 2006-2021. The number of publications in the first period was low, while the number in the second and third periods increased significantly. The maximum number of relevant publications (n = 135) was reached in 2021. A considerable increase occurred in the second period, while the largest increase was recorded in the third period, with 122 additional publications from 2006 to 2020. Citations of the literature have increased substantially every year, with the most significant increase occurring from 2016 to 2021. The large numbers of publications and citations in recent years indicate the attention given by scholars to this area.
Analysis of countries/regions and institutions
The literature came from 64 countries/regions. As shown in Table 1, the top 3 countries/regions in terms of number of publications are the United States (424 publications), China (137 publications), and Brazil (86 publications). Among these, the top three in terms of citations are the United States, Australia, and Canada, while the countries with the highest average citations per item are Australia, Germany, and the United States, in that order. The top three countries in terms of h-index are the United States, Australia, and Germany. The United States dominates the field, with the highest number of publications, citations, centrality, and h-index. Although China and Brazil have more publications, they have fewer citations, and their academic influence is more limited. Table 2 lists the 10 institutions with the highest number of publications. The most productive institution is the University of Kansas (31 publications), the University of Melbourne (28 publications) has the second highest number of publications, followed by the University of Minnesota (27 publications).
Figure 2. Annual number of publications on AD, and on exercise and AD, from 1987 to 2022.
Figure 3. The map of co-institutions. The nodes in the map represent institutions, lines between the nodes represent co-citation relationships, and a purple ring represents centrality.
Links between the University of Melbourne, Harvard University, University of Pittsburgh, University of California San Francisco and other institutions indicate close collaboration. The centrality of the University of Pittsburgh, the University of Western Ontario, the University of California San Francisco and Harvard University is greater than 0.1, demonstrating a wide range of academic influence.
Analysis of authors
The papers were contributed by 5,267 authors. The top 10 authors in terms of number of publications are presented in Table 3. The top three most frequently cited authors were Cotman CW (2,387 citations), Lautenschlager NT (1,488 citations), and Cox KL (1,322 citations). Cotman CW had a significantly higher number of citations than the other authors. He also had the highest number of citations per item and the highest h-index, endowing him with greater academic influence. As shown in Figure 4, the field has formed a relatively large number of research teams, with many highly productive authors among them. The major research teams are those of Yu F, Cotman CW, Burns JM, Zhang L, and Hasselbalch SG.
Analysis of journals and categories
All the retrieved publications were published in 396 different journals; the top journals are listed in Table 4. Publications from the top 10 journals were primarily in Neurosciences (409 publications) and Geriatrics and Gerontology (268 publications). The top ten Web of Science categories in this field also include Clinical Neurology.
Figure 4. The map of co-authors. The nodes in the map represent co-authors, lines between the nodes represent co-citation relationships, and a purple ring represents centrality.
Analysis of the top 10 most cited papers
Table 5 lists the top 10 papers on exercise and AD research, based on total citations and average citations per year, which focused on research themes including cognitive function.
Analysis of keywords
Figure 5 shows the co-occurrence of keywords, which are closely linked to one another. Figure 6 presents the cluster diagram for the keywords. The clusters include oxidative stress, cerebral blood flow, skeletal muscle, mice, doubly labeled water, education, apolipoprotein e, nervous system autonomic, vitamin c, leisure activity, and behavioral training. Figure 7 shows the top 25 keywords with the strongest citation bursts since 1987. These keywords are also the research frontiers in the field. The red bars indicate the emergence and duration of research hotspots. The top 25 keywords with the strongest citation bursts began in 2001. These words are mainly concentrated in two categories: programs of exercise intervention in AD (e.g., physical exercise, aerobic exercise, treadmill exercise, voluntary exercise, and leisure activity) and mechanisms of exercise intervention in AD (e.g., amyloid beta, cerebral blood flow, amyloid precursor protein, and long-term potentiation). The keyword with the strongest citation burst is amyloid beta.
Combined with the above data and analysis, the major research frontiers in recent years are exercise programs, amyloid beta, and oxidative stress.
Discussion
Global research trends of exercise on Alzheimer's disease
This study described the landscape of exercise-based rehabilitation and prevention of AD by analyzing subject categories and the contribution of countries, journals, and authors. A total of 1,104 papers were obtained through the retrieval strategy.
Our study found that research on the relationship between exercise and AD started in 1987, and the number of published papers has exhibited an increasing trend every year. The number of papers published in 2021 is 3.97 times that in 2010 and 33.75 times that in 2000. Among all the studies, eight highly cited publications were published in the last 10 years (2013-2021), which might mark a period of high-quality development in the field of exercise and AD research. These findings suggest that exercise and AD are eliciting extensive attention among researchers and have become popular research issues in recent years. This phenomenon may be related to the increase in AD incidence with the expansion of the aging society and the rapid development of sports science and rehabilitation medicine (Zong et al., 2022).
By analyzing the journals publishing on exercise and AD, we determined that researchers are concentrated in the fields of neuroscience and geriatrics. Among them, the Journal of Alzheimer's Disease (73 publications), Frontiers in Aging Neuroscience (34 publications), Behavioural Brain Research (26 publications), Current Alzheimer Research (21 publications), and International Journal of Molecular Sciences (21 publications) have given the most attention to this field, demonstrating that the research field has focused on neuroscience and gerontology. The top three Web of Science categories are neuroscience, geriatrics gerontology, and clinical neurology. In addition, the top 15 categories include sport science, psychology, behavioral science, biochemistry molecular biology, and rehabilitation. This finding suggests that exercise-based rehabilitation and prevention of AD is typically a multidisciplinary collaborative effort. The etiology of AD is closely related to neuroscience and aging (Atri, 2019; Ribaric, 2022), while exercise intervention and prevention belong to the categories of sports science and rehabilitation, and their mechanism research involves behavior, cognition, physiology, and biochemistry. Therefore, establishing a multidisciplinary team to conduct exercise intervention and prevention of AD is beneficial. The quantitative and visual analyses of country/region distribution show that the United States is the leading country in the field of exercise-based rehabilitation and prevention of AD, with the highest number of studies (424 publications), citations, centrality, and h-index. This result may be due to the internal drive created by its aging society and the large investment of research funding (the United States Department of Health and Human Services, National Institutes of Health, and National Institute on Aging are the top three funders for this field, all of which are United States institutions). Some Asian countries, such as China and Japan, have participated in research in this field and made several achievements. Among them, China has performed well in the number of published papers (137 publications) and in funding support (the National Natural Science Foundation of China ranks 4th among funding institutions). However, China exhibits no advantage in average citations per item, total citations, or h-index, indicating that the quality of research should be further improved. Possible reasons are as follows: the density and breadth of China's international cooperation in this field are insufficient, and iconic research institutions are lacking; therefore, high-quality research output is also lacking. This situation may limit the development of research on exercise and AD, because China, the world's most populous country, also has a rapidly increasing aging population and thus urgently needs to make breakthroughs in this field. Therefore, we suggest that European and American research institutions strengthen cooperation and exchanges with Chinese institutions to promote the progress of research on exercise-based rehabilitation and prevention of AD worldwide.
From the perspectives of author contribution and co-citation, the author co-occurrence chart (Figure 4) shows numerous nodes, and the connections between clusters are relatively close, indicating a high number of international researchers in this field. However, the research directions are relatively scattered. As shown in Table 3, Yu F (24 publications, 321 citations), an American researcher from the University of Minnesota, published the largest number of papers. He began studying the effects of exercise on AD in 2006. This author believes that physical activity and exercise can prevent or relieve the cognitive and functional impairments brought by AD because exercise may improve the pathogenesis of AD and stimulate the brain plasticity of patients (Yu et al., 2006; Gronek et al., 2019; Zong et al., 2022). Another key author, Cotman CW (13 publications, 2,383 citations, and an h-index of 12), had the highest citation count on the basis of a high number of papers, showing good academic influence. He has been conducting research in this area since 1999, focusing on the mechanism of exercise intervention in AD. The author believes that the main ways for exercise to improve cognitive function in AD patients are to reduce Aβ deposition (Adlard et al., 2005) and alleviate the neuroinflammation caused by oxidative stress (Parachikova et al., 2008; Ionescu-Tucker and Cotman, 2021). In addition, the authors observed that exercise can restore hippocampal function in AD patients by enhancing the expression of brain-derived neurotrophic factor (BDNF) and other growth factors that promote neurogenesis, angiogenesis, and synaptic plasticity (Intlekofer and Cotman, 2013; Berchtold et al., 2019).
Figure 7. The keywords with the strongest citation bursts of publications. Each blue or red short line represents a year, and a red line stands for a burst-detected year.
Literature review
Based on the highly cited literature, keyword co-occurrence and burst analysis can not only reveal the core contents and research topics of publications in a certain field but also help us learn the current research focus and development trends in this field.
The effect of exercise on Alzheimer's disease
According to the keyword burst chart (Figure 7), five burst keywords related to exercise were found: "leisure activity," "treadmill exercise," "physical exercise," "aerobic exercise," and "voluntary exercise." The burst keywords "treadmill exercise" and "aerobic exercise," which appeared in 2018, have continued to appear through 2022. These burst keywords indicate that the exercise program is a current research hotspot in the field. The effect of exercise on AD may vary based on the pattern, intensity, and duration of exercise. A meta-analysis has compared the effects of different exercise modalities (aerobic exercise, muscle strength training, and combined training) on the function of patients with AD (Lopez-Ortiz et al., 2021). The results showed that aerobic exercise can improve the cognitive and physical functions of AD patients, whereas muscle strength training and combined training had no significant effect. The forms of aerobic exercise included in this meta-analysis were cycling, walking, treadmill exercise, and arm ergometry. As for exercise intensity, a medium to high intensity, measured by maximum heart rate or heart rate reserve, was usually used. Certain evidence indicates that high-intensity interval training is more beneficial than moderate continuous exercise training for slowing the progression of AD: the former produces higher lactate levels, which elicit larger increases in BDNF, which participates in the neurotrophic signaling pathways underlying improvements in learning and memory function (Boyne et al., 2019; Antunes et al., 2020). However, the study of Jahangiri et al. (2019) supports moderate and regular exercise; the authors considered that high-intensity exercise will lead to an excessive stress response, which may cause symptoms of cognitive impairment. In addition, summarizing this meta-analysis, we observed that the training time ranged from 30 to 90 min, training occurred 2-3 times a week, and the total intervention duration ranged from 9 weeks to 9 months. In animal studies, we observed that most of the literature on physical activity for AD used treadmill exercise (da Costa Daniele et al., 2020). Although numerous works have addressed the mechanisms underlying the benefits of exercise for AD, no study has compared the effects of various intensities, frequencies, and durations on these mechanisms. Further studies evaluating differences in exercise programs are necessary.
Potential mechanism of exercise for Alzheimer's disease
First, we ranked the top 10 references in terms of the number of citations to identify references that may be important in exploring the frontier knowledge base of research. As indicated in Table 3, the paper titled "Effect of physical activity on cognitive function in older adults at risk for Alzheimer's disease: a randomized trial," published by Lautenschlager et al. (2008) in the Journal of the American Medical Association (IF = 56.274) in 2008, was cited 1,033 times. This paper reports the first randomized controlled trial to determine whether physical activity reduces the incidence of cognitive decline among a high-risk elderly population. The final results suggested that 6 months of physical activity improved the cognitive performance of AD subjects during the follow-up period of 18 months. This study laid the foundation for subsequent related research. Among the top ten papers by average citations per year, "Combined adult neurogenesis and BDNF mimic exercise effects on cognition in an Alzheimer's mouse model" was published in Science (IF = 47.728) in 2018 (Choi et al., 2018) and "Exercise-linked FNDC5/irisin rescues synaptic plasticity and memory defects in Alzheimer's models" was published in Nature Medicine (IF = 53.44) in 2019 (Lourenco et al., 2019). Both papers focused on and explained the neuroprotective effects of exercise on AD from the perspectives of irisin, neurogenesis, and BDNF. In summary, we determined that the research trend of exercise and AD in recent years has shifted from discussing the influence of exercise on cognitive function to exploring the mechanism by which exercise improves cognitive function in AD patients. The current trend focuses on the neuroprotective effect of exercise.
From the keyword burst chart (Figure 7), the keyword with the highest burst value was "amyloid beta" (burst strength 10.03), and the burst has continued since 2018. This finding suggests that the clearance of amyloid beta by exercise may be the research front of mechanism research on exercise intervention in AD. The presence of neurotoxic amyloid plaques, which Aβ forms as a result of a pathological cascade reaction, is considered the gold standard for AD neuropathological diagnosis (Liang et al., 2022). The abnormal accumulation of extracellular Aβ causes evident neurotoxicity, which can induce brain inflammation, mitochondrial dysfunction, oxidative stress-induced microglia activation, and other toxic side effects. This condition exacerbates neuronal loss and promotes the development of AD (Kinney et al., 2018). Therefore, removing the excessive accumulation of Aβ is an important avenue in the treatment of AD. Exercise is considered an effective way to prevent and treat AD; such effectiveness may be related to the capability of exercise to participate in the clearance of the excessive accumulation of Aβ in the brain (Radak et al., 2010; Aczel et al., 2022). Adlard et al. (2005) published the study titled "Voluntary exercise decreases amyloid load in a transgenic model of Alzheimer's disease" in the Journal of Neuroscience, possibly the earliest study on the clearance of Aβ by exercise. The authors used TgCRND8 mice as animal models to observe the interaction between 5 months of voluntary exercise and the AD cascade. Exercise caused a reduction of extracellular hippocampal Aβ plaques, and this result was related to the reduction of cortical Aβ 1-40 and Aβ 1-42. The authors believed that this mechanism is mediated by changes in amyloid precursor protein processing after short-term exercise. Recent systematic reviews have summarized the effects of involuntary chronic physical exercise on beta-amyloid protein in experimental models of AD. The results from 36 included studies showed that regular physical exercise resulted in positive changes in amyloid precursor protein processing through different signaling pathways, thus demonstrating the anti-amyloid effect of exercise (Vasconcelos-Filho et al., 2021). In addition, different studies have attempted to clarify the mechanism by which exercise reduces Aβ deposition to protect against AD from different perspectives, such as Aβ generation, Aβ transporters crossing the blood-brain barrier, autophagy, degrading enzymes, etc. However, the mechanism by which exercise reduces Aβ deposition has remained unclear until now, and it has therefore become the focus of researchers' attention.
Aβ deposition plays a key role in the progression of AD, and other pathological events (including mitochondrial dysfunction, oxidative stress, or neuroinflammation) contribute significantly to its development (Tan et al., 2021). According to the keyword centrality and cluster analysis results (Figures 5, 6), studies related to "oxidative stress" have attracted wide attention. Oxidative stress causes mitochondrial dysfunction, which is associated with the development of AD-related pathology (Yu et al., 2018). In addition, oxidative stress promotes Aβ deposition during the development of AD. The excessive accumulation of Aβ induces oxidative stress of microglia, resulting in chronic neuroinflammation and further aggravating the oxidative stress-induced nerve damage (Liang et al., 2021). Exercise is closely related to the improved antioxidant capacity of the brain and reduced oxidative stress-induced injury (Liang et al., 2021). TgF344-AD rats were used as models to observe the effect of 8 months of exercise pre-training on AD. The results showed that exercise pre-training reduced Aβ deposition and tau hyperphosphorylation, inhibited mitochondrial dynamic imbalance, and significantly inhibited oxidative stress and neuroinflammation in AD rats (Yang et al., 2022). Another study (Gholipour et al., 2022) reported the effect of high-intensity interval training on AD. The results showed that high-intensity interval training can reduce hippocampal oxidative stress and Aβ deposition, reduce neuronal damage, and improve AD symptoms. Based on the above information, exercise training can be used as a potentially effective strategy to improve the activity of antioxidant enzymes in neurons, reduce the release of mitochondrial reactive oxygen species and levels of oxidative stress and neuronal apoptosis, and ultimately delay the progression of AD.
Strength and limitations
Our study has several strengths. To the best of our knowledge, this study is the first bibliometric analysis to evaluate hotspots and frontiers in the field of exercise and AD research. Publications were searched from the SCI-E of the Web of Science. A total of 396 scholarly journals with 1,104 publications on exercise and AD research were used in our study. This research included analysis of the number of publications, citations, the h-index, subject categories of the Web of Science, collaboration analysis among countries/institutions, co-citation analysis of references/authors, and analysis of keywords.
This study still has some limitations. First, we only searched the literature in the SCI-E of the Web of Science Core Collection database, since different databases have different properties, such as citation counting and export formats. Second, English papers accounted for 98% of the included papers in our study because the Web of Science database mainly indexes papers written in English. This may have led us to overlook relevant research published in other languages. Third, some recent publications of high quality may not have received enough attention because of low citation frequency, whilst older articles have accumulated more citations. This may undermine the significance of more recently published articles. Therefore, readers should be aware that these factors may bias our results.
Conclusion
This study collected relevant literature on exercise and AD, analyzed information on major countries/regions, institutions, and core journals in this field, and summarized research hotspots and frontiers. The number of publications on exercise and AD has been increasing rapidly, especially in the past 10 years. Most of these publications are associated with neurosciences, geriatrics, and gerontology, but they also involve sports science, psychology, behavioral science, and rehabilitation. From this perspective, enhanced inter-agency and interdisciplinary cooperation is essential for the progress and development of this scientific field. The countries in America and Europe, especially the United States, dominate in terms of publication and research collaboration on exercise rehabilitation of AD. Asian countries need to actively seek international cooperation to enhance their global influence for the further development of this field. "Amyloid beta," "oxidative stress," and "exercise program" are considered the current research hotspots and frontiers in this field. Although the included studies contribute to the understanding of the underlying pathways of exercise on AD, the mechanism remains unclear. In addition, no study has compared the effects of various intensities, frequencies, and durations on the related mechanism. Further studies evaluating differences in exercise programs are necessary.
Author contributions
JG and BC contributed to the conception and design of the study and revised the manuscript. YF, BC, and WZ collected and analyzed the data. BC, JG, YF, and GS wrote the manuscript. All authors read and approved the final version of the manuscript.
Funding
This work was supported by the Natural Science Foundation of Jiangsu Province (grant no. BK20210907) and Research Foundation for Talented Scholars of Xuzhou Medical University (grant no. D2020056). | 2022-09-07T13:38:04.743Z | 2022-09-07T00:00:00.000 | {
"year": 2022,
"sha1": "30f63fd4498acaadb7e06eb877454cd8932f6994",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "30f63fd4498acaadb7e06eb877454cd8932f6994",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237772916 | pes2o/s2orc | v3-fos-license | A Fractional Grey Multivariable Model for Modeling Fresh Graduates’ Career Choice
Aiming at exploring the effect of four factors on fresh graduates' three popular career choices of continuing studying, working in state-owned enterprises, and working in private enterprises, this paper collects specific information on 3237 students and builds the GM (0, N) model. The four variables include the student's grade point average (GPA), socioeconomic status (SES), gender, and whether the student comes from an urban household. Furthermore, this paper also considers the effect of the fractional order and proposes a fractional grey model (the FGM (0, N) model) to enhance the performance of the traditional model. Eventually, the study finds that there are still some students with financial problems, which negatively affects their choices of continuing studying and working in state-owned enterprises. Additionally, the other three factors all show a positive influence on the three choices. Besides, GPA shows the most positive effect on the choices of continuing studying and working in a state-owned enterprise; gender and SES have the greatest impact on the choice of working in a private enterprise.
Introduction
With the development of the society, the career choice of the fresh graduates has become more abundant; students can choose to continue studying, start their own businesses, or go directly to work in state-owned enterprises. At the same time, people's attention to employment tends to increase; there is now a large amount of literature that studies what factors affect people's career choices. Sehagl and Nasim [1] proposed that technology management skills and communication skills played an important role in choosing jobs. Gokuladas [2] focused on studying the factors influencing the first-career choice of undergraduate engineers and found that students from urban areas were more likely to be driven by intrinsic factors, while those from rural/semiurban were more likely to be influenced either by extrinsic or interpersonal reasons. Koch et al. [3] proposed that interest was also an important factor, and the high school counselor was the least influential person with respect to students' choice of careers in construction management. Mead et al. [4], Myburgh [5], and Greenman [6] made the conclusion that there were significant differences among racial/ethnic groups in factors that appeared to influence their career paths. Educational level of parents had also been proved to be an influential factor [7]. While in some papers, firsthand information sources (e.g., the work experience and personal experiences) were more influential than secondhand sources (e.g., class materials and faculty) [8]. Moreover, some people thought "students' self-efficacy and occupational aspiration" were the most important factor, followed by "tradition and cultural value," "career guidance," "support from parents," and "external consultation" [9].
In summary, there are many studies exploring the influence of factors on fresh graduates' career choices, and governments can draw on them to provide the most suitable assistance to different students. However, determining students' needs and providing suitable help remains complicated. Aiming at providing some reference for this problem, this paper selects four variables directly related to students and explores the effect of these four factors on fresh graduates' career choices. These factors include the student's grade point average (GPA), gender, family socioeconomic status (SES), and whether the student comes from an urban household or not. Similarly, after summarizing the development directions of graduates, this paper mainly selects three choices: continuing studying (Choice-1), working in private enterprises (Choice-2), and working in state-owned enterprises (Choice-3).
To finish the task, this paper builds the GM (0, N) model, for two reasons. The first is that grey system theory has been widely used in various fields such as natural science, social science, and engineering science [10], and many papers attest to the good performance of grey models. The second is that, based on the results of the GM (0, N) model, this paper can obtain the effect of the factors. Besides, due to the requirement of high accuracy, fractional orders have been added to this model, which have been proved to increase the performance of such models. The purposes of this paper are as follows: Firstly, a statistical description analysis is performed to obtain the most popular career choices of fresh graduates and specific information on students' family socioeconomic status. Secondly, this paper proposes a fractional grey model (the FGM (0, N) model) by considering the function of the fractional order while building the GM (0, N) model. Thirdly, the FGM (0, N) model is built to explore the effect of four factors on fresh graduates' career choices. Additionally, based on the results of models with similar performance, the robustness of the results can be tested. Accordingly, some suggestions on how to set up courses for students with different performances can be made.
There are four main contributions in this paper. On one hand, there are two theoretical contributions: firstly, after obtaining the poor results of the GM (0, N) model, this paper considers the effect of the fractional order and proposes a new model, the GM (0, N) model with a fractional order (the FGM (0, N) model). After comparing the accuracy, we find that the FGM (0, N) model can help scholars make better predictions. Besides, since there are few studies exploring the effect of SES on graduates' career choices, this paper helps fill that research gap. On the other hand, this paper also has significant practical contributions in two parts: the most important one is to provide a guideline for universities to set up courses based on the results obtained in this paper. Students with suitable skills can be better prepared for the development of society, and this can also help enterprises save the resources needed to train students. The other contribution is that the results of the data description show that there are still many students with financial problems, which significantly affect their career choices. Thus, scholars and governments should pay more attention to how to provide the necessary help. The remainder of this paper is organized as follows: Section 2 introduces related work. Section 3 introduces the models used in the paper as well as the statistical description of the dataset. Section 4 shows the analysis results. Section 5 presents the conclusion and some suggestions.
Literature Review
To clearly show the summary of recent studies, this section is divided into three parts. The first part covers the main studies on the effect of various factors on students' performance, while work related to the models is presented in parts 2 and 3.
The Relationship between Students' Performance and Their Socioeconomic Status as Well as Other Factors
Until now, the relationship between fresh graduates' socioeconomic status backgrounds and their choices is yet to be understood completely. We focus on performance and graduates' choices in an urban school district to identify what role SES plays. The excessive gap between the rich and the poor is still one of the important existing problems in the world. According to data released by the World Bank, 5% of the people in the United States hold more than 60% of the country's wealth. In 2018, China's Gini coefficient for measuring the gap between the rich and the poor reached 0.474, far exceeding the international warning line of 0.4, which shows that there is a large gap between rich and poor. Economic conditions significantly affect people's decision-making; for example, the benevolence-dependability value of those of lower perceived socioeconomic status significantly affected their intertemporal choices [11]. The effect of poverty on students' achievement has also been widely studied. Recent studies showed that students from low socioeconomic status backgrounds had lower academic performance and a chronic risk of lower academic growth during early adolescence [12,13]. We recognize that factors other than student poverty may contribute to explaining variations in achievement. For example, Li et al. [14], based on a two-year longitudinal dataset of 942 middle-school students from a high-poverty district, found that emotional control had the strongest relation with GPA rather than social perceptions and academic performance.
Recent Studies on the GM (0, N) Model
Grey systems, proposed by Deng [15,16], have been widely utilized to cope with uncertain problems with poor and incomplete information [17]. There are many popular grey models, such as the Grey Verhulst model [18], the Grey Markov model [19], and so forth [20-22]. Among them, the GM (1, 1) model is the main forecasting model in grey systems. By the accumulating generation operation in the GM (1, 1) model, the random disturbance of a short sequence is weakened [23,24]. This model has been extensively used in various fields, especially in the field of energy consumption [25-28]. The abovementioned grey models are all used in time-series prediction and are not suitable for making predictions on cross-sectional data. Thus, this paper uses the GM (0, N) model, which can deal with this problem and obtain the effect of input variables on the output factor. The GM (0, N) model is a special form of the GM (1, N) model with no derivatives. These two models (the GM (0, N) model and the GM (1, N) model) are typical multivariable forecast models in grey system theory [29]. Kung and Wen [30] successfully used the GM (0, N) model to analyze several variables of firm attributes. Tian et al. [31] proposed a novel GM (0, N) model to solve the problem of cost forecasting for commercial aircraft. Due to the successful applications of the GM (0, N) model in exploring the influence of variables in previous papers, this study also takes advantage of this method to obtain the effect of several factors on fresh graduates' career choices.
Application of the Fractional Order.
Fractional calculus has been used in various fields of science, engineering, applied mathematics, and economics [32]. Similarly, numerous studies on fractional grey models have been performed in recent years. Previous GM (1, 1) models were based on first-order accumulation techniques, which revealed only partial memories and lacked the potential to represent overall memories fairly [33]. The fractional model has an accumulated generating order that can effectively manifest the nonlinear characteristics of real systems [34]. Due to the positive effect of the fractional order, scholars have paid more attention to exploring the possibility of combining traditional models with the fractional order [35-37], and most results show that the performance of the fractional models is better.
Based on the above summary, it is not difficult to reach the following conclusions. Firstly, fresh graduates' career choices can be affected by many variables. However, SES, one of the main influential factors for students' performance, has not been widely used in forecasting students' career choices. Therefore, it is reasonable for us to consider the effect of SES, which is also one of the contributions of this paper. Secondly, many papers attest to the good performance of the GM (0, N) model in prediction and to the positive effect of the fractional order in enhancing model performance. Thus, this paper reasonably proposes a fractional GM (0, N) model (FGM (0, N)).
The Brief Introduction of the Classic GM (0, N) Model
Let $X_1^{(0)} = (x_1^{(0)}(1), x_1^{(0)}(2), \ldots, x_1^{(0)}(n))$ be the data sequence of the system behavior characteristic and $X_j^{(0)} = (x_j^{(0)}(1), x_j^{(0)}(2), \ldots, x_j^{(0)}(n))$, $j = 2, 3, \ldots, N$, be the sequences of related factors. Let $X_i^{(1)} = (x_i^{(1)}(1), \ldots, x_i^{(1)}(n))$ with $x_i^{(1)}(k) = \sum_{m=1}^{k} x_i^{(0)}(m)$, which is the 1-AGO sequence of $X_i^{(0)}$. Then
$$x_1^{(1)}(k) = \sum_{j=2}^{N} b_j x_j^{(1)}(k) + a,$$
which is the basic form of the GM (0, N) model. The GM (0, N) model has similarities with multiple regression, but there is a fundamental difference: the GM (0, N) model generates the 1-AGO series by accumulation of the original data. Let $X_i^{(0)}$ and $X_i^{(1)}$, $i = 1, 2, \ldots, N$, be described as in Definition 1; the input matrix and the output vector of the model are, respectively,
$$B = \begin{bmatrix} x_2^{(1)}(1) & \cdots & x_N^{(1)}(1) & 1 \\ \vdots & \ddots & \vdots & \vdots \\ x_2^{(1)}(n) & \cdots & x_N^{(1)}(n) & 1 \end{bmatrix}, \qquad Y = \begin{bmatrix} x_1^{(1)}(1) \\ \vdots \\ x_1^{(1)}(n) \end{bmatrix}.$$
Let the parameters be listed as $\hat{a} = [b_2, b_3, \ldots, b_N, a]^T$; the equation form of the GM (0, N) model is $Y = B\hat{a}$, and the least squares estimate of the model is
$$\hat{a} = (B^T B)^{-1} B^T Y.$$
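As a concrete illustration, the following is a minimal sketch of GM (0, N) estimation, assuming the reconstructed formulation above (1-AGO accumulation followed by ordinary least squares). The function and variable names are illustrative and not taken from the paper.

```python
# A minimal GM(0, N) estimation sketch: 1-AGO accumulation + least squares.
import numpy as np

def ago(x):
    """First-order accumulated generating operation (1-AGO)."""
    return np.cumsum(x, axis=0)

def fit_gm0n(X0, y0):
    """Estimate the GM(0, N) parameters [b2, ..., bN, a].

    X0 : (n, N-1) array of related-factor sequences (one column per factor).
    y0 : (n,) system behavior characteristic sequence.
    """
    X1 = ago(X0)                                  # accumulate each factor
    y1 = ago(y0)                                  # accumulate the output
    B = np.hstack([X1, np.ones((len(y0), 1))])    # input matrix with intercept
    params, *_ = np.linalg.lstsq(B, y1, rcond=None)
    return params                                 # = (B^T B)^{-1} B^T Y
```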
Introduction to the FGM (0, N) Model.
The fractional-order accumulation generation method and prediction model have been proved to be an effective way to improve the accuracy of grey models [33].
Therefore, a novel model, the FGM (0, N) model, is proposed in this paper to reduce the prediction error. Let $X_1^{(0)}$ be the data sequence of the system behavior characteristic and $X_j^{(0)}$, $j = 2, 3, \ldots, N$, be the sequences of related factors.
Let $i = 1, 2, \ldots, N$ and $k = 1, 2, \ldots, n$. The $r$ ($0 < r < 1$) order-accumulated generating operator (r-AGO) is defined by
$$x_i^{(r)}(k) = \sum_{m=1}^{k} \binom{k-m+r-1}{k-m} x_i^{(0)}(m),$$
which yields the $r$-order accumulated generation sequence $X_i^{(r)}$. Conversely, the $r$ ($0 < r < 1$) order-inverse accumulated generating operator (r-IAGO) is
$$\alpha^{(r)} x(k) = \sum_{m=0}^{k-1} (-1)^m \binom{r}{m} x(k-m),$$
which yields the $r$-order inverse accumulated generation sequence.
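The operators above can be implemented directly. The sketch below follows the standard fractional-accumulation definition from the grey-model literature (the paper's own notation is garbled in extraction), computing the generalized binomial weights via the gamma function; treat it as an assumption-laden reconstruction rather than the authors' code.

```python
# r-order accumulation (r-AGO) and its inverse (r-IAGO) via generalized
# binomial weights C(k-m+r-1, k-m) = Gamma(k-m+r) / (Gamma(r) * (k-m)!),
# valid for non-integer r in (0, 1).
from math import gamma
import numpy as np

def frac_ago(x, r):
    """Apply the r-order accumulated generating operator to sequence x."""
    n = len(x)
    out = np.zeros(n)
    for k in range(n):
        for m in range(k + 1):
            w = gamma(k - m + r) / (gamma(r) * gamma(k - m + 1))
            out[k] += w * x[m]
    return out

def frac_iago(x, r):
    """Inverse operator: frac_iago(frac_ago(x, r), r) recovers x.

    Passing -r into the same weight formula gives the alternating
    binomial weights (-1)^j * C(r, j) of the r-IAGO.
    """
    return frac_ago(x, -r)
```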
Let $X_1^{(r)}$ and $X_j^{(r)}$, $j = 2, 3, \ldots, N$, be described as above. Then
$$x_1^{(r)}(k) = \sum_{j=2}^{N} b_j x_j^{(r)}(k) + a,$$
which is the basic form of the FGM (0, N) model. The least squares method is used to estimate the parameters, $\hat{a} = [b_2, b_3, \ldots, b_N, a]^T = (B^T B)^{-1} B^T Y$, where $B$ and $Y$ are constructed from the $r$-order accumulated sequences in the same way as for the GM (0, N) model.
Modeling Steps of the FGM (0, N) Model
Step 1: determine the system behavior characteristic data sequence $X_1^{(0)}$ and the related factor sequences $X_j^{(0)}$, $j = 2, 3, \ldots, N$.
Step 2: calculate and generate, from the system behavior characteristic data sequence and the related factor sequences, the $r$ ($0 < r < 1$) order accumulated generation sequences $X_i^{(r)}$, $i = 1, 2, \ldots, N$.
Step 3: establish the FGM (0, N) model from the sequences generated by $r$ ($0 < r < 1$) order accumulation.
Step 4: use the least squares method to estimate the parameters $\hat{a} = [b_2, b_3, \ldots, b_N, a]^T$.
Step 5: realize the prediction of the sequence data according to formula (11).
Step 6: obtain the final data $\hat{X}_1^{(0)}$ by using the r-IAGO inverse accumulated generating operator to restore the predicted data.
Besides, the accuracy in this article is measured by the ratio of the number of correct predictions to the total number, as shown in the sketch below.
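Wiring the steps together, a hedged end-to-end sketch might look as follows. It reuses the frac_ago/frac_iago helpers sketched above, fits the parameters by least squares on the r-accumulated series, restores predictions with the inverse operator, and scores binary accuracy as the ratio of correct predictions to the total; the 0.5 threshold matches the rule described later in Section 4.4, and the data shapes are illustrative assumptions.

```python
# End-to-end FGM(0, N) sketch (reuses frac_ago / frac_iago defined above).
import numpy as np

def fit_fgm0n(X0, y0, r):
    """Estimate [b2, ..., bN, a] on the r-order accumulated sequences."""
    Xr = np.column_stack([frac_ago(X0[:, j], r) for j in range(X0.shape[1])])
    yr = frac_ago(y0, r)
    B = np.hstack([Xr, np.ones((len(y0), 1))])
    params, *_ = np.linalg.lstsq(B, yr, rcond=None)
    return params

def predict_fgm0n(X0, params, r):
    """Predict on the accumulated scale, then restore with the r-IAGO."""
    Xr = np.column_stack([frac_ago(X0[:, j], r) for j in range(X0.shape[1])])
    yr_hat = Xr @ params[:-1] + params[-1]
    return frac_iago(yr_hat, r)

def accuracy(y_true, y_prob, threshold=0.5):
    """Share of correct binary predictions after thresholding."""
    return float(np.mean((y_prob > threshold).astype(int) == y_true))
```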
Explore the Effect of Factors on Fresh Graduate's Career Choice: An Application of the FGM (0, N) Model
In this section, two models (the GM (0, N) model and the FGM (0, N) model) are used to forecast graduates' career choices and identify the influential factors. The career choices contain three parts: continuing studying, working in state-owned enterprises, and working in private enterprises. Besides, this paper selects only four input variables: GPA, gender, SES, and whether the student comes from an urban household or not. In order to compare the results of the models, this paper sets the ratio of the training set to the test set at 8 : 2.
The Statistical Description of the Dataset.
This article takes 3237 fresh graduates as the research object. It can be seen from Figure 1 that more than 60% of students still regard direct employment as their first choice after graduation and tend to sign employment agreements to protect their rights and interests. As people's living standards improve, the number of students studying abroad has also increased; they account for 14% of the total and even exceed the number of people who choose to study for a postgraduate degree in China. This may be because the overseas postgraduate education system is shorter than that in China, and studying abroad avoids being forced to work directly after failing the postgraduate entrance examination in China. In addition, only 2% of the students did not find a job and wait for work at home, which shows that China has solved the problem of high unemployment well at this stage.
This article also counts the number of students who have declared economic problems. As shown in Figure 2, 89% of the students have no family financial problems, while 11% of the students have poor family financial status; the family economic status of 7% of students is defined as general poverty, while that of 4% is extremely poor. In other words, more than 10% of the families still have financial problems, and nearly 5% of the families have serious financial problems. This also reflects the large gap between the rich and the poor that has emerged in the country at this stage. The explanation and statistical description of all variables are shown in Tables 1 and 2.
Influential Factors for Fresh Graduates' Career Choice of Continuing Studying
In this part, we collected specific information on 3237 students in total (male: 33% and female: 67%), with an average GPA of 3.73. In addition, this paper sets the output value of the 26.6% of the dataset who chose continuing studying as 1 and the others as 0.
It is not difficult to see from Figure 3 that the accuracies of the GM (0, N) model are 53.9% and 56.57% for the training set and testing set, respectively. Besides, as the fractional order changes, the performance of the model changes considerably. Although the training-set accuracy of the models with fractional orders ranging from 0.2 to 0.9 is all over 50%, there is much fluctuation in the testing-set accuracy, and the testing-set accuracy is smaller than that of the traditional model.
However, if we set the fractional order to 0.1, the training-set and testing-set accuracies are 73.86% and 74.81%, respectively, which means that the performance of this model is much better than that of the traditional one. We therefore choose this model to identify the influential factors.
The results are shown in Table 3. If we change the value of r, the specific parameters and equation will also change.
We can draw the following conclusions: firstly, GPA shows the most positive effect, followed by town (urban residence). This means graduates with a higher GPA and from an urban household are more likely to continue studying. Secondly, SES shows a negative influence; the reason may be that graduates with worse SES choose to work in order to ease their family's financial stress.
Influential Factors for Fresh Graduates' Career Choice of Working in State-Owned Enterprises.
This section selects the whole dataset except the graduates who chose continuing studying. Eventually, we obtain specific information on 2374 graduates (male: 35% and female: 65%), with an average GPA of 3.64. Similarly, this paper sets the output value of the 16.6% of this dataset who chose working in state-owned enterprises as 1 and the others as 0.
Based on the information shown in Figure 4, we find that, in the training set, there are five models with an accuracy of 82%. However, after considering the results of the testing set, we find that the performance of two fractional models (r = 0.1 and 0.2) and the traditional model is much better than that of the others. Due to the similar performance of these three models, we use all three to explore the effect of factors on the graduates' choice of working in state-owned enterprises.
As we can see in Table 4, the results obtained from the GM (0, N) model are similar to those of the other two fractional models, which indicates robustness. There are also two conclusions: on one hand, gender and GPA contribute positively to the choice of working in state-owned enterprises, which means males with a higher GPA are more likely to make this choice. On the other hand, the effect of SES on this choice is mixed.
Influential Factors for Fresh Graduates' Career Choice of Working in Private Enterprises.
The dataset used in this section is similar to that in Section 4.2. The only difference is that the output variable in this section is whether the student chooses to work in private enterprises. After calculation, nearly 62% of the students take this action, and we set their output value as 1.
Based on the information shown in Figure 5, we find that the training-set accuracy of all the FGM (0, N) models is higher than that of the traditional model. Among them, the model with fractional order r = 0.1 has the best performance, while the testing-set accuracy of all the FGM (0, N) models is 59.68%. This may be caused by the fact that we set predictions above 0.5 to 1 and predictions below 0.5 to 0, and most of the predictions are around 0.5. Comprehensively comparing the performance of the models, this paper sets a training-set accuracy of 58.5% as the standard and chooses the results of the fractional models with fractional orders ranging from 0.1 to 0.4 for the following analysis. The results, as shown in Table 5, indicate that all four factors show a positive effect on this behavior. Among these four factors, SES and gender are the most influential, followed by GPA. After comparing the results from the fractional models with fractional orders ranging from 0.1 to 0.4, we conclude that the robustness of the results has been confirmed.
Conclusion and Suggestions
In order to identify students' needs and provide suitable help, this paper builds GM (0, N) models to forecast graduates' career choices. The career choices contain three parts: continuing studying, working in state-owned enterprises, and working in private enterprises. GPA, SES, gender, and whether the student comes from an urban household or not are the four input variables. More importantly, in order to increase the accuracy of the models, we combine the traditional GM (0, N) model with the fractional order for the first time and propose the FGM (0, N) model. However, we are surprised to find that the accuracy of some models is only around 60%, whereas the accuracy of the fractional models in previous studies exceeds 90%. After reviewing related studies, we consider that the most likely reason is the effect of COVID-19: the epidemic has had an impact on the career choices of some fresh graduates, and the effect may also be influenced by other factors. Thus, the accuracy of some models remains low.
From the above analysis, we can draw the following conclusions. Firstly, after the statistical description of the data, we find that most graduates are more likely to work instead of continuing studying, and working in private enterprises is their first choice. Besides, there are still many students with financial problems, which may strongly affect their behavior. Secondly, based on the empirical studies, we propose that, in most cases, the performance of the FGM (0, N) models is better than that of the traditional one. However, in some cases, the accuracy of the GM (0, N) model is also very good, even higher than that of the fractional ones. Thirdly, GPA, gender, and town show a positive effect on all three choices. Among these three factors, GPA is the most influential for the choices of continuing studying and working in state-owned enterprises. Fourthly, the effect of SES changes when forecasting graduates' different choices: when forecasting the choices of continuing studying and working in state-owned enterprises, SES shows a negative effect, whereas SES makes a diverse contribution when forecasting the choice of working in private enterprises. Thus, based on the above conclusions, we propose the following suggestions: for male students with a higher GPA and from an urban household, schools should set up more theoretical courses and some modules on how to be better suited to work in state-owned enterprises. For students with poor family socioeconomic status, setting up more practical courses would be better.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2021-09-28T01:09:52.735Z | 2021-07-06T00:00:00.000 | {
"year": 2021,
"sha1": "833955a46c62cb9a21fc3a49f4bf12e0b5d53450",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2021/8237600",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "bb0e3cafebfaa6f5225c684d329d1bfabb298315",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
153325546 | pes2o/s2orc | v3-fos-license | The Aggregation Problem in the Employment Theory : The Representative Individual Model or Individual Employees Model ?
Employment theory lacks a consensus concerning whether employment variation should be expressed as a change in the hours worked by a representative individual or as a change in the population of employed individuals. By applying the OLG model developed by Lucas [1] and Otaki ([2-4]), the present article describes a serious theoretical consequence of this distinction. The crucial factors that differentiate employment theories are the intertemporal substitution effect and the indivisibility of the labor force. Monetary expansion increases the rate of return on money if it is credible in the sense of Otaki [5]. This increases the hours worked in the representative individual model, and thus aggregate supply causes demand. Conversely, in the indivisible employees model, such an intertemporal substitution effect does not exist. The monetary expansion directly improves the purchasing power of money and thereby increases the aggregate demand for goods by the older generation. Thus, demand drives supply.
Introduction
Independent of whether researchers adopt neoclassical or new Keynesian economic models, recent employment theories have rested on the assumption of a representative individual. However, it is important to note that the hours worked by a representative individual differ crucially from those of indivisible employees who each work an equal amount of time. In this paper, we show that this distinction has serious theoretical consequences.
The crucial factor is the existence of the intertemporal substitution effect. In the representative model, an expansion of money raises the rate of return as long as money is credible, and this stimulates the labor supply. Hence, apart from spurious differences, both neoclassical and new Keynesian models seek the cause of employment variation in supply-side incentives.
In contrast, there is no such substitution effect in the indivisible employees model.¹ A monetary expansion directly increases the purchasing power of money, as Otaki [4] shows, even if the money-supply rule follows that of Lucas [1], as long as money is credible. It also implies that the monetary expansion increases aggregate demand, which in turn increases real GDP. That is, demand causes the corresponding supply, as Keynes [6] observed.
The rest of the paper is organized as follows. Section 2 constructs alternative models of the employment theory. Section 3 contains brief concluding remarks.
The Structure of the Model
We consider a standard two-period deterministic OLG model in a production economy. In every period, a unit measure of individuals is born. They can work only when they are young. A unit working hour produces unit goods.
The money supply obeys Lucas's [1] rule. That is,
$$m_t = x \cdot m_{t-1},$$
where $m_{t-1}$ is the nominal money stock per capita that is carried over from the previous period and $x$ is the gross rate of increase of money. In this sense, new money is supplied as its own nominal interest rate.
Footnote 1: Although we can principally separate the adjustment of hours worked from that of the employment level (see Fukao and Otaki [7]), doing so requires far more complex dynamics, which are not essential to our discussion. Furthermore, if there is no fixed sunk cost for being employed, every firm uniformly offers the minimal hours worked, because the increasing marginal disutility of labor requires higher wages as compensation.
We make the following alternative assumptions concerning the labor supply: 1) in the representative individual model, the representative individual can choose his working hours, and there is no unemployment problem; 2) in the indivisible employees model, each individual faces the discrete choice of whether to work.
The Definition of Equilibrium
For simplicity, we assume that the representative individual possesses the following utility function $U^R$:
$$U^R \equiv u(c_1^t, c_2^{t+1}) - v(h_t),$$
where $u(\cdot, \cdot)$ is a well-behaved linear homogeneous function, $c_1^t$ and $c_2^{t+1}$ denote the consumption levels of generation $t$ during the young and old stages of life, respectively, and $h_t$ is the hours worked. The shape of the disutility function $v(h)$ is illustrated in Figure 1.
The assumption that some lower limit exists for the disutility of labor is equivalent to the assumption that individuals do not incur any additional disutility from increasing their hours worked to some extent, as the classical economists presume. The economic meaning of this assumption is that there is an urgent need to produce goods corresponding to the subsistence level, as shown below.
Given the lifetime budget constraint, and since the lifetime utility function concerning the consumption stream is concave and homothetic, we obtain the corresponding indirect utility function. Moreover, we can ascertain that, as long as the relevant parameter is sufficiently small, the equilibrium hours worked always exceed the subsistence lower limit $\underline{h}$, and that the problem of the indivisibility of hours worked never appears in the decision problem. The optimality conditions are given by Equations (5)-(7). We assume, according to Lucas [1], that leisure and current consumption are not inferior goods; Equation (6) has a direct implication used in the comparative statics below. In addition to the three optimality conditions, there is one independent market equilibrium condition: here, we consider the condition for the money market equilibrium, Equation (8). Furthermore, we assume the credibility of money in the sense of Otaki.² There are five endogenous variables, $c_1^t$, $c_2^{t+1}$, $h_t$, $p_t$, and $p_{t+1}^e$, and five independent Equations (5)-(9). Hence, the model is closed, and the solution constitutes a temporary rational expectation equilibrium.
Footnote 2: The concept of the credibility of money is a device used to select a unique rational expectation equilibrium (REE) from among the multiple REEs that are generic to the OLG model of the monetary economy. Credibility economically means that people rationally believe that the intrinsic value of money is kept intact even if the velocity of monetary acceleration is changed.
Comparative Statics
From Equations (8) and (9), it is clear that $c_2^{t+1}$ increases with the nominal interest rate of money $x$. Equations (5) and (6) imply that $c_2^{t+1}$ is a monotonically decreasing function of the effective inflation rate (the inverse of the real interest rate), which decreases as $x$ increases. It is also apparent from Equation (5) that the equilibrium hours worked $h_t$ increase with $x$.³ To summarize, as long as money is credible, an easy monetary policy raises the real interest rate, and hence the representative individual works more to enjoy more future consumption. Accordingly, a monetary expansion advances intertemporal substitution from current consumption and leisure into future consumption by raising the real rate of interest. As such, the expansionary effect of monetary policy is based entirely on the labor supply incentive, not on the expansion of aggregate demand. In this sense, the representative individual model is inevitably classified as a neoclassical macroeconomic model.
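To illustrate this mechanism numerically, the following is a small sketch, not taken from the paper: it assumes a CES consumption utility (linearly homogeneous, as the model requires), a quadratic labor disutility v(h) = h^2, a unit wage (one working hour produces one good), and the budget c1 + c2/R = h, where R stands in for the gross real return on money. Under these assumed functional forms, optimal hours worked rise with R, mirroring the intertemporal substitution effect described above.

```python
# Stylized representative-individual problem: max u(c1, c2) - h**2
# s.t. c1 + c2 / R = h, with CES u(c1, c2) = (c1**t + beta*c2**t)**(1/t).
# All functional forms and parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def optimal_hours(R, beta=0.95, t=0.5):
    def neg_utility(z):
        c1, c2, h = z
        u = (c1**t + beta * c2**t) ** (1.0 / t)
        return -(u - h**2)
    budget = {"type": "eq", "fun": lambda z: z[0] + z[1] / R - z[2]}
    res = minimize(neg_utility, x0=[0.3, 0.3, 0.8],
                   constraints=[budget], bounds=[(1e-6, None)] * 3)
    return res.x[2]

for R in (1.0, 1.1, 1.2, 1.3):
    print(f"gross return R = {R:.1f} -> hours worked h* = {optimal_hours(R):.3f}")
```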
The Time-Independence of the Model
Assume that the representative individual rationally expects the real effective inflation rate to be constant. That is, future consumption $c_2^{t+1}$ becomes time-independent. Hence, from Equations (5)-(7), $c_1^t$ and $h_t$ are also time-independent. Consequently, the rational expectation equilibrium characterized by the initial condition $x$ and the expectation formulations (9) and (10) is stationary.
The Indivisible Employees Model
Here, we assume that labor supply is indivisible and that each individual has the identical utility function $U^I$, which combines the same consumption utility function $u(\cdot, \cdot)$ as in Equation (1) with a fixed disutility incurred when working; $u_t$ denotes an indicator function that takes the value unity when the individual works unit time and zero when the individual does not work.
According to Equation (1), the minimal nominal revenue at which individuals decide to work can be derived. We must note that firms strictly prefer increasing employment to upwardly adjusting working hours per capita in any interior equilibrium in which unemployment exists and all individuals are indifferent to the decision of whether to work.
The reason is as follows. Even if employment increases by one unit, as long as working hours per capita are fixed, there is no appreciation of nominal wages. However, the increasing marginal disutility of labor requires nominal wages higher than the prevailing level to induce the additional working hours that would produce the same amount of output as an employment adjustment. Thus, as long as unemployment exists, working hours per capita are fixed at the minimal level. Accordingly, we obtain a difference equation concerning the evolution of the price sequence. Hence, the equilibrium real interest rate is independent of $x$ and takes a constant value. The equilibrium condition for the money market then closes the model, where $s$ denotes the marginal propensity to save.
Assuming the credibility of money, an increase in the monetary growth rate $x$ increases the current value of money and strengthens the purchasing power of old individuals, as long as money is a credible asset. As such, the monetary expansion stimulates the economy through the multiplier effect developed by Otaki [2].
Since the equilibrium price level is time-independent, so is the real equilibrium GDP $y$.
In addition, as we previously mentioned, the indivisible employees model is similar to Keynes' [6] demand-driven economy.
Concluding Remarks
This article analyzed how the aggregation problem affects the resulting employment theory. First, because of the intertemporal substitution between goods and leisure, a change in working hours in the representative individual model is supply-side oriented even if money is credible and non-neutral. Furthermore, since it does not contain the concept of the indivisibility of labor, this model cannot explain why unemployment occurs, although it can spuriously trace the output movement. An acceleration in monetary growth increases the real interest on money, and thus intertemporal substitution occurs from leisure and current consumption to future consumption. Second, the indivisible employees model possesses the demand-driven property deepened by Keynes [6]. Since working hours per capita are endogenously fixed, whenever money is credible, money comes to be highly valued, and the acceleration of monetary growth raises the purchasing power of the old generation. As such, effective demand expands, and real GDP increases via the multiplier effect.
In summation, Keynes' [6] economics can be characterized by the following two factors. The first is the credibility of fiat money, whose intrinsic value is basically indeterminate. The second is the specificity of labor as a commodity, namely, the indivisibility of labor. | 2019-05-15T14:34:02.376Z | 2012-12-19T00:00:00.000 | {
"year": 2012,
"sha1": "5ba541079b5db3fd742be5952722fd4f59dfa92e",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=25920",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5ba541079b5db3fd742be5952722fd4f59dfa92e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
219282973 | pes2o/s2orc | v3-fos-license | Complementary Feeding Practices And Its Economic And Social Impact: A Cross Sectional Hospital Based Study
Introduction: According to NFHS-4 data, around 38% of under-five Indian children are malnourished and stunted. In addition to poor socio-economic status, faulty complementary feeding practice is a major contributor to this. The objective of this study is to know the prevailing complementary feeding practices in our area, the most common food type preferred at initiation of complementary feeds, the knowledge of mothers and their family members regarding complementary feeding, the factors influencing decision making, and its financial burden on the family. Methods: This hospital based cross-sectional study was conducted in a private Medical College in Bhubaneswar, Odisha, India. 256 mothers of infants between six months to two years attending the Paediatric OPD from December 2018 to June 2019 were selected by random sampling technique. Data were collected using a structured questionnaire. Results: Out of the total 256 subjects interviewed, 134 (76.13%) out of 176 families belonging to the lower income group preferred commercially available processed food over home food as the initial weaning food, as compared to 32 (40%) out of 80 of the high income group preferring the same. The lower income group spent 22.3% of the total family income on commercial preparations to feed their infants in the age group six to 12 months, whereas high income group families spent an average of 14.3% of family income on baby food products in the same age group. Despite being in regular contact with the local physician, in 85% of the total visits to the doctor the opportunity was not utilised to counsel the family members about complementary feeding practices. Conclusions: Commercial preparations are the primary preferred weaning food. The dietary diversity of complementary food is very poor, thus affecting growth and development. The false perception that commercial preparations are critical to child growth and development is overburdening family finances.
INTRODUCTION
Infant and young child feeding practices directly affect the nutritional status of children under two years of age and ultimately impact child survival. Improving infant and young child feeding practices in children zero to 23 months of age is therefore critical to improved nutrition, health and development of children. 1,2 Evidence indicates that inappropriate complementary feeding practices, such as untimely introduction of complementary food, improper feeding frequency and low dietary diversity of food, have numerous negative effects on children's health. 3 In India, 38% of children under age five years are stunted. This is a sign of chronic undernutrition. Twenty-one percent of children under age five years are wasted, which is a sign of acute undernutrition, while 36% of children under age five years are underweight. 4 Twenty percent of breastfed children had an adequately diverse diet, since they had been given foods from the appropriate number of food groups, while 31% had been fed the minimum number of times appropriate for their age. 4 The feeding practices of only 9% of breastfed children aged six to 23 months meet the minimum standards for all IYCF (Infant and Young Child Feeding) practices. 4 Appropriate complementary feeding depends on appropriate information and support from the family, community and healthcare system. The incidence of malnutrition rises sharply during the period from six to 18 months of age in most countries, and the deficits acquired at this age are difficult to compensate for later in childhood. The incidence of malnutrition rising after six months of age indicates the importance of appropriate complementary feeding for the future growth of the child. 5 Inappropriate feeding practices and their consequences are major obstacles to sustainable socioeconomic development and poverty reduction. Governments will be unsuccessful in their efforts to accelerate economic development in any significant long-term sense until optimal child growth and development, especially through appropriate feeding practices, are ensured. 6 In our ward we observed that most families of the admitted infants were giving commercially available processed cereals along with formula milk as the main complementary food. Their perception was that formula milk is an integral part of complementary food along with processed cereals. Very few families were giving home based food like rice, wheat and dal. They believed that rice and wheat (chapati) cannot be digested by the baby between six to 12 months of age, so these need to be given only after one year of age.
Inadequate knowledge about appropriate food and feeding practices is often a greater determinant of malnutrition than the lack of food. In 2003, Piwoz et al. suggested that globally, complementary feeding has not received adequate attention with regard to infant and young child feeding. Often, complementary feeding was not sufficiently addressed and the main objective has been the promotion, protection and support of breastfeeding. 7 A focus on Infant and Young Child Feeding (IYCF) is second to management of malnutrition in terms of numbers of lives saved. 8 Older infants from six months are most vulnerable to malnutrition and growth faltering during the transition period from a milk diet to a diet that includes complementary food. 9
METHODS
This hospital based cross-sectional study was conducted in the Department of Paediatrics of a tertiary care teaching hospital in Bhubaneswar, Odisha, India between December 2018 and June 2019. Assuming that 20% of breastfed babies (six months to two years) are given food with adequate dietary diversity 4 and considering a confidence level of 95%, the desired sample size was calculated to be 246 using the OpenEpi sample size calculator. A total of 256 subjects were enrolled as study participants. Mothers of babies aged between six months and two years who visited our OPD on its weekly OPD day (Monday) during the study period were informed about the study and asked to participate. Among them, a total of 256 eligible mothers who gave their consent were included in the study. The age group of study subjects was selected based on the WHO recommendation on complementary feeding. 5 Babies requiring hospitalisation and families who refused to participate in the study were excluded. Data were collected by the authors using a structured questionnaire administered to the mothers. The questionnaire consisted of 42 items. It elicited information about demographic profile, knowledge and practice of complementary feeding and the factors influencing it, including the role of the healthcare provider in counselling regarding complementary feeding. The questionnaire was pretested and was revised to enhance its clarity and comprehension. Detailed anthropometry was done, and data on weight and length were used for calculation of nutritional status: weight-for-age, length-for-age and weight-for-length, expressed in standard deviation (SD) units (z-scores) as per the child growth standards of the WHO. 10 Economic condition was categorised as per the RRY (Rajiv Gandhi Rin Yojana), Government of India, as it is easy to reproduce and comprehend. This classification takes family income into account to determine economic status and is used by the Government of India in various social welfare programs. According to the RRY, households having an average annual income up to Rupees two lakhs are considered the low income group and those above two lakhs the higher income group. Percentages were calculated and univariate analysis of the data was done where considered necessary, using IBM SPSS version 20 (Chicago). A p value < 0.05 was considered significant.
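The reported sample size can be reproduced with the standard single-proportion formula n = Z^2 p(1 - p) / d^2, as implemented in OpenEpi. The paper states p = 0.20 and 95% confidence; the 5% absolute precision used below is our assumption, chosen because it yields the reported figure of 246.

```python
# Reproducing the sample-size calculation (OpenEpi-style single proportion).
# Assumption: absolute precision d = 0.05 (not stated in the paper, but it
# yields the reported n = 246).
import math

p = 0.20   # expected proportion with adequate dietary diversity (NFHS-4)
z = 1.96   # z-value for 95% confidence
d = 0.05   # assumed absolute precision

n = (z**2) * p * (1 - p) / d**2
print(math.ceil(n))   # -> 246
```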
RESULTS
The demographic data of the study population are depicted in Table 1. In the present study, 79% of mothers started complementary feeding at the recommended age of six months, 10% at seven to eight months, 9% between four to five months and 3% between three to four months (Figure 1).
The most preferred type of food for initiating complementary feeding was commercially available processed cereals along with boiled vegetables (76.1%); 17.6% preferred home based food (supernatant liquid of cooked dal) along with commercially available formula milk, 4.3% added only formula milk along with breast milk as the complementary diet, and 2% gave chattua (powdered flat rice mix), khichdi (rice porridge), etc. (Figure 2). The most common foods given between six and 12 months, in decreasing order of frequency, are commercially available processed cereal > supernatant of cooked dal > mashed boiled vegetables (papaya, potato, carrot) > rice flake powder > boiled apple (Figure 3). The primary sources of information regarding what to give in complementary feeds, in decreasing order of frequency, are: elderly members of the family (primarily the mother-in-law) > neighbours > local pharmacist > ANM (auxiliary nurse midwife) > doctor. The local doctors barely have any role in shaping the attitude of the mother and her family towards appropriate feeding practices.
In this study, 29% of the children were stunted, 32% were underweight and 33% were wasted. Despite the high prevalence of under-five wasting and stunting among our children, the opportunity to counsel the mother about complementary feeding during contact with a healthcare provider (doctor) was not utilised in 85% of her visits to the health care facility. In the remaining 15% of instances, despite the doctor's counselling regarding appropriate complementary feeding, it hardly had any effect on the attitude of the mother/family towards complementary feeding. Lower formal educational status was associated with a higher preference for commercial preparations as the primary complementary diet (p < 0.001).
Mothers residing in rural areas had a greater preference for commercial preparations than those in urban areas (68.6% versus 43%) (p < 0.003) (Figure 5). Among the lower income group families, an average of about 22.3% of the family income is spent on purchasing commercially available baby food products in the age group six to 12 months, and around 10% of the income is spent on the other needs of the baby (Figure 6).
DISCUSSION
According to UNICEF, the first 1000 days of life, between a woman's pregnancy and her child's second birthday, is a unique period of opportunity when the foundations for optimum health and development across the lifespan are established. According to the 2015-16 NFHS (National Family Health Survey), Government of India, the feeding practices of only nine percent of breastfed children aged six to 23 months met the minimum standards for all IYCF (Infant and Young Child Feeding) practices. In one study, diversity in the diet of the child was significantly associated with better nutritional status, especially the height-for-age index, in sub-Saharan children. 11 In our study 17% of the breastfed infants between six to 23 months met the minimum standards of IYCF practices, which is better compared to NFHS-4 data. In this study, 29% of the children were stunted, 32% were underweight and 33% were wasted, which is similar to the national average. In a study by Srivasatava G et al. 29% of the children were stunted, 32% were underweight and 33% were wasted. 12 In an interventional study of 35 parents in Delhi, only 16.5% of mothers had started complementary feeding at the recommended time. 13 A prospective interview study of 200 parents by Aggarwal et al. showed that only 17.5% of mothers had started complementary feeding at the recommended time. 2 In our study the initiation of complementary feeding at the recommended age of six months was seen in 79% of children, which is much better than in the studies mentioned. This may be because this is a recent study, conducted at a time when information about breast feeding is much better and there are more government initiatives to promote breast feeding.
In this study, low income group families and those residing in rural areas were shown to spend a larger proportion of their household income on commercially available food for their baby. The increased spending on commercial preparations was significantly associated with less formal education in mothers from the low income group and rural areas. There is no other similar study for comparison, but a study on infant feeding practices in rural Bangladesh by Owais et al. did not find an association between maternal literacy and receipt of a minimally acceptable diet at infant age nine months. 14 According to a study done in Sudan, low parental education is associated with a high prevalence of nutritional anaemia and malnutrition. 15 Significantly less involvement of local health care workers (doctors in particular) in counselling mothers on appropriate complementary feeding adds to the problem. Mistry S K et al. found that nutritional counselling had a positive role in increasing some of the optimal IYCF practices, which might have resulted in a significant reduction in stunting prevalence among children. 16 Studies from Uganda and Cameroon also reported that counselling mothers on child feeding practices is associated with overall improvement in optimum infant and child feeding practices. 17,18 In this study, the expenditure on commercial preparations was high, but the associated poor dietary diversity, together with the unsustainability of the expense involved, may be responsible for malnutrition in the long term. This unnecessary expenditure on commercial preparations can be avoided, and the nutritional needs of the babies can easily be met with locally available foods, if families are properly counselled about the importance of dietary diversity in complementary feeding.
Early introduction of more flavourful, rich and sweetened processed cereal preparations adversely affects the acceptance of other home-based foods when they are introduced at a later age. In most instances, despite contact with health professionals, doctors in particular, the opportunity to counsel on appropriate feeding practices was not utilised. There may be time constraints on the part of the doctor, but such counselling may be crucial in ensuring the health and wellbeing of our future generation.
CONCLUSIONS
There is a strong belief on the part of parents that commercial preparations are superior to home-based food for their baby. The high cost of commercial preparations makes them unsustainable, and dependence on them is a hindrance to dietary diversity in complementary feeding. In rural areas and families of the lower economic group, the use of processed cereals and formula milk is much higher than in urban areas and families with higher income. Appropriate measures need to be taken at all levels, from the government to individuals, and health care providers in particular should use all available opportunities to address the issue of nutrition in children to secure the health of our future generation.
The results obtained may not truly reflect complementary feeding practices in the general population because of the cross-sectional study design, the institution-based setting and the small sample size. Large-scale community-based studies are required to obtain more robust statistical results. | 2020-05-07T09:09:59.122Z | 2019-04-27T00:00:00.000 | {
"year": 2019,
"sha1": "7e0827c410364472013151340bf50b046857c981",
"oa_license": "CCBY",
"oa_url": "https://www.nepjol.info/index.php/JNPS/article/download/26473/23429",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c833f050699967a24104b7d51367396e1b47df09",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25477246 | pes2o/s2orc | v3-fos-license | Development of Head and Neck Squamous Cell Carcinoma Is Associated With Altered Cytokine Responsiveness
Growth of head and neck squamous cell carcinoma (HNSCC) is generally associated with an inflammatory component. It is hypothesized that these tumor cells develop mechanisms to evade the growth inhibitory effects of cytokines that are present in the tumor microenvironment. This study determined the changes in responsiveness to inflammatory cytokines that accompany the transition of normal to transformed epithelial cells. Paired primary cultures of normal epithelial cells (NEC) and SCC cells were established from 16 patients. Receptor-mediated activation of signal transducer and activator of transcription and extracellular signal-regulated kinase pathways in response to cytokine treatments was identified by immunoblot analysis. Thymidine incorporation determined the impact of the cytokines on DNA synthesis. HNNEC and HNSCC displayed prominent signaling in response to oncostatin M, interleukin-6, IFN-γ, and epidermal growth factor. Untreated HNSCC showed an elevated level of phosphorylated signal transducer and activator of transcription 3 and extracellular signal-regulated kinase (P < 0.001) compared with HNNEC, suggesting constitutively activated pathways. Moreover, HNSCC cells phosphorylated significantly more signal transducer and activator of transcription 1 in response to oncostatin M (P = 0.002) and IFN-γ (P = 0.018) treatments. DNA synthesis of SCC cells was less inhibited by cytokines produced by endotoxin-stimulated macrophages (P = 0.016) than that of NEC. Low-dose oncostatin M slightly enhanced proliferation of SCC, whereas that of NEC was suppressed (P = 0.016). This study identified significant alterations in signal transduction pathways engaged by cytokines that are associated with loss of growth inhibition of HNSCC. Increased signal transducer and activator of transcription phosphorylation, along with constitutively phosphorylated extracellular signal-regulated kinase in HNSCC, suggests that these pathways, as molecular markers, are important in the malignant transformation process and are potential targets for treatment. (Mol Cancer Res 2004;2(10):585–93)
Introduction
Growth of malignant lesions is invariably associated with the presence of inflammatory cells at the primary site (1). The involvement of inflammation is particularly evident at sites prone to infection such as those found in the head and neck region. The role that inflammation has on the function of normal tissue, including epithelial, stromal, and endothelial cells, as well as on supporting proliferation of established malignancies, is presently an active area of research (1-3). Members of the family of IFNs, tumor necrosis factors, and the hematopoietic cytokines, particularly those of the interleukin-6 (IL-6) group, have been found to be largely responsible for suppressing the proliferation of normal epithelial cells (NEC) in association with inflammation. As shown in various preclinical epithelial cell studies, the mode of action of these cytokines includes induction of cell cycle arrest, activation of differentiation, and initiation of apoptosis (4-6). Because of the preclinical promise, some of these inhibitory cytokines have been used in clinical trials as biological adjuvants in the treatment of malignancies (5-8).
More recently, the IL-6-related cytokine, oncostatin M (OSM), produced by tumor-associated macrophages and lymphocytes (9), has been noted to have particularly prominent growth inhibitory properties on established epithelial cell lines and primary cell cultures from different disease sites (10-14). However, the fact that epithelial tumors do grow in the presence of inflammation suggests that the effects of the inflammatory cytokines on those cell types may be subject to alteration during the transforming process. This notion is supported in part by the finding that the expression of inflammatory cytokines and the function of the corresponding receptors in epithelial cells are frequently modified in progressing malignancies (15, 16). The change in cytokine responsiveness is believed to result from genetic and epigenetic changes in expression of receptors, presence of autocrine factors, and deregulated activity of signaling proteins. These changes that occur during oncogenic conversion from normal to carcinoma cells have been interpreted to provide certain tumor cell types a proliferative or survival advantage (15).
OSM and other members of the IL-6 hematopoietic cytokine family function through cell surface receptors that are composed of a ligand binding subunit (α) and a common signal-transducing subunit (β), gp130 (17-19). The signal-transducing receptor subunits have members of the Janus family of tyrosine kinases associated with their cytoplasmic domains (18, 20-22). On ligand binding, signaling is initiated by activation of Janus kinases, which then phosphorylate receptor subunits and recruited substrates including the signal transducer and activator of transcription (STAT; refs. 23-25). Similarly, the IFN-γ receptor, when activated by ligand binding, preferentially phosphorylates STAT1 through the action of the receptor-associated Janus kinases (26). Activated STATs translocate to the nucleus where they mediate transcriptional induction of different genes (18). Besides STATs, all hematopoietic cytokine receptors, like most growth factor receptors, activate the mitogen-activated protein kinase/extracellular signal-regulated kinase (ERK) pathway (22, 27). The ERK pathway is considered to promote survival and/or growth stimulating effects (28, 29). The immediate activation of the Janus kinase/STAT and mitogen-activated protein kinase/ERK pathways by ligand binding is a specific and quantitative measure for the action of the cytokine receptor system in the target cells and thus qualitatively and quantitatively defines the responsiveness of cells (15).
The signaling and effects of IL-6 cytokines have been characterized mainly in established cell lines. Much less is known about the level of responsiveness of normal, nondifferentiated, and proliferation-competent human epithelial cells and the range of responsiveness among such cells from different individuals. No study has addressed the responsiveness of head and neck squamous cell carcinoma (HNSCC) to IL-6-type cytokines. Considering that HNSCC, due to their anatomic location, can be particularly susceptible to infection and inflammation, the action of inflammatory cytokines is predicted to have a potentially important role in tumor growth control. The cellular complexity of head and neck tissue, with a substantial presence of nonepithelial cell types, inevitably renders functional analyses in intact carcinoma tissues inconclusive. Thus, the approach of short-term primary cultures of epithelial cells derived from confirmed normal sites and cancer lesions was chosen. This study characterizes the responsiveness of head and neck NEC (HNNEC) to cytokines and growth factors that would be found in the local tumor environment and determines the effects of malignant transformation on cytokine signaling and the regulation of proliferation.
Patient Demographics
From 25 surgical specimens that were processed for generating primary cell cultures, 16 specimens yielded matched sets of cultures representing NEC and SCC that could be analyzed comparatively for cytokine responsiveness. The other 9 specimens did not yield cultures for both NEC and SCC; cells either failed to grow in vitro or were lost due to contamination. The demographic information on the 16 patients is given in Table 1. The following head and neck sites were represented: 6 oral cavity, 5 larynx, 3 hypopharynx, and 2 base of tongue. The patients enrolled in this study had stage I to IVa disease, and none had received any treatment prior to surgery.
Cytokine Response Pattern
The procedure to isolate and culture primary epithelial cells was successful in producing proliferating cultures that consisted almost exclusively of cells with epithelial morphology (Fig. 1A). Each NEC preparation, regardless of head and neck origin, yielded cells with essentially identical morphology. The corresponding SCC preparations often consisted of cells that were morphologically indistinguishable from the normal counterpart. In some cases, the cells were heterogeneous in size, ranging from larger to smaller than the corresponding NEC (Fig. 1A). Immunocytochemical staining with anti-cytokeratin antibodies confirmed that each of the NEC and SCC preparations contained >90% cytokeratin-positive epithelial cells.
When the primary cultures reached ~90% confluence, subcultures from the passage 1 cells were established in 24-well culture plates. These subcultures were used to identify the responsiveness to leukemia inhibitory factor (LIF), IL-6, OSM, epidermal growth factor (EGF), and IFN-γ. The level of phosphorylation of STAT and ERK after 15-minute treatment served as an indicator of ligand-inducible receptor activity. The comparison of the response patterns of NEC and SCC provided a measure of transformation-associated changes in receptor signaling that were detectable in tissue culture. The comparison of basal level phosphorylation of ERK and STATs indicated whether the transformation was also associated with an activation of signaling reactions that was independent of treatment with exogenous ligands.
The representative case of hypopharyngeal SCC cells and the corresponding NEC in Fig. 1B reveals the salient features of the cytokine response pattern of epithelial cells of head and neck origin and some of the alterations of the pattern that were observed in HNSCC cells. The quantitative values for the cytokine- and growth factor-induced phosphorylation of STAT3 (Fig. 2A), ERK-1/2 (Fig. 2B), and STAT1 (Fig. 3) were compiled for the 16 cases. The response pattern of NEC was highly consistent among independent cell preparations made from different head and neck locations. The following features characterized the pattern: Unstimulated cells showed a low to nondetectable basal phosphorylation of STAT1, STAT3, and ERK (Fig. 1B). LIF treatment yielded a barely detectable activation of STAT3 and ERK. IL-6 was more effective than LIF but consistently less so than OSM. The specificity of the OSM response was indicated by the high-level phosphorylation of STAT1, STAT3, and ERK (Figs. 1B, 2A and B, and 3A). The response of NEC to EGF was evident from the phosphorylation of ERK and, in some cases, a minor phosphorylation of STAT3. Although STAT3 activation by EGF was variable among cell preparations from different donors, it was consistently less than STAT3 activation by OSM. Lastly, IFN-γ produced maximal activation of STAT1 in all NEC cultures, with minor elevation of phosphorylated STAT3 and ERK (Figs. 1B, 2, and 3).
The comparison of normal and tumor cells indicated that, in 11 of 16 HNSCC cases, an increased basal level of phosphorylated ERK was detectable, whereas the amount of total ERK protein was not appreciably different between NEC and SCC (e.g., Fig. 1B). The magnitude of the enhanced ERK phosphorylation varied substantially among SCC preparations (Fig. 2B). Despite the elevated basal phosphorylation, in all HNSCC cultures, ERK phosphorylation was increased by treatment with the cytokines and EGF noted to be effective in NEC.
In 5 of the 11 HNSCC cases with an elevated basal level of phosphorylated ERK, an increased basal level of phosphorylated STAT3 was found as well (Figs. 1B and 2A). Of note, in none of the SCC cases was basal phosphorylation of STAT1 detectable. A minor LIF response was detectable in all HNSCC cases and often was ~2-fold higher than in NEC. In contrast, the magnitude of the IL-6 response was reduced. In 14 of the 16 HNSCC cases, there was an enhanced activation of STAT1 in response to OSM and IFN-γ treatments. The effect of OSM and IFN-γ on STAT1 was probably in part due to a more effective recruitment of this factor, because the level of total STAT1 in most of the HNSCC cells did not differ appreciably from that of the corresponding normal cells. A few SCC cultures showed, however, a <2-fold increase of immunodetectable STAT1 that could contribute to the higher amounts of phosphorylated STAT1 (e.g., Fig. 1B). The comparison of HNNEC and HNSCC cells also indicated in 10 of the 16 cases an enhanced EGF response, as is evident from the more prominent activation of STAT3 and ERK (Fig. 2B).
The low responsiveness of various epithelial cell types to IL-6 and LIF (as seen for HNNEC; Fig. 2B) has previously been correlated with a low to nondetectable expression of the ligand binding subunits, IL-6Rα and LIFRα, as judged from transcript analyses by reverse transcription-PCR and by immunoblot analysis of cellular proteins for LIFRα (15). In contrast, the elevated basal phosphorylation of ERK and STAT3 in HNSCC conceivably could represent, among other possibilities, the result of genetic changes causing constitutive activation of signaling pathways or the action of autocrine factors (24). The paucity of cellular material available from the primary culture systems precluded a characterization of the former possibility. The latter possibility, namely, an autocrine activity as potentially exerted by secreted cytokines, was assessed by two approaches: One approach was to treat SCC cultures with function-neutralizing anti-gp130 antibodies to determine the role of gp130 in the elevated phosphorylation of ERK and STAT3. The other approach was to test 3-day conditioned medium from SCC cultures with high basal levels of phosphorylated ERK for its ability to induce signaling in NEC. However, both approaches failed to document significant activating effects of autocrine components that act extracellularly (data not shown).
These experiments could not rule out a stimulatory activity, such as an autocrine factor, that acted in SCC at the intracellular level and thus was inaccessible to extracellular probing agents. Because the assessment of the responsiveness of the epithelial cells relied on a 15-minute cytokine treatment that generates the maximal signaling reaction (Fig. 1B), the response pattern thus established was not comparable with the one expected for an autocrine factor that acted in a chronic manner. Therefore, to evaluate whether prolonged stimulation of NEC with OSM, the most effective cytokine on these cells, would in principle result in a pattern of phosphorylation of STAT3 and ERK as found in untreated SCC, primary NEC cells were incubated in the presence of OSM for 24 hours (Fig. 1C). Phosphorylation of STAT3 and ERK was maximal within 15 minutes. ERK phosphorylation returned to the pretreatment level by 2 hours, whereas phosphorylation of STAT3 was maintained at a low but above-basal level. This level was comparable with that observed in some of the untreated SCC (Fig. 1B). The results indicated that an autocrine IL-6-related activity could account for an increased STAT3 activation in SCC but not for elevated ERK activity.
Effects of Cytokines on DNA Synthesis
Based on the altered responsiveness of HNSCC cells, we assessed whether treatment of the cells with OSM, or with the physiologically relevant mixture of inflammatory mediators provided by conditioned medium from endotoxin-stimulated lung macrophages (CMM), would differentially affect the proliferation of the epithelial cells. CMM contains a complex mixture of mediators, including, among others, IL-6, LIF, OSM, tumor necrosis factor-α, IL-1β, IL-8, IL-10, and granulocyte colony-stimulating factor at concentrations from 1 to 300 ng/mL. HNNEC and HNSCC cells were treated with serially diluted CMM (Fig. 4) or OSM (Fig. 5).
Treatment of HNNEC with normal growth medium containing serially diluted CMM reduced DNA synthesis in a dose-dependent manner, with a maximal reduction of 40% observed at the highest concentration tested (Fig. 4). In separate sets of cultures, NEC cells from the second and third passages from different donor tissues were maintained in normal growth medium for up to 9 days to establish the growth rate. Doubling times varied substantially, ranging from 33 to 50 hours.
Proliferation rates of HNSCC maintained in normal growth medium did not significantly differ from those of the corresponding NEC. This was also evident from the comparable incorporation of [³H]thymidine. In our experimental setting, the incorporation of radioactivity (counts per minute; mean ± SE; n = 11) for control cultures of HNNEC was 154,883 ± 13,376, and for HNSCC, it was 164,143 ± 6,889. When considering only the five HNSCC cases that exhibited constitutive activation of both the ERK and STAT3 pathways, a slightly higher thymidine incorporation was measured (183,120 ± 5,294 counts per minute). Treatment of HNSCC cells with the same CMM dose gradient as applied to HNNEC did not lower thymidine incorporation below that of the control-treated cultures (Fig. 4). In fact, treatment of the cells with a 1:1,000 dilution of CMM even increased DNA synthesis by 30%. The response of HNSCC at this and higher CMM concentrations differed significantly from that of the corresponding HNNEC cultures (P = 0.016).
Both HNNEC and HNSCC cells responded to OSM in a dose-dependent manner with a decrease in thymidine incorporation. Treatment for 40 hours at the maximal concentration of 100 ng/mL OSM reduced DNA synthesis by 50% (Fig. 5). At lower concentrations, ranging from 0.1 to 1 ng/mL, OSM seemed to be less inhibitory or even to exert a minor stimulatory action on DNA synthesis in the HNSCC when compared with the corresponding NEC cultures. This minor stimulatory effect was detected in 14 of the 16 HNSCC cases that also presented an increased STAT1 response (Fig. 3A). None of the HNNEC cultures showed a comparable stimulation. At the dose of 0.1 ng/mL OSM, the relative thymidine incorporation by HNSCC and HNNEC reached a statistically significant difference (P = 0.016), but not at the other dose levels.
Discussion
This comparative study of normal and transformed head and neck epithelial cells has indicated several significant alterations in signaling pathways and cytokine responsiveness that accompany the tumorigenic process. Changes found with high frequency in SCC include an enhanced basal activity (phosphorylation) of the ERK pathway, an elevated activation of STAT1 signaling by OSM and IFN-γ, an attenuation of IL-6 responsiveness, and a less suppressed DNA synthesis in response to inflammatory cytokines. These changes represent not only novel markers for the regulatory capabilities of HNSCC but also potential explanations for the ability of SCC to proliferate in the presence of inflammation. The future goals will be (a) to elucidate the precise molecular mechanisms that link signaling reactions activated by OSM and other inflammatory mediators with the proliferation control characteristic of NEC and SCC and (b) to establish to what extent these changes have a causative role in the clinical progression of the cancer.
The findings of this study draw attention to the following two questions: How do these alterations in signaling arise, and what is the functional consequence of these alterations for the biology of head and neck tumors? Whereas deviations in responsiveness to cytokines through altered signaling at various levels are generally observed in transformed cell types (15), the high frequency with which the same changes (i.e., activation of ERK and STAT) occur in independent HNSCC cases suggests that these may have been functionally selected. Whereas technical limitations intrinsic to primary epithelial cell culture systems precluded us from identifying the cause of the constitutive ERK and STAT phosphorylation, potential mechanisms could include deregulating mutations of signaling kinases or the introduction of autocrine stimulatory activities. The higher activation of STAT proteins by cytokine treatments, often in the absence of a detectable change in STAT protein levels, as seen in most HNSCC cases, suggests a more effective recruitment of the STAT proteins to the signal-transducing components by the cytokine receptors. In some cases, transformation-associated changes in the level of STAT proteins, such as STAT1 (Fig. 1B), probably also contribute to the apparent increased STAT signal in HNSCC. Moreover, increased amounts of receptor proteins could also explain an enhanced overall signaling reaction, as noted for LIF (15), but not the preferential recruitment of specific signaling pathways. Loss of receptor function, such as the specific reduction of IL-6 responsiveness, has tentatively been attributed to a reduced expression of IL-6Rα. The function of the gp130 subunit seems not to be attenuated, based on the maintenance of the OSM response (Fig. 1B). With the currently available immunoreagents for receptor proteins, we have not yet been able to quantitate receptor proteins in the limited amounts of extracts from the primary head and neck epithelial cell cultures.
Because the signaling capability of the receptor systems was tested by short-term (15 minutes) treatment with cytokines and EGF, the level of STAT and ERK phosphorylation reflects to a large extent the activity of the receptor-proximal protein tyrosine kinases and enzymes of the immediate downstream signaling cascade, such as those of the mitogen-activated protein kinase pathways. The effects that are brought about by long-term cytokine treatment, such as enhanced or reduced proliferation, involve a broader array of signaling reactions that include not only the immediate mediators STATs and ERKs but also many secondary effectors downstream of STATs and ERKs, such as signal-attenuating phosphatases, kinase inhibitors of the SOCS family, and coactivators or corepressors for signal-mediating transcription factors, including those for STATs. Clearly, the role of these components needs to be determined in the definition of the regulatory phenotypes of HNSCC.
The consequence of the altered signaling reactions is interpreted in the context of growth regulation in HNSCC. A possible causal relationship is suggested by the finding that DNA synthesis is less suppressed in cytokine-treated HNSCC (Figs. 4 and 5). Activation of mitogen-activated protein kinase/ERK is often connected to growth promotion and/or enhanced survival (28, 30-32). Thus, the increased ERK phosphorylation noted in HNSCC cases would predict a more effective protection of the cells from the inhibitory action of cytokine signals. Besides activation of ERK, OSM and IL-6 are also effective in activating STAT3. The effect of STAT3, alone or in combination with activated ERK, on growth control in HNSCC is, however, less predictable. Several studies have linked activated STAT3 alone to increased cell proliferation and oncogenic action (33, 34), whereas others have noted suppression of proliferation in cells with STAT3 activated through cytokine receptor action (20, 35). Similarly unclear is the role of STAT1 in tumorigenesis. Activated STAT1, such as through the action of IFN-γ or OSM, is believed to promote growth arrest and apoptosis, yet the level of STAT1 is increased in many tumor cells (10, 36, 37). Our data on HNSCC suggest that the control of DNA synthesis, and thus proliferation, by OSM is dependent on both the combination and the activation levels of ERK and STATs. The relative magnitude and duration of activation of these pathways are in turn a function of the cytokine dose and the signaling capability of the receptor system. These processes may account for the switch from stimulation to suppression of DNA synthesis when treating HNSCC cultures with low or high concentrations of OSM.
The control of epithelial cell proliferation in response to CMM is even more complex; besides OSM and IL-6, other potent effectors such as tumor necrosis factor, IL-1, IL-8, and prostaglandins act on the target cells (Fig. 4). The intracellular signals communicated by these factors, including stress-activated mitogen-activated protein kinases, nuclear factor-κB, and G proteins, will certainly influence the effects expected for STAT and ERK that are activated by IL-6 cytokines. The results revealed that the transforming process has led, in most SCC, to a modification that attenuated the suppressive signaling function. The responsible mediators, and the effect of the transforming process on their function, remain to be identified.
The analysis of epithelial cells in primary tissue cultures allowed us to define the signaling capability and the qualitative and quantitative responsiveness to defined inflammatory cytokines of the cells as a function of transformation from HNNEC to HNSCC. This approach purposely removed the influence that the tissue milieu has on the epithelial cells in situ. However, to extrapolate the results gained in tissue culture to regulatory properties in tumors, we have to consider the nature of the tumor environment. In the majority of patients who develop HNSCC, tobacco and alcohol consumption is a relevant component that contributes to the local milieu. Chronic injury and repair in the upper aerodigestive tract caused by tobacco and alcohol use involve an inflammatory reaction that in turn coordinates stimulation and suppression of epithelial cell growth. Moreover, the growth of malignancies adds appreciably to the local inflammatory reaction. These injury- and tumor-related inflammatory processes involve the suppressive action of those cytokines as defined in vitro (10, 38). Hence, we hypothesize that HNSCC in vivo have a similarly altered signaling process as found in the same cells in primary culture; thus, the cells are less subject to growth suppression than normal cells. An important goal of future studies will be to establish the correlation of the in vitro established phenotype, based on marker proteins, with the manifestation of the same markers in tumors. This will allow a more specific prediction as to the influence of inflammation on tumor progression.
Tissue Procurement
Postsurgical specimens from mucosa-bearing head and neck sites were obtained through the institutional review board-approved tissue procurement protocol CIC 00-91. From each specimen, the diagnosing pathologist selected residual tissue representing confirmed normal epithelium and corresponding SCC. These tissue samples were immediately transferred to laboratory analysis that was covered by the institutional review board-approved protocol CIC 00-17.
Primary Cell Cultures
Preparation of primary epithelial cell cultures from normal and carcinoma tissue was carried out by a modified tissue dissociation procedure (39). Briefly, the specimens were rinsed with PBS containing the antibiotics penicillin, streptomycin, and amphotericin B at a 1:1,000 dilution. The epithelial layer was separated from the specimens with a scalpel, cut into 2 to 3 mm pieces, and partially dissociated by digestion with 1% trypsin in PBS containing 1 mmol/L EDTA for 15 minutes at 37°C. The pieces were then placed in 6 cm tissue culture dishes in serum-free, hormonally defined keratinocyte medium containing bovine pituitary extract, cholera toxin, and recombinant human (rhu) EGF (Life Technologies, Carlsbad, CA). Whereas this culture condition favored the outgrowth of epithelial cells (proliferation of primary epithelial cells is growth factor dependent), low levels of cocultured fibroblasts could occur. Essentially all fibroblasts were removed from the epithelial cell cultures by selective release through brief digestion with 0.5% trypsin for 5 minutes at 37°C. Greater than 90% homogeneous epithelial cell cultures, as determined by cytokeratin staining, were routinely obtained after 2 to 3 weeks. Subcultures of the first and second passages were used to determine the cytokine response profile and DNA synthesis of the cells, respectively. Because the amount of starting tissue material as well as the proliferation of the epithelial cells from the tissue samples often differed appreciably, simultaneous treatment of control and carcinoma cells from the same patient was not always possible. In all cases of paired samples with different growth, cell extracts were stored at −70°C for combined immunoblot analyses. The determination of thymidine incorporation was carried out whenever the individual cell preparations of the second passage became available. In most cases, the normal as well as the tumor epithelial cell cultures gradually lost proliferative activity after the second to fourth passages, which limited, or even precluded, further biochemical analyses of those cell cultures.
Resident pulmonary macrophages were mechanically extracted from residual tumor-free lung tissue and purified by centrifugation on Histopaque (Life Technologies). After adhesion to plastic tissue culture support, 3 × 10⁶ macrophages per milliliter of RPMI containing 10% FCS were treated for 16 hours with 1 µg/mL lipopolysaccharide. The concentrations of cytokines in CMM were determined by multiplex immunobead flow cytometry (Luminex, Inc., Austin, TX).
Cytokine Treatments for Analysis of Signaling
Cells from passage 1 of the primary cultures of NEC and SCC were plated into 24-well cluster plates. When the cultures reached ~90% confluence, they were incubated for 2 hours in serum-free and factor-free RPMI, followed by incubation for 15 minutes with the same medium containing 100 ng/mL rhu IL-6, rhu OSM (Amgen Corporation, Seattle, WA), rhu LIF (Wyeth Pharmaceuticals, Cambridge, MA), rhu EGF (Invitrogen, Carlsbad, CA), or rhu IFN-γ (Roche Applied Science, Indianapolis, IN). Dose-response analyses indicated that 100 ng/mL of each cytokine was ~10 times above the concentration required to trigger maximal receptor signaling. However, a dose of 100 ng/mL was needed to sustain maximal stimulation during long-term treatment, such as when measuring the cytokine effects on gene induction and proliferation. Thus, a treatment dose of 100 ng/mL was chosen to ensure that maximal receptor signaling occurred under all selected culture conditions. Treated cells were washed with PBS and lysed in the culture well with radioimmunoprecipitation assay buffer containing 0.1 mmol/L orthovanadate and 1:100 diluted protease inhibitor cocktail (Calbiochem, San Diego, CA).
Western Blot Analysis
Replicate aliquots of cell lysates containing 10 or 20 µg of protein were electrophoresed on 7.5% to 10% polyacrylamide gels. The proteins were transferred to Protran membranes (Schleicher & Schuell, Keene, NH). Immediately after transfer, the membranes were stained with Ponceau red to verify loading and membrane transfer of equal amounts of protein per sample. In each experimental series, two replicate blots were cut horizontally at the ~60-kDa size position; the upper sections were probed for phosphorylated and total STAT3 and the lower sections for phosphorylated and total ERK. Separate blots were used to probe for phosphorylated and total STAT1. Unless limited by the available cell material, replicate separations and immunoblot analyses, rather than reprobing of membranes, were applied due to difficulties in complete removal of the antibodies from the first round of reaction. The membranes were reacted with antibodies to the phosphospecific forms of ERK-1/2, STAT1, and STAT3 (Cell Signaling Technology, Inc., Beverly, MA) and the total forms of ERK-1/2, STAT1, and STAT3 (Santa Cruz Biotechnology, Santa Cruz, CA). The membranes were incubated with the appropriate peroxidase-conjugated secondary antibodies (ICN Biomedical, Aurora, OH), and the antibody binding was visualized by an enhanced chemiluminescence reaction (Amersham Biosciences, Piscataway, NJ). In each experimental series, immunoblots were exposed to X-ray films for various lengths of time (1 second to 30 minutes) to obtain images that are in the linear range of signal detection by the scanner.
Densitometric Analysis
The chemiluminescence images of immunoblots were scanned with a high-resolution desktop scanner. The digital images were quantified with ImageQuant software 5.0 (Amersham Biosciences). The net pixel value for each protein band that lay within the linear range of detection was normalized to the coanalyzed standard and used to calculate the relative difference from the untreated control cells in each experimental series. To compare the responses between NEC and SCC from individual patients, as well as the responses among different patients, the OSM-induced activation of STAT1, STAT3, and ERK-1/2 of the NEC in each paired set was used as an internal reference. The net pixel values determined for the phosphorylated signaling proteins in OSM-treated cells were defined as being equal to 1.0. In each set of NEC and SCC, the pixel values of the basal level phosphorylation and those induced by cytokine treatments (where detectable) were then expressed relative to the OSM reference.
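The normalization just described reduces to a ratio against the OSM-treated NEC band in each paired set. A minimal Python sketch is given below; the band intensities are hypothetical placeholders, not data from the study.

```python
# Hypothetical, background-subtracted net pixel values for one paired set.
# The OSM-treated NEC band is the internal reference, defined as 1.0.
net_pixels = {
    "NEC": {"control": 120.0, "LIF": 310.0, "IL-6": 2140.0, "OSM": 8650.0, "IFN-gamma": 540.0},
    "SCC": {"control": 1530.0, "LIF": 760.0, "IL-6": 1480.0, "OSM": 9920.0, "IFN-gamma": 880.0},
}

osm_reference = net_pixels["NEC"]["OSM"]

# Express every band (basal and cytokine-induced) relative to the reference.
relative = {
    cells: {treatment: value / osm_reference for treatment, value in bands.items()}
    for cells, bands in net_pixels.items()
}

for cells, bands in relative.items():
    print(cells, {t: round(v, 2) for t, v in bands.items()})
```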
Thymidine Incorporation
To determine the effect of cytokines on DNA synthesis, NEC and SCC cells from passage 2 were seeded into 24-well culture plates (5 × 10⁴ cells per well). After 24 hours, duplicate cultures were treated with full growth medium containing serially diluted rhu OSM or conditioned medium from lipopolysaccharide-activated primary human pulmonary macrophages (CMM). The use of growth medium was necessary because, in the absence of growth factors, the growth of epithelial cells was arrested. Twenty-four hours later, 1 µCi of [³H]thymidine (Amersham Biosciences) was added to each culture and incubation continued for an additional 16 hours. Cells were released by trypsin and collected onto paper filters by a cell harvester (Tomtec, Hamden, CT). The amount of incorporated tritium was measured by a scintillation counter (Trilux MicroBeta, Perkin-Elmer Wallac, Turku, Finland). The mean of the net values of the duplicate wells was expressed relative to the incorporation determined for the control cultures in each of the series, which was defined as 100%. Proliferation rates (doubling times) of epithelial cells were determined by seeding cells at a density of 5 × 10³ cells/cm² (equivalent to ~5% confluence) and maintaining them for 6 to 9 days in full growth medium alone or containing 1:10 diluted CMM, 100 ng/mL OSM, or 5 µg/mL function-neutralizing monoclonal anti-human gp130 antibody (R&D Systems, Minneapolis, MN). Media were changed every third day. Cells were released by trypsin digestion, resuspended in trypan blue dye-containing PBS, and counted using a hemocytometer.
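Both readouts reduce to short calculations: treated counts expressed as a percentage of the untreated control, and a doubling time derived from start and end cell counts. A minimal Python sketch follows; the counts are invented for illustration and do not come from the study.

```python
import math

def relative_incorporation(treated_cpm, control_cpm):
    """Mean counts per minute of duplicate treated wells as a percentage
    of the untreated control wells (control defined as 100%)."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(treated_cpm) / mean(control_cpm)

def doubling_time_hours(n_start, n_end, elapsed_hours):
    """Population doubling time assuming exponential growth."""
    return elapsed_hours * math.log(2) / math.log(n_end / n_start)

print(round(relative_incorporation([82_000, 78_500], [154_000, 156_000]), 1))  # 51.8
print(round(doubling_time_hours(5e3, 1.6e5, 144), 1))                          # 28.8
```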
Statistical Evaluation
All statistical analyses were done in an exploratory manner at the significance level of 0.05. Exact nonparametric inference was employed for all the hypothesis tests. For each possible biomarker (P-STAT3, P-ERK, and P-STAT1) and treatment (one control and five cytokines) combination, the relative increase in phosphorylation of the SCC compared with its matched baseline from the same patient was tested using the matched-pairs sign test. To test the decreasing or increasing trend of DNA synthesis within a treatment (OSM or CMM) across different doses of the treatment, the matched-pairs sign test and Page's L test were used, because samples from the same individual were used across the different dose levels.
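For reference, the matched-pairs sign test is an exact binomial test on the signs of the paired differences. The sketch below uses scipy's binomtest with invented SCC-minus-NEC differences; the original software's handling of ties and sidedness may differ, so this only illustrates the idea.

```python
from scipy.stats import binomtest

def sign_test(paired_diffs, alternative="two-sided"):
    """Exact matched-pairs sign test: under H0, a positive difference
    occurs with probability 0.5; zero differences (ties) are discarded."""
    nonzero = [d for d in paired_diffs if d != 0]
    n_positive = sum(d > 0 for d in nonzero)
    return binomtest(n_positive, n=len(nonzero), p=0.5, alternative=alternative)

# Invented relative increases (SCC minus NEC) for 16 paired cultures.
diffs = [1.4, 0.9, 2.1, 0.3, -0.2, 1.8, 0.7, 1.1, 0.5, 2.4, 0.8, 1.6, 0.4, 1.2, -0.1, 0.6]
result = sign_test(diffs)
print(f"positives: {result.k}/{len(diffs)}, p = {result.pvalue:.4f}")
```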
FIGURE 1. Activation of STAT1, STAT3, and ERK in cytokine-treated HNNEC and HNSCC. A. Morphology of proliferating cell cultures of HNNEC and the corresponding HNSCC derived from patient 3. Phase microscopic images were taken at ×40 magnification. B. Cells from patient 3 were treated for 15 minutes with the cytokines as indicated. Phosphorylated and total STAT and ERK were determined by immunoblotting. Representative enhanced chemiluminescence exposures of the coanalyzed NEC and SCC samples are reproduced. C. Time course of signaling in NEC.
FIGURE 2. Relative changes in STAT3 and ERK signaling between HNSCC and corresponding HNNEC. Densitometric analyses of STAT3 phosphorylation (A) and ERK phosphorylation (B) in untreated control cells and in response to 15-minute treatment with LIF, IL-6, IFN-γ, or OSM were compared between NEC and the corresponding SCC in all 16 cases.
FIGURE 3. Comparison of the amount of STAT1 phosphorylated in response to OSM (A) and IFN-γ (B) treatments. The level of STAT1 phosphorylation was determined for all 16 paired sets of NEC and SCC by immunoblotting as shown in Fig. 1. The quantitative values of the SCC samples were expressed relative to the NEC (defined as 1.0). The average fold increase of STAT1 phosphorylation by OSM in SCC was 2.37; by IFN-γ, 2.61.
FIGURE 4. Comparison of the effect of CMM on the DNA synthesis of HNNEC and HNSCC. SCC and NEC were treated with serial dilutions of CMM. [³H]Thymidine incorporation of the untreated controls in each assay was defined as 100%, and all other data were expressed relative to the control. Bars, SE of 11 HNSCC and 10 HNNEC cases.
FIGURE 5. Comparison of the effect of OSM treatment on the DNA synthesis of HNNEC and HNSCC. SCC and NEC were treated with serial dilutions of OSM. Incorporation of [³H]thymidine in untreated NEC in each set was defined as 100%, and all other data were expressed relative to the control. Bars, SE of 11 HNSCC and 10 HNNEC cases.
Table 1. Patient Demographics, Age, Disease Site, and Pathologic Stage Based on the American Joint Committee on Cancer 6th Edition Are Listed for the 16 Cases Used in This Study | 2017-04-29T23:54:27.866Z | 2004-10-01T00:00:00.000 | {
"year": 2004,
"sha1": "30d6e66b4db4508b59f2f3e5d4e0a6180ee4ce69",
"oa_license": "CCBY",
"oa_url": "https://aacrjournals.org/mcr/article-pdf/2/10/585/3135951/585-593.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7bb922ce84696985875685b80df4e744efa74b2e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
53518818 | pes2o/s2orc | v3-fos-license | Formulation and in vitro evaluation of berberine containing liposome optimized by 3² full factorial designs
The present study demonstrates the application of a 3² full factorial design for the optimization of berberine-loaded liposomes for oral administration. The thin film hydration method was used to prepare the liposomes, and optimization was done by a 3² full factorial design combined with a desirability function. Nine formulations were prepared using different drug:lipid and soyphosphatidylcholine:cholesterol (SPC:CHOL) ratios and evaluated for entrapment efficiency and vesicle size. The statistical validity of the model was assessed by analysis of variance (ANOVA). Response surface graphs and contour plots were used to understand the effect of the variables on the responses. The optimized formulation, with a desirability value of 0.782, was prepared and evaluated for the responses. The results for entrapment efficiency and vesicle size were found to be very close to the predicted values. In addition, the optimized formulation was characterized for zeta potential, in vitro drug release, and morphology. The formulation was found to be spherical in shape, with an average diameter of 823 nm and a zeta potential of −1.93 mV, and showed a sustained release pattern. These results support the conclusion that a 3² full factorial design with a desirability function can be effectively used in the optimization of berberine-loaded liposomes.
INTRODUCTION
Berberine (BER) is a quaternary isoquinoline alkaloid obtained from various plants of the Berberis species. It has historically been used as an anti-diarrheal, anti-protozoal, and antimicrobial agent in Ayurvedic and Chinese medicine. It also possesses a multitude of biological effects, including anti-inflammatory, antidiabetic, neuroprotective, and lipid peroxidation inhibitory activity (Liu et al., 2009; Lee et al., 2010; Wu et al., 2010; Zhou et al., 2010; Zhao et al., 2011). However, the quaternary amine cation of BER causes poor water solubility, resulting in low bioavailability. In addition, BER induces the activity of the multidrug efflux transporter P-glycoprotein (P-gp) in the intestine, which is responsible for the active efflux of drug from cells; this causes its own ejection, resulting in a 90% reduction in BER transport (Zhang et al., 2011; Di Pierro et al., 2012; Shan et al., 2013). Moreover, intramuscular and intravenous administration may lead to a risk of adverse reactions, such as drug rash and anaphylactic shock.
The oral route is the easiest and most convenient way to administer drugs. However, some drugs have very low oral bioavailability because of poor aqueous solubility and permeability, multidrug resistance protein (MRP) efflux, and metabolic instability (Choi et al., 2004). Recently, lipid-based formulations have been widely used for the oral administration of phytoconstituents. Lipid-based formulations can be prepared in different dosage forms, such as self-emulsifying systems, multiple emulsions, microemulsions, liposomes, and solid lipid nanoparticles. Various mechanisms are responsible for the absorption enhancement of drugs from lipid-based formulations, for instance, altering the intestinal environment, interacting with enterocyte-based transport, stimulating lymphatic transport, and modifying the release of active ingredients. Furthermore, phospholipids can protect active ingredients against degradation in the gastrointestinal tract (Fricker et al., 2010).
Among lipid-based systems, liposomes seem to be the most promising for their ability to enhance the permeability of drugs across the enterocyte, to stabilize drugs, and to provide the opportunity for controlled release (Charman et al., 1986). Liposomes are spherical vesicles consisting of one or several phospholipid bilayers separated by aqueous inner compartments; they are nontoxic, biocompatible, and biodegradable. These vesicles are able to incorporate hydrophobic, hydrophilic, and amphiphilic substances. It has also been demonstrated that liposomes can improve solubility, stability, and encapsulation efficiency and protect drugs against degradation. Many researchers have indicated that the bioavailability of orally administered drugs with poor solubility and permeability is markedly enhanced after encapsulation in liposomes, which also changes the in vivo distribution of the entrapped drugs (Moutardier et al., 2003; Deshmukh et al., 2008; Jain et al., 2012a; Jain et al., 2012b; Niu et al., 2012; Gradauer et al., 2013). In the present investigation, we prepared BER-loaded liposomes using the thin film hydration technique and optimized them using a 3² full factorial design. They were further characterized for entrapment efficiency, vesicle size, zeta potential, in vitro drug release, and morphology.
Materials
Berberine (BER) was purchased from Yucca Enterprize, Mumbai. Soyphosphatidylcholine (SPC; purity, 98%) was provided as a gift sample by Lipoid GmbH (Ludwigshafen, Germany). Cholesterol (CHOL) and all other solvents and reagents used were of analytical grade and purchased from S D Fine-Chem Ltd (Mumbai, India).
Preparation of liposome
The thin film hydration method was used to prepare berberine-loaded liposomes (Szoda, 1981; Law et al., 1998; Fresta et al., 1999). In this method, SPC (Lipoid S 100), CHOL, and BER were first dissolved in chloroform in different molar ratios (Table 1). The chloroform was evaporated at 60°C for 1 h under vacuum at 150 rpm in a rotary evaporator (Remi Instruments, Mumbai, India) to form a thin lipid film. The dried thin lipid film was hydrated by adding phosphate buffered saline (PBS), pH 6.8, at 45°C in the rotary vacuum evaporator rotated at 100 rpm until all the lipids were dispersed in the aqueous phase. For vesicle size reduction, the dispersion was subjected to bath sonication (Toshniwal Instruments, Ajmer) for 20-30 min at a frequency of about 30 ± 3 kHz at 40°C. Thereafter, the mixture was kept for 1 h at room temperature for vesicle formation, followed by storage at 4°C for 24 h in an inert atmosphere. The formulation was centrifuged for 1 h at 15,000 rpm in a cold centrifuge (Remi Instruments, Mumbai, India). The supernatant containing the vesicles in each case was then separated and taken, in suspended form, for further studies.
Experimental design: 3² factorial design
The formulations were optimized by a 3² factorial design with the drug:lipid molar ratio (X1) and the SPC:CHOL molar ratio (X2) as independent variables, and vesicle size (Y1) and entrapment efficiency (Y2) as responses (Table 1). Nine formulations were prepared and evaluated for the responses. The obtained data were fitted with Design Expert software (Design Expert 9.0.4, Stat-Ease, Minneapolis, MN). Analysis of variance (ANOVA) was used to validate the design.
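For illustration, the nine runs of a 3² design are simply all combinations of the three coded levels of the two factors. A minimal Python sketch follows; the actual factor values assigned to each level are those in Table 1 and are not repeated here.

```python
from itertools import product

# Coded levels: -1 = low, 0 = middle, +1 = high.
# X1 = drug:lipid molar ratio, X2 = SPC:CHOL molar ratio.
levels = (-1, 0, 1)
design = list(product(levels, repeat=2))  # 3^2 = 9 runs

for run, (x1, x2) in enumerate(design, start=1):
    print(f"F{run}: X1 = {x1:+d}, X2 = {x2:+d}")
```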
Response surface plot
Contour plots and three-dimensional (3D) response surface plots were constructed to visualize the relationship between the variables and their interaction.
Optimization using desirability function
The formulations were optimized by keeping X1 and X2 within the ranges used in the present work while setting Y1 (vesicle size) at a minimum and Y2 (entrapment efficiency) at a maximum using the Design-Expert software. On the basis of these assigned goals, the software determines the possible formulation composition with the maximum desirability value.
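Conceptually, the desirability approach maps each response onto a 0-1 scale and combines the individual desirabilities into one overall value, commonly by a geometric mean (Derringer-type functions). The sketch below is a simplified linear version that uses the observed response extremes from Table 2 as the acceptable ranges; Design-Expert's actual weighting and importance settings may differ.

```python
def d_minimize(y, low, high):
    """Desirability for a response to be minimized: 1 at/below low, 0 at/above high."""
    return 1.0 if y <= low else 0.0 if y >= high else (high - y) / (high - low)

def d_maximize(y, low, high):
    """Desirability for a response to be maximized: 0 at/below low, 1 at/above high."""
    return 0.0 if y <= low else 1.0 if y >= high else (y - low) / (high - low)

def overall_desirability(size_nm, ee_percent):
    d1 = d_minimize(size_nm, low=571, high=1105)  # vesicle size range observed
    d2 = d_maximize(ee_percent, low=56, high=82)  # entrapment efficiency range observed
    return (d1 * d2) ** 0.5                       # geometric mean of two desirabilities

print(round(overall_desirability(700, 78), 3))    # 0.801 for a hypothetical candidate
```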
Checkpoint analysis
On the basis of the desirability value and the corresponding composition of the variables, the formulation was prepared and evaluated for the responses. The predicted and observed responses were compared, and the percentage prediction error was calculated to confirm the validity of the design for optimization.
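The percentage prediction error is a one-line calculation. A small sketch is shown below; the predicted and observed values are placeholders, not the study's checkpoint data.

```python
def prediction_error_percent(predicted, observed):
    """Percentage prediction error between a model-predicted response and
    the response observed for the checkpoint formulation."""
    return 100.0 * (predicted - observed) / predicted

# Placeholder checkpoint values for the two responses
print(round(prediction_error_percent(812.0, 823.0), 2))  # vesicle size (nm)
print(round(prediction_error_percent(79.5, 78.2), 2))    # entrapment efficiency (%)
```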
Morphology of liposome
The shape and lamellarity of the vesicles were observed by placing the suspension under an optical microscope (Olympus BX 41, USA). Photomicrographs were taken with a camera attached to the optical microscope at 10x100 magnification.
Vesicle size
The optimized formulation, serially diluted 100-fold with double-distilled water, was used to determine the mean vesicle size and polydispersity index (PDI) using a Zetasizer HAS 3000 (Malvern Instruments Ltd, UK).
Zeta potential
The zeta potential of the optimized formulation was measured with a Zetasizer HAS 3000 (Malvern Instruments Ltd, UK) at 25°C (Law et al., 1998).
Entrapment efficiency
The liposome suspension was centrifuged at 15,000 rpm to separate the unentrapped drug. Free drug present in the supernatant was determined by UV spectrophotometry at 345 nm. EE (%) was calculated by the following equation: EE (%) = [(Ctotal − Cfree)/Ctotal] x 100, where Ctotal is the total drug added and Cfree is the unentrapped drug.
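The calculation is a one-liner; the helper below simply encodes the equation above (amounts may be in any consistent unit), with invented example numbers.

```python
def entrapment_efficiency(c_total, c_free):
    """Percent of drug entrapped: EE(%) = (C_total - C_free) / C_total * 100."""
    return (c_total - c_free) / c_total * 100.0

# Example: 10 mg added, 2.16 mg found free in the supernatant -> ~78.4% EE
print(round(entrapment_efficiency(10.0, 2.16), 1))
```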
In vitro diffusion study
The membrane diffusion technique was used to determine the release of BER from a plain drug suspension and from the formulation. Liposomal suspension (1.5 mL) containing a known amount of drug was filled into a dialysis bag (MW cut-off 12,000-14,000; Hi-Media Laboratories, Mumbai) previously soaked in distilled water for 24 h. The bag was placed in 25 mL of phosphate-buffered saline (PBS, pH 6.8) maintained at 37°C and continuously stirred with a magnetic stirrer. Samples (1 mL) were withdrawn at specified time intervals and replaced with fresh PBS (pH 6.8). The drug content of each sample was determined by UV spectrophotometry at 345 nm.
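Because each 1 mL sample is replaced with fresh buffer, the raw concentrations must be corrected for the drug removed at earlier time points before computing cumulative release. The sketch below applies the standard dilution correction; the volumes match those stated above, and the concentration values are placeholders.

```python
def cumulative_release(concs_mg_per_ml, v_medium=25.0, v_sample=1.0):
    """Cumulative drug released (mg), correcting for drug removed by sampling."""
    released = []
    removed = 0.0  # drug taken out with previous samples
    for c in concs_mg_per_ml:
        released.append(c * v_medium + removed)
        removed += c * v_sample
    return released

# Placeholder measured concentrations (mg/mL) at successive sampling times
print(cumulative_release([0.010, 0.018, 0.025, 0.030]))
```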
Stability Study
Berberine-loaded liposomes were stored in glass vials at 4-8°C, 25±2°C and 37±2°C for one month. Samples were taken after one month and the entrapment efficiency was determined as described earlier.
Experimental design
The three-level, two-factor design is an effective approach for investigating variables at different levels with a limited number of experimental runs (Table 2). The vesicle size and EE of the nine batches showed wide variation, from 571 to 1105 nm and from 56 to 82%, respectively.
Fitting the model to data
The response data of all formulations were fitted to linear, quadratic and cubic models. According to the Design Expert software, the best-fitting model was linear for response Y1 and quadratic for response Y2. All responses were fitted to establish the full-model (FM) polynomial equations:

Y1 = 964.78 + 113.00X1 − 169.83X2 + 45.50X1X2 − 29.33X1² − 118.17X2²

Y2 = 75.20 + 7.61X1 + 4.64X2 − 1.64X1X2 − 1.72X1² − 2.44X2²

The statistical validity of the polynomials was established using the ANOVA provision of the Design Expert® software. The ANOVA indicated significant effects of the independent factors (Prob > F) on responses Y1 and Y2: the F-values were 53.25 for Y1 and 40.88 for Y2, with R² = 0.9875 for Y1 and R² = 0.9876 for Y2. Statistical models were generated for each response parameter and tested for significance. The Adj-R² and Pred-R² values for all responses were in reasonable agreement, indicating that the data were described adequately by the mathematical models. p-values less than 0.05 indicated that model terms were significant, except that for response Y1 the terms X1² and X1X2 had p > 0.05 (p = 0.3197 and 0.0797, respectively), and for Y2 the terms X1², X2² and X1X2 had p > 0.05 (p = 0.1949, 0.1001 and 0.1119, respectively), indicating that model reduction was warranted to improve the models (Table 3).
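As a sanity check, the fitted polynomials can be evaluated directly. The helper below encodes the two full-model equations above in coded factor levels (−1 to +1), so any predicted point can be compared against the reported responses; note that the X1² and X2² coefficients are a plausible reading of the extraction-garbled originals.

```python
def predict_y1(x1, x2):
    """Full-model polynomial for vesicle size (nm), coded levels -1..+1."""
    return 964.78 + 113.00*x1 - 169.83*x2 + 45.50*x1*x2 - 29.33*x1**2 - 118.17*x2**2

def predict_y2(x1, x2):
    """Full-model polynomial for entrapment efficiency (%), coded levels -1..+1."""
    return 75.20 + 7.61*x1 + 4.64*x2 - 1.64*x1*x2 - 1.72*x1**2 - 2.44*x2**2

# Center point of the design
print(predict_y1(0, 0), predict_y2(0, 0))  # -> 964.78 nm, 75.20 %
```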
Response surface (3D) and Contour plot analysis
The obtained results can be inspected visually in the response surface (3D) and contour plots (Figs. 1 and 2). The response surface for Y1 shows that vesicle size decreased with decreasing SPC concentration, because phospholipids constitute the liposome membrane. With increasing total lipid (SPC:cholesterol) concentration, more drug could be incorporated into the liposomes. In addition, the response surface for Y2 shows that increasing the SPC:cholesterol ratio significantly increased the drug entrapment efficiency. These results are supported by the fact that the movement of the hydrophobic fatty-acid tails is reduced by the incorporation of the bulky cholesterol molecule into the lipid bilayer. This reduces the permeability of the liposome membrane by resisting the exchange of phospholipids with apoprotein, which ultimately improves drug retention in the liposomes by preventing drug leakage from the lipid bilayer.
Optimization of formulation
The search for the optimized formulation composition was carried out using the desirability function approach in the Design Expert software, the criterion being the composition with the maximum desirability value. The optimization was performed by setting Y1 to minimum and Y2 to maximum while keeping X1 and X2 within the ranges studied. The optimized formulation was obtained at X1 = 1:9.56 and X2 = 50:50, with a corresponding desirability (D) value of 0.782 (Fig. 3). This factor-level combination predicted the responses Y1 = 654 nm and Y2 = 75.68%.
Checkpoint Analysis
The comparison of the predicted and experimental results showed very close agreement, indicating the success of the design combined with the desirability function for the evaluation and optimization of liposome formulations (Table 4).
Vesicle size and shape
Vesicle size is an essential parameter for liposome applications (Maherani et al., 2012). Several methods are available for preparing liposomes of different sizes, composed of one or more lipid bilayers. Lipid-film hydration generally produces multilamellar vesicles; sonication was used to produce small unilamellar vesicles. The optimized liposomes (BL 10) were spherical and ranged from unilamellar to multilamellar (Fig. 4). The average vesicle size was 0.823 μm with a polydispersity index of 0.354 (Fig. 5).
Zeta potential
The zeta potential of a liposome ensures stability and entrapment efficiency and is also used to predict in vivo behavior (Maherani et al., 2012). Entrapment efficiency increases owing to electrostatic attraction between charged molecules and liposomes. Subsequent modifications of the liposomal surface, such as cholesterol incorporation, also influence the zeta potential. Higher values of zeta potential enhance liposome stability by increasing the repulsion between vesicles, thereby preventing aggregation. Liposomes prepared with different lipids acquire different surface charges: liposomes employing phosphatidylserine, stearylamine or dioleoyltrimethylammonium propane, and phosphatidylcholine acquire negative, positive and neutral charge, respectively (Brgles et al., 2008). In the present study, by contrast, liposomes prepared with phosphatidylcholine possessed a slightly negative charge (−1.93 mV) (Fig. 6), which may be due to the effect of cholesterol on the surface charge.
Entrapment efficiency
A drug can be incorporated into liposomes in several ways depending on properties such as polarity and solubility: it can be adsorbed on the membrane surface, entrapped in the lipid bilayer, encapsulated in the inner aqueous core, positioned between the polar heads, or anchored by a hydrophobic tail (Maherani et al., 2011). The method of preparation and the lipid composition can also influence the entrapment efficiency. The present study showed 78.43% entrapment efficiency, indicating good electrostatic interaction between the bioactive agent and the liposomes.
In vitro diffusion study
The release of BER from the liposomes was evaluated in vitro and compared with that of the pure drug. Release from the BER suspension was complete within 10 h, whereas the liposomal formulation showed 70% release within 24 h (Fig. 7). This result is supported by the fact that the layer of drug-encapsulated liposomes attached to the semi-permeable membrane breaks and leaches its contents slowly before another layer replaces the leached vesicles. Owing to this mechanism, controlled release of the drug from liposomes can be expected over a prolonged period of time.
Stability Study
The stability study revealed considerable drug loss (approximately 12%) from the formulation stored at the higher temperature of 37±2°C. By contrast, the formulations stored at 4-8°C and 25±2°C retained 93% and 97% of the entrapped drug, respectively. The substantial drug loss at high temperature may be due to degradation of the phospholipids, which disturbs the packing of the membrane; in addition, high temperature can also induce the gel-to-liquid transition of the lipid bilayer. The results of the study indicate that the developed BER-loaded liposomes can overcome the poor oral absorption of the molecule and enhance the bioactivity of BER.
CONCLUSION
In this study, a 3² full factorial design was used to predict the optimum conditions for liposome preparation. The formulations were successfully prepared by the thin-film hydration method to observe the effects of the drug:lipid and soy phosphatidylcholine:cholesterol ratios on vesicle size and entrapment efficiency. Increasing the lipid concentration produced liposomes with the highest entrapment efficiency, whereas decreasing the SPC concentration produced smaller vesicles. These effects were fitted to polynomial models to identify the significant effects of the independent variables on the responses, and were visualized with contour and response surface (3D) plots. The effectiveness of the experimental design was confirmed by the close agreement between the experimental values and the estimated values of the optimized formulation prepared according to the desirability value. Thus, a 3² full factorial design with a desirability function is an effective means of optimizing berberine-loaded formulations.
Fig. 1
Fig. 1 Response surface (A) and its contour plot (B) showing the effect of X1 and X2 on vesicle size.
Fig. 2
Fig. 2 Response surface (A) and its contour plot (B) showing the effect of X1 and X2 on entrapment efficiency.
Fig. 3
Fig. 3 Contour plot for the overall desirability of the liposome as a function of X1 and X2.
Fig. 7
Fig. 7 In vitro drug diffusion of berberine-loaded liposomes and plain drug.
Table 2
Variables in the 3² factorial design for liposomes
Table 3
Analysis of Variance of the factorial models for the responses.
Table 4.
Checkpoint batch with predicted and observed values of the responses | 2018-10-13T07:17:33.471Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "315a7fe755057f516b315a7046061440f472c10f",
"oa_license": "CCBY",
"oa_url": "https://japsonline.com/admin/php/uploads/1549_pdf.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "315a7fe755057f516b315a7046061440f472c10f",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
51952479 | pes2o/s2orc | v3-fos-license | A combined planning approach for improved functional and esthetic outcome of bimaxillary rotation advancement for treatment of obstructive sleep apnea using 3D biomechanical modeling
In recent years, bimaxillary rotation advancement (BRA) has become the method of choice for the surgical treatment of obstructive sleep apnea (OSA). Because dislocation of the jaw bones affects both the airways and the facial contours, surgeons face the challenge of finding an optimal jaw position that re-establishes normal airway ventilation while preserving an esthetic surgical outcome. Owing to the complexity of the facial anatomy and its mechanical behavior, individual planning of surgical OSA treatment under consideration of functional and esthetic aspects presents a challenge that surgeons typically approach in a non-quantitative manner using subjective evaluation and clinical experience. This paper describes a framework for the individual planning of OSA treatment using bimaxillary rotation advancement that relies on computational modeling of hard and soft tissue mechanics. The described framework for simulating the functional and esthetic post-surgery outcome was applied in 10 OSA patients. Comparison of the simulation results with post-surgery data reveals that biomechanical simulation provides a reliable estimate of post-surgery facial tissue behavior and antero-posterior airway extension, but fails to accurately describe a surprisingly large lateral stretch of the velopharyngeal region. This discrepancy is traced back to anisotropic effects of the pharyngeal muscles. Possible approaches to improving the accuracy of the model predictions and to defining sharp criteria for optimizing combined OSA planning are discussed.
Introduction
Reduced lung ventilation and the resulting impaired blood oxygenation due to obstructive sleep apnea (OSA) are known to be related to a plethora of pathological syndromes, including musculoskeletal, heart and mental disorders [1-8]. Surgical treatment of OSA using bimaxillary rotation advancement (BRA) with counterclockwise rotation aims to mechanically widen constricted airways, which provides remedy for OSA symptoms. The efficacy of BRA for the surgical treatment of OSA has been demonstrated in a number of previous works [9-11]. Surgical success of OSA treatment is often evaluated on the basis of Sher's criterion [12], i.e., a greater than 50% reduction of the apnea-hypopnea index (AHI) and/or an AHI of less than 20 events per hour, and its modifications [13]. Zinser et al. report an average reduction of AHI from 47.9 ± 15.6 before to 5.6 ± 2.1 after BRA [14]. However, tangible quantitative criteria for OSA diagnostics and individual surgery planning under consideration of both functional and esthetic aspects are not yet well established. Previous experimental and computational works have indicated a causal relationship between the geometrical and mechanical properties of airway walls and the stability of pharyngeal airflow [15-18]. Narrow and mechanically compliant airway walls cause turbulence in the pharyngeal airflow that, in turn, exerts negative pressure on the soft tissue walls, resulting in their further collapse [19]. Bimaxillary advancement can efficiently widen constricted airways, especially in the velopharyngeal region, which reduces the risk of irregular jet-like airflow [20,21]. In the absence of reliable tools for individual planning of bimaxillary advancement, surgeons tend to undertake the maximal admissible bone displacement to achieve a therapeutically sufficient extension of the constricted pharyngeal regions.
Advancing the maxillo-mandibular complex by 1 cm or more is frequently suggested in the literature as a common rule for the surgical treatment of OSA [22,23]. However, large maxillo-mandibular displacements may have a strong impact on patients' facial contours, occasionally causing a pronounced mid-face elongation. While the physical mechanisms of OSA have been investigated in a number of isolated experimental and computer modeling studies, little has been done to date to integrate these findings into the routine planning and customization of surgical OSA treatment. In our previous work [24], a general approach to anatomy- and physics-based modeling of cranio-maxillofacial surgery interventions was presented.
Here, we extend this approach to quantitatively assess the impact of jaw dislocation on facial and pharyngeal soft tissues. This work presents a methodological framework for the customized planning of bimaxillary rotation advancement, resulting in the first reported feasibility study involving post-surgical evaluation of functional and esthetic outcome.
Participant information and study design
This study deals with a comparative analysis of pre-/post-surgery facial and pharyngeal soft tissue in 10 patients who underwent BRA treatment performed by the first author (R.F.). Participant information and pre-/post-surgery measurements of AHI and of velopharyngeal (VPX) and laryngopharyngeal (LPX) cross-section areas and dimensions are summarized in Table 1. Imaging of the patients' heads and assessment of OSA symptoms were performed 2-4 weeks before and repeated 12-24 weeks after surgery.
Ethics statement
This study was approved by the Seegarten Clinic Ethics Committee, approval no. PFS21002-34. All procedures were carried out in accordance with the ethical standards of the responsible committee on human experimentation and with the Helsinki Declaration as revised in 2008. Participating patients were informed verbally and in writing about the assessment and usage of their anonymized data for research purposes.
Surgical techniques
All patients were treated using the well-known bimaxillary advancement procedure combined with a counterclockwise rotation of the jaws, see Fig 1(a). In the first step, the maxilla is mobilized with a slightly modified Le Fort I osteotomy (Fig 1(b)). In particular, the anterior part of the maxilla from the piriform aperture back to the zygomatico-alveolar arch is resected as a cuneiform fragment with maximum height at the piriform aperture, decreasing to a minimum at the arcus zygomato-alveolaris (Fig 1(b1)). In contrast, the posterior part behind the alveolar-zygomatic arch is cut through without resecting a triangular bone fragment (Fig 1(b2)). In this way, the center of maxilla rotation is effectively shifted to its middle point, which allows the rotation to be performed without losing intermaxillary height. To avoid compression of the nasal septum, a V-shaped osteotomy of the anterior nasal process is performed. An anterior vascularized transposition of at least 5 mm with counterclockwise rotation is fixed with 4 L-shaped osteosynthesis miniplates, each fixed with 4 mini-screws. As a next step, the mandible is split on both sides using the well-known Obwegeser-Dal Pont osteotomy technique. The posterior part of the mandible containing the articular process is positioned centrally with a posterior and cranial alignment. The anterior part is fixed to the maxilla using an occlusal splint. Subsequently, the two parts are fixed with a semi-rigid osteosynthesis miniplate secured with 4-5 mini osteosynthesis screws. The predefined jaw displacements and rotations are used as boundary conditions for the subsequent computation of soft tissue deformations. Fig 1 gives an overview of the anatomical structures and landmarks related to the BRA surgical procedure.
Generation of 3D patient models from CBCT and optical surface scanning data
Cone Beam Computed Tomography (CBCT) data of the patients' heads were acquired in supine position as 512x512x539 (0.3x0.3x0.3 mm) DICOM images using a Newtom 5G scanner (QR S.r.l., Verona, Italy). The DICOM images are semi-automatically segmented, and mesh models of the facial and pharyngeal structures are generated as previously described [24]. Briefly, a piecewise isotropic, homogeneous, non-linear elastic model based on the generalized Hookean law is used to approximate the constitutive soft tissue properties,

σ = E/(1+ν) · [ε + ν/(1−2ν) · tr(ε) I],

where σ is the so-called Cauchy stress tensor, ε is the Green-Lagrange strain tensor, and (E, ν) are the Young's modulus and the Poisson's ratio, two elastic constants describing material stiffness and compressibility, respectively. Our previous studies have shown that this model is capable of accurately describing the deformation of facial soft tissue in the context of cranio-facial surgery planning. Here, we extend this FE framework to the prediction of the patients' photo-realistic appearance and pharyngeal airways.
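To make the constitutive relation concrete, the sketch below evaluates the isotropic Hookean stress for a given strain tensor via the Lamé parameters; the material constants and strain state are illustrative placeholders, not the patient-specific values used in the study.

```python
import numpy as np

def hooke_stress(strain, E, nu):
    """Isotropic linear-elastic (Hookean) stress: sigma = lam*tr(eps)*I + 2*mu*eps."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # first Lame parameter
    mu = E / (2 * (1 + nu))                   # shear modulus
    return lam * np.trace(strain) * np.eye(3) + 2 * mu * strain

# Placeholder soft-tissue constants: E in kPa, nearly incompressible tissue
eps = np.diag([0.02, -0.005, -0.005])  # small illustrative strain state
print(hooke_stress(eps, E=15.0, nu=0.45))
```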
Evaluation of facial and pharyngeal soft tissue prediction
Prior to the comparative measurements, the pre-/post-surgery and simulated 3D anatomical models are aligned using the Artec Studio rigid registration tool, which relies on a set of manually defined landmarks. To quantify differences between facial and pharyngeal surfaces, the surface distribution of the maximum shortest bijective distance between each pair of surfaces (A, B), i.e., the pre-surgery, post-surgery and simulated facial outlines, is calculated:

d_min(a) = max( d(a → B), d(B → a) ),    (2)

i.e., the larger of the two directed shortest distances evaluated at each surface node a of A. The d_min metric is introduced to avoid artificially short distances between two convex surfaces when using the unidirectional distance, see Fig 3(a). To assess changes in the geometry of the pharyngeal airways, the areas and axial dimensions (e.g., anterior-posterior (AP) and lateral (Lat) diameters) of cross sections placed equidistantly at 1 mm intervals are measured parallel to the palatal plane, see Fig 3(b). For the analysis of changes between pre-/post-surgery and simulated airways, the velo- and laryngopharyngeal cross sections are compared using the t-test. Furthermore, differences in the areas and AP/Lat dimensions of the narrowest cross section are assessed, Fig 3(c). The bijective distance between two surfaces is calculated using the d_min metric (Eq 2). The shortest distance from the node A5 of surface A to surface B is given by A5B9, and the shortest distance from surface B to A5 by B6A5. Given B6A5 > A5B9, the d_min bijective distance between A and surface B measured at the node A5 is B6A5.
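A bijective distance of this kind can be computed with nearest-neighbor queries in both directions. The sketch below uses SciPy k-d trees on point-sampled surfaces; it is only a plausible reading of Eq 2, since the paper's exact node-correspondence scheme is not spelled out.

```python
import numpy as np
from scipy.spatial import cKDTree

def bijective_distances(A, B):
    """Per-node distance on surface A: max of the forward shortest distance from
    each node of A to B and the shortest backward distance from B landing on it."""
    d_ab, _ = cKDTree(B).query(A)        # forward: A -> B
    d_ba, idx = cKDTree(A).query(B)      # backward: B -> A, with hit indices on A
    back = np.zeros(len(A))
    np.maximum.at(back, idx, d_ba)       # largest backward distance per node of A
    return np.maximum(d_ab, back)

# Toy example: two jittered copies of the same random point cloud
rng = np.random.default_rng(0)
A = rng.normal(size=(500, 3))
B = A + 0.01 * rng.normal(size=(500, 3))
print(bijective_distances(A, B).max())
```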
Results
Computational simulation of the soft tissue deformation upon BRA treatment was performed for 10 OSA patients using pre-surgery 3D image data and the elastomechanical FE simulation described above. An example of the simulated impact of BRA on the facial and pharyngeal soft tissues in a 37-year-old male patient (case study #1 in Table 1) is shown in S1 and S2 Movies.
To validate the accuracy of the facial soft tissue predictions, the maximum shortest bijective distance (d_min) between each pair of facial surfaces is calculated using Eq 2 as described in Fig 3(a); the results are summarized in Table 2. Our experimental data show that the deviation of the computationally predicted facial surfaces from the post-surgery results amounts, in group average, to 0.8 mm (SD = 0.7 mm). The largest deviation of the model predictions from the post-surgery data is found in the interface regions between the nose, lips and bones, where special boundary conditions such as tissue-bone sliding occur.
For the comparison of pre-surgery, post-surgery and simulated airways, d_min distances between the registered pharyngeal surfaces are computed. Figs 4 and 5(b) illustrate 3D superpositions and color mapping of the distances between pre-surgery, post-surgery and simulated pharyngeal airways. While the pre/post and pre/sim distance maps qualitatively show the expected differences in the velopharynx, large post/sim deviations are observed in the oropharyngeal region. We trace these deviations back to the higher geometrical variability of the oropharyngeal region due to occasional swallowing artifacts and different tongue positions in the pre- and post-surgery 3D scans. To quantify the overall differences between pre- and post-surgery pharyngeal airways upon BRA, and to evaluate the agreement between post-surgical and simulated outcomes, the areas and AP/Lat diameters of equidistantly placed VPX and LPX cross sections in the palatal plane are measured. The results of statistical testing of the dissimilarity between the entire sets of cross-section areas and AP/Lat diameters using the two-paired t-test are summarized in Table 3. As one can see, the computational simulation provides a quantitatively good estimate for the post-surgery changes in VPX cross-section areas and AP diameters, i.e., low or non-significant sim/post differences, but fails to accurately describe the lateral stretch of the post-surgical VPX cross sections. For the LPX region, lower significance of the pre/post differences as well as lower accuracy of the computational predictions is measured.
Based on the repeatedly reported physiological relevance of pharyngeal constrictions for the emergence of OSA symptoms [16,25-30], the narrowest VPX and LPX cross sections are identified. Remarkably, our FE model provides a comparatively accurate estimate for the AP extension of the constricted VPX pharyngeal regions, whose deviation from the post-surgery result amounts, in cohort average, to 12% (SD = 11%), see Fig 6(a). In contrast, the Lat diameter of the narrowest VPX cross sections exhibits on average a 30% (SD = 10%) larger stretch in the post-surgery data than estimated by the FE simulation, see Fig 6(b). Accordingly, the cross-section area predicted by the FE model is typically smaller than the post-surgical result, Fig 6(c). Similar differences between post-surgical and simulated cross sections are found for the LPX region: the computational model provides a more accurate prediction for the AP diameter (Fig 6(d)) than for the Lat diameter (Fig 6(e)) and the cross-section area (Fig 6(f)).
Post-surgical polysomnogram measurements reveal a reduction of AHI by more than 50% of its pre-surgical value for almost all participants, while the absolute values of post-surgery AHI in this cohort exhibit a large variability in the range 0.5-36.6, with mean ± SD = 11.3 ± 11.6.
Discussion
The experimental results of this study confirm previous observations [24] of a sufficiently accurate prediction of post-surgical facial appearance using an isotropic, homogeneous constitutive model of soft tissue mechanics. This can be explained by the particular organization of the anisotropic facial soft tissue, which forms a relatively thin, quasi-2D layer whose surface tangent is nearly perpendicular to the direction of skin displacement triggered by bimaxillary advancement with counterclockwise rotation. The structural organization of pharyngeal soft tissue does not exhibit such an exceptional symmetry, which leads to more pronounced anisotropic effects and, consequently, larger deviations of the computational predictions from the post-surgery deformation of the pharyngeal airways. A large lateral elongation of the post-surgical velopharynx has been reported previously in the literature [11,14]; however, its mechanism is not yet well understood. It is reasonable to assume that the lateral stretch of the pharyngeal and, in particular, velopharyngeal airways is caused by pharyngeal muscles that establish a remote mechanical link between the displaced jaws and the airway walls. The transmission of forces along muscle fibers is associated with reduced dissipation of mechanical energy in comparison to the isotropic, homogeneous medium assumed in our material model. To account for the anisotropic effects of the pharyngeal muscles, consideration of the anisotropic properties of muscle fibers is required. Since the generation of individual anisotropic models is not feasible, the utilization of anisotropic templates has been suggested previously [31]. Anisotropic templates of the highly complex pharyngeal musculature are, however, widely missing. Alternatively, the effects of pharyngeal muscles can be simulated by introducing corrective (penalty) forces that account for the differences between the post-surgical airway displacements and the displacements obtained from computational simulation using the simplified model.
Conclusion
The complexity of the head anatomy and of the mechanical tissue behavior makes quantitative planning of bimaxillary advancement for the treatment of OSA a challenging task. Reliable computational models of facial and pharyngeal soft tissue are in high demand for the prediction of the functional and esthetic BRA outcome. Comparison of our simulation results with post-surgical data indicates that our soft tissue model is capable of estimating the facial tissue displacements and the antero-posterior extension of the pharyngeal space upon BRA. The largest deviations of the model predictions from the post-surgical data are observed in the lips, nose and velopharyngeal regions. We trace these deviations back to special boundary conditions (such as tissue-bone sliding) and to anisotropic properties of the pharyngeal muscles that are not considered by our piecewise isotropic, homogeneous soft tissue model. The anisotropic effects of the pharyngeal muscles can also be studied via the post-surgery relocation of the hyoid bone, which is known to be a natural mediator and landmark of pharyngeal muscle action [32].
Our computational framework provides a lower-bound estimate for the post-surgical extension of the pharyngeal airways. A physically consistent simulation of the resulting effects on pharyngeal airflow is feasible in principle; however, it requires additional assumptions about material parameters of the patient's soft tissue, such as the elasticity of the airway walls and the anisotropy of the pharyngeal muscles. Further experimental and computational studies are required to determine whether obvious features of airway geometry, such as the dimensions of the narrowest velopharyngeal cross section, or more subtle physical indicators of local airflow instability can provide robust criteria for the optimal planning of OSA treatment using BRA.
Supporting information
S1 Movie. Example of the simulated BRA impact on facial soft tissues and esthetic appearance of a 37-year-old male OSA patient (case study #1 in Fig 6(a)). (AVI)
S2 Movie. Example of the simulated BRA impact on pharyngeal airways in a 37-year-old male OSA patient (case study #1 in Fig 6(a)). (AVI) | 2018-08-14T20:37:25.783Z | 2018-08-09T00:00:00.000 | {
"year": 2018,
"sha1": "38d6fe25e703f0733c71cee9803d6f0d660577fe",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0199956&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "38d6fe25e703f0733c71cee9803d6f0d660577fe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18194485 | pes2o/s2orc | v3-fos-license | AFIR: A Dimensionless Potency Metric for Characterizing the Activity of Monoclonal Antibodies
For monoclonal antibody (mAb) drugs, soluble targets may accumulate several thousand fold after binding to the drug. Time course data of mAb and total target is often collected and, although free target is more closely related to clinical effect, it is difficult to measure. Therefore, mathematical models of this data are used to predict target engagement. In this article, a “potency factor” is introduced as an approximation for the model‐predicted target inhibition. This potency factor is defined to be the time‐Averaged Free target concentration to Initial target concentration Ratio (AFIR), and it depends on three key quantities: the average drug concentration at steady state; the binding affinity; and the degree of target accumulation. AFIR provides the intuition for how changes in dosing regimen and binding affinity affect target capture and AFIR can be used to predict the druggability of new targets and the expected benefits of more potent, second‐generation mAbs.
Study Highlights
WHAT IS THE CURRENT KNOWLEDGE ON THE TOPIC?
• Mathematical models for target-mediated drug disposition of mAbs are widely used to guide drug development by predicting the dosing regimen at which a certain threshold of target inhibition is achieved. Although many mathematical analyses of these models have been published, there has not yet been a demonstration of how the key model parameters (like binding affinity and average drug concentration) link to target engagement in repeated dosing scenarios.
WHAT QUESTION DID THIS STUDY ADDRESS?
• How do the PK and binding properties of the mAb impact target engagement?
WHAT THIS STUDY ADDS TO OUR KNOWLEDGE
• A simple nondimensional potency factor (AFIR) links target engagement to three key quantities: average drug concentration, binding affinity, and total target accumulation.
HOW MIGHT THIS CHANGE DRUG DISCOVERY, DEVELOPMENT, AND/OR THERAPEUTICS?
• The AFIR metric provides intuition for standard TMDD models and can be used to rapidly predict the druggability of new targets and the expected benefits of second-generation, more potent mAbs.
Monoclonal antibodies (mAbs) are one of the fastest growing classes of therapeutic agents, with 47 approved as of November 2014 and an expectation of about 4 new approvals per year. 1 Unlike small molecules, which have a molecular weight of about 500 Da and are cleared mainly by the liver and kidneys, mAbs are large molecules with a molecular weight of about 150 kDa that are cleared mainly through cellular uptake followed by proteolytic degradation. Whereas small molecules typically have a half-life of hours, fully human mAbs exhibit long half-lives of around 3 weeks due to the FcRn receptor, which binds the mAb after pinocytosis and rescues it from lysosomal degradation. 2 There are two classes of targets for mAbs: membrane-bound and soluble. Antibodies with membrane-bound targets (e.g., trastuzumab/HER2, denosumab/RANKL, nivolumab/programmed cell death protein 1) have an additional route of clearance via receptor-mediated internalization, which can lead to nonlinearity in the drug pharmacokinetics (PKs); this phenomenon is known as target-mediated drug disposition (TMDD). Antibodies with soluble targets (e.g., omalizumab/immunoglobulin E, bevacizumab/vascular endothelial growth factor, siltuximab/interleukin-6) often demonstrate significant target accumulation after single (Figure 1a,b) or repeated dosing (Figure 1c), because the mAb-target complex often has a much longer half-life than the free target molecules. [3][4][5] Although this accumulation plateaus at large doses, this plateau does not necessarily imply a plateau in efficacy; increasing the dose beyond the plateau has been associated with further reduction of the free target concentration, 6 greater inhibition of downstream biomarkers, 7,8 and improved efficacy. 9,10 To understand why the plateau in total target concentration does not imply a plateau in efficacy, and to understand how changes in the dose regimen or drug properties may impact target engagement, it is useful to characterize the free and bound concentrations of the drug and target using the model in Figure 2, which describes both TMDD and the accumulation of total target during therapy. This model, often referred to as a TMDD model, 11,12 is mathematically more complex than the usual compartmental models for describing the PK of small molecules, because the kinetics of the free drug, free target, and drug-target complex all need to be considered. Given a dosing regimen and estimates for the parameters governing drug kinetics, target kinetics, and binding affinity, the TMDD model allows one to make predictions for target inhibition, as shown in Figure 1, where data and model predictions are shown for three antibodies.
The TMDD model has been used to support many aspects of drug development, 13 including:
1. Early evaluation of the druggability of a target, 14 where a target level or turnover that is too high may require an unfeasibly large dose for efficacy.
2. Identification of a minimally active dose in phase I first-in-human studies. 15,16
3. Comparison of different drugs with the same target, to determine whether a second-generation mAb with higher affinity is expected to outperform the first-generation drug. 17
Modelers have sought to develop rules of thumb for predicting how changing the drug properties or dosing regimen will impact measures of drug activity, such as maximum target inhibition or duration of effect of a single dose. [18][19][20][21][22] Thus far, the existing metrics and analyses apply only to single-dose scenarios, which is of limited value because most mAbs are dosed repeatedly in the clinic.
In this article, a new potency factor based on multiple-dosing scenarios is derived. The potency factor is named the Average Free target concentration to Initial target concentration Ratio (AFIR); it depends upon the structural model (Figure 2) and the parameters (Table 1) that describe the drug, target, and binding kinetics. AFIR provides an intuitive understanding of how the dosing regimen and binding affinity affect target inhibition. Examples showing how this intuition can be used to support drug development decisions are also provided.
Theory
In this section, the ratio of steady-state free target (under regular dosing) to baseline free target is derived. This ratio represents the relative degree of suppression of free target after binding to the drug. The TMDD model commonly used to describe biologics binding to their target is given in Figure 2 and by the equations below. The model parameter descriptions and parameter values for three drugs (omalizumab, bevacizumab, and siltuximab) are provided in Table 1. The model parameters were chosen to match the clinical data from [3-5], noting that varying koff and kon over multiple orders of magnitude while keeping the binding affinity (Kd = koff/kon) fixed would yield similar curves. This model describes both subcutaneous and intravenous dosing, distribution of the drug to the peripheral tissue, binding of the drug to the target in the serum, synthesis of the target, and elimination of the drug, target, and complex.

Figure 1 (panel (c) shows siltuximab dosed at 6 mg/kg every 3 weeks). Data: the time course of drug concentration (Dtot), total target (Ttot), and free target (T) for omalizumab/immunoglobulin E, 3 bevacizumab/vascular endothelial growth factor, 4 and siltuximab/interleukin-6. 5 The circles are data digitized from the literature; free target data was collected for omalizumab but not for bevacizumab or siltuximab. Model: the lines denote model simulations using the parameters in Table 1. The model provides predictions for the free target concentration in cases in which it was not measured.

Figure 2 caption (fragment): the simulations use the parameters in Table 1; the initial conditions for the drug and complex concentrations are zero. D, drug; T, target; DT, complex.
Distribution, binding, and turnover of the target in the peripheral tissue are generally not modeled, due to the limited data available for the peripheral tissue. The model comprises one balance equation per species (Eqs. 1-3), with terms grouped by their role (input, absorption, distribution, binding, and elimination). The free drug balance (Eq. 1) combines the input/absorption, distribution, and elimination terms indicated in Figure 2 with the binding terms −kon·D·T + koff·(DT); the target and complex balances are

dT/dt = ksyn − keT·T − kon·D·T + koff·(DT)    (2)

d(DT)/dt = kon·D·T − koff·(DT) − keDT·(DT)    (3)
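A minimal numerical sketch of the reduced system (target and complex balances driven by a prescribed drug concentration) is shown below; the parameter values are placeholders, and the constant drug level is a simplification of the dosing schemes analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rate constants (1/day) and a constant drug concentration (nM)
k_on, k_off = 10.0, 1.0               # binding / unbinding
k_syn, k_eT, k_eDT = 1.0, 10.0, 0.1   # target synthesis, target and complex elimination
D = 100.0                              # free drug held constant (drug in vast excess)

def rhs(t, y):
    T, DT = y
    bind = k_on * D * T - k_off * DT
    return [k_syn - k_eT * T - bind,   # free target balance (Eq. 2)
            bind - k_eDT * DT]         # complex balance (Eq. 3)

T0 = k_syn / k_eT                      # baseline free target
sol = solve_ivp(rhs, (0.0, 60.0), [T0, 0.0])
T_end, DT_end = sol.y[:, -1]
print(f"T/T0 = {T_end / T0:.3g}, Ttot/T0 = {(T_end + DT_end) / T0:.3g}")
```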
The key quantity of interest in predicting drug effect over time is the ratio of free target to baseline target (T/T0). This ratio can be written as the product of two ratios, where Ttot,ss is the steady-state total target concentration under a repeated dosing regimen of large doses:

T/T0 = (T/Ttot,ss) × (Ttot,ss/T0)    (4)
To compute the first ratio, T/Ttot,ss, the quasi-equilibrium (QE) approximation is used, 23 which assumes that binding and unbinding occur rapidly compared with other processes, such that the drug, target, and complex are in quasi-equilibrium:

Kd = koff/kon = D·T/(DT)    (5)
Substituting the equation for the total target [(DT) = Ttot − T] into Eq. 5 and solving for T/Ttot gives

T/Ttot = Kd/(Kd + D) ≈ Kd/D ≈ Kd/Dtot    (6)

The first approximation holds when D ≫ Kd, and the second holds when Dtot ≈ D, which occurs when the drug is dosed in vast molar excess to the target, as is the case for most mAb drugs in the clinic.

Table 1 footnote (fragment): these parameters were chosen to provide good agreement with the data in Figure 1 and thus differ slightly from the population estimates reported in the literature.
The second ratio, Tacc = Ttot,ss/T0 from Eq. 4, is computed by adding Eqs. 2 and 3 for the free target (T) and the complex (DT) to give an equation for the total target (Ttot):

dTtot/dt = ksyn − keT·T − keDT·(DT)    (7)

Solving Eq. 7 for steady state when no drug is present [(DT) = 0] gives T0 = ksyn/keT. When large amounts of drug are present for a long time during intervals of regular dosing, Eq. 6 shows that very little target is free, and thus T ≈ 0 and Ttot ≈ (DT), 24 giving:

dTtot/dt ≈ ksyn − keDT·Ttot    (8)

At equilibrium, Ttot,ss = ksyn/keDT, and the target accumulation ratio (Tacc) is computed as follows:

Tacc = Ttot,ss/T0 = keT/keDT    (9)

Substituting Eqs. 6 and 9 into Eq. 4 gives the equation below:

T/T0 ≈ (Kd/Dtot)·Tacc    (10)

For drugs with linear PK that are dosed at regular intervals (τ), the Trough Free target concentration to Initial target concentration Ratio (TFIR) can be computed using the trough drug concentration at steady state, Dtot,min (referred to here as Cmin to match the more commonly used nomenclature), which can be written as a sum of exponentials: 25

Cmin = Dtot,min = F·Dose·Σi Ci·exp(−ki·τ)/(1 − exp(−ki·τ))    (11)

Substituting into Eq. 10 gives:

TFIR = Kd·Tacc/Cmin    (12)
In the case of linear PK, recall that the average drug concentration is given by C avg 5D tot;avg 5ðF Á DoseÞ=ðCL Á sÞ. 25 When the drug is given as an infusion at rate Dose=s, then the steady-state drug concentration is a constant (CðtÞ5C avg ) giving: In practice, mAbs are usually dosed every 2-8 weeks and the above equations are approximations rather than exact solutions. It will be shown in the next section that this approximation is often good. Thus, the average target inhibition (AFIR) depends upon three quantities: the dissociation constant (K d ), the target accumulation ratio (T acc ), and the average drug concentration (C avg ).
A number of assumptions were made in developing the AFIR metric. When these assumptions do not hold, three alternative formulas for AFIR have been derived in the Supplementary Material.
1. When the dose is not large enough for the total target to reach its steady-state plateau, AFIR_avg may be used.
2. When the drug concentration is not in vast excess of the target concentration, AFIR_QE may be used.
3. When the irreversible-binding approximation is more accurate than the quasi-equilibrium approximation, which may occur when keT > koff, AFIR_IB may be used.
METHODS AND RESULTS
Basic sensitivity analysis
To gain a better understanding of the AFIR and TFIR ratios, and to explore the conditions under which these potency metrics accurately describe the system, a basic sensitivity analysis is performed using the parameters for siltuximab (Figure 3), as well as omalizumab and bevacizumab (Supplementary Material). Equation 13 demonstrates that AFIR depends on 8 different parameters; 7 parameters are explored (excluding F, because changing F has the same effect as changing Dose), and ksyn is also included to confirm that target synthesis does not affect AFIR. Each row of plots in Figure 3 explores the sensitivity of a different quantity: {Dtot, Ttot, T/T0, AFIR, TFIR}. Each column shows the result of changing one parameter while holding the other parameters fixed; the parameter that is changed and the range over which it is changed are shown at the top of each column. To test the approximations for AFIR and TFIR, a direct calculation of the target inhibition using Eqs. 11 and 14 is compared to a numeric calculation from the model simulation after 2 years of therapy. For siltuximab, there was generally good agreement between the theory and the numeric estimates for AFIR and TFIR. However, there is divergence from the theory in each of the AFIR and TFIR plots when AFIR, TFIR > 30% and the target does not accumulate to Ttot,ss, leading to a lower observed target accumulation ratio (Tacc) than would be observed for a larger dose. In this case, the AFIR_avg calculation described in the Supplementary Material should be used. The inaccuracy of the AFIR equation can be especially pronounced in the limit where the dose and drug concentration approach zero and the theoretical approximations for AFIR approach infinity, even though the true behavior is that AFIR = 1. Although the numerically calculated AFIR generally approaches its theoretical value as AFIR falls below 30%, divergence from the theory is observed for very small koff (see the koff sensitivity plots of AFIR and TFIR), where the quasi-equilibrium assumption is less accurate because the target elimination rate (keT) is larger than koff. In this case, drug-target binding approaches the irreversible-binding approximation, such that further reduction of koff has no additional benefit. In this scenario, the AFIR_IB calculation should be used (see Supplementary Material for details).
Lumped sensitivity analysis
A lumped-parameter sensitivity analysis was performed in Figure 4; the system is reparameterized, replacing the rate constants {kon, ksyn, keT} with the lumped parameters {AFIR, Ttot,ss, T0}. The rate constants for the model are then calculated from the lumped parameters as shown (a numerical sketch of this inversion follows the figure caption below):

ksyn = keDT·Ttot,ss,  keT = keDT·Ttot,ss/T0,  kon = koff·Ttot,ss/(AFIR·Cavg·T0)    (15)

By parameterizing the system in this way, the effect of changing parameters while keeping AFIR fixed can be examined, and minimal impact on the free target dynamics is observed. This can be seen by noting that the AFIR and TFIR plots in Figure 4 are flat. However, there are some exceptions. For large τ (3 months) or large CL (1 L/d), the true AFIR is larger than theoretically predicted, while the TFIR calculation remains accurate. The inaccuracy in the theoretical AFIR prediction is due to large changes in drug concentration over the dosing interval, such that the assumption of a constant drug concentration over the dosing interval leads to inaccuracies.

Figure 3: Basic sensitivity analysis for siltuximab centered about 3 mg/kg dosing every 3 weeks. For each column of plots, the parameter in the title is varied relative to the parameters in Table 1 by either 16-fold (4x lower to 4x higher for CL and τ) or 100-fold (10x lower to 10x higher for all other parameters). Each row represents a different variable of the system. The green dashed lines in the AFIR and TFIR plots show the theoretical calculation compared to the estimate from the numerical simulation (circles).
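The reparameterization is a small algebraic inversion; the sketch below recovers the rate constants from the lumped parameters, with the relations derived directly from T0 = ksyn/keT, Ttot,ss = ksyn/keDT, and AFIR = Kd·Tacc/Cavg (all values are placeholders).

```python
def rates_from_lumped(afir, t_tot_ss, t0, k_eDT, k_off, c_avg):
    """Invert the lumped parameterization {AFIR, Ttot_ss, T0} back to rate constants."""
    k_syn = k_eDT * t_tot_ss              # from Ttot_ss = ksyn / keDT
    k_eT = k_eDT * t_tot_ss / t0          # from T0 = ksyn / keT
    kd = afir * c_avg * t0 / t_tot_ss     # from AFIR = Kd * (Ttot_ss/T0) / Cavg
    k_on = k_off / kd
    return k_syn, k_eT, k_on

print(rates_from_lumped(afir=0.05, t_tot_ss=10.0, t0=0.1,
                        k_eDT=0.1, k_off=1.0, c_avg=500.0))
```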
Although a high CL of 1 L/d is typically not observed for mAbs, infrequent dosing (τ = 3 months) is sometimes prescribed, as is the case for ustekinumab. As in the basic sensitivity analysis, when keT > koff, both AFIR and TFIR are higher than predicted by the theory, and the irreversible-binding approximation should be considered because the quasi-equilibrium approximation declines in accuracy.
Effect of increasing dose on total target and free target
It is instructive to focus on the effect of changing the dose on the total target and free target, as shown in Figure 5a. Notice that above 1 mg/kg, further increases in dose do not have much impact on the total target accumulation.
However, this plateau in total target does not necessarily imply a plateau in free target reduction or efficacy, as demonstrated by the free target curves (T/T0) and as observed elsewhere. 9,10

Identifiability of the dissociation constant and baseline target concentration

In considering the identifiability of the four key parameters governing the target dynamics, {ksyn, keT, keDT, Kd}, it is useful to reparameterize the model as {T0, Ttot,ss, keDT, Kd}: T0 is the target concentration before drug is given; Ttot,ss is the total target concentration after the total target reaches its plateau following a large enough dose; keDT governs the rate at which the total target approaches its plateau (see Eq. 8 and ref. 24); and Kd determines the dose needed for the target to approach its plateau (lower Kd means the total target will approach its plateau at lower doses).

Figure 4 caption (fragment): each parameter is varied relative to the parameters in Table 1 by either 16-fold (4x lower to 4x higher for CL and τ) or 100-fold (10x lower to 10x higher for all other parameters). Each row represents a different variable of the system. The green dashed lines in the AFIR and TFIR plots show the theoretical calculation compared to the estimate from the numerical simulation (circles). Ttot,ss, total target at steady state; Dtot, total drug.
Thus, all four parameters are identifiable as long as enough measurements are taken and the target assay is sufficiently sensitive.
In practice, the assay for measuring total target is often not sensitive enough to detect the baseline target concentration before the drug is given, as was the case for siltuximab. 5,7 Although the steady-state plateau (Ttot,ss) and the time scale for reaching it (keDT) can still be identified, the baseline target level (T0) is no longer identifiable, and this leads to unidentifiability of Kd as well. This can be observed graphically in Figure 5b, which shows the kinetics after a single dose of siltuximab, where both T0 and kon are simultaneously increased while all other parameters are held fixed. The dotted line indicates where a limit of quantification of the total target assay could lie. The sensitivity analysis shows no impact on the profiles for Dtot and T/T0. For Ttot, the blue and gray curves (T0 < 0.3 pM and Kd < 50 pM) are overlapping, indicating that while the quotient Kd/T0 = 160 is identifiable, only an upper bound for T0 and Kd can be identified. AFIR is also identifiable, as can readily be seen because it can be written as AFIR = (Kd/T0)·Ttot,ss/Cavg. Thus, if the goal is to predict AFIR, estimation of the ratio Kd/T0 may be sufficient, and a total target assay that is sensitive enough to measure baseline target levels may not be needed.
Simulation code
All simulations were performed using Matlab R2015a. Code for generating all figures in this article is available in the Supplementary Material.
DISCUSSION
The key insight from this work is that, under many clinically relevant scenarios, AFIR can be estimated using three parameters: the dissociation constant (Kd), the target accumulation (Tacc), and the average drug concentration (Cavg):
AFIR = Kd·Tacc/Cavg
This simple formula provides intuition for how changing the dosing regimen or improving the binding affinity of the drug would be expected to alter target inhibition: doubling the dose, halving the dosing interval, or halving the dissociation constant by using a higher-affinity drug would all reduce the free target concentration by 50%. Although target accumulation plateaus at large doses, the AFIR formula shows that further increasing the dose continues to reduce the free target levels (as illustrated in Figure 5a), which could then potentially lead to greater biomarker inhibition 5,8 and efficacy. 9,10

Figure 5: (a) Sensitivity analysis for siltuximab where dose is varied over 10,000-fold (from 0.01 mg/kg to 100 mg/kg). Note that above 1 mg/kg (gray line), there is a plateau in target accumulation, but the free-target-to-initial-target ratio continues to decline with dose, as predicted by AFIR. (b) Sensitivity analysis for siltuximab where T0 and kon are simultaneously varied over 100-fold such that AFIR stays fixed. Note that the blue and grey curves are almost overlapping above the dotted line (potential limit of quantification), so in the event that the baseline target is not measurable, the parameters T0 and Kd = koff/kon are unidentifiable.
Although target occupancy (the ratio of bound target to total target) provides another metric to assess target engagement, this metric is misleading for soluble targets because it does not account for target accumulation. For example, consider a scenario with 99% target occupancy together with a 100-fold accumulation of target: no reduction in the absolute concentration of free target has been achieved, and thus achieving 99% target occupancy would not be expected to provide clinical benefit. For soluble targets, therefore, AFIR is the preferred metric of target inhibition.
Applications
The AFIR potency metric allows for a rapid assessment of new drugs without requiring extensive simulation. Specifically, the formula AFIR = Kd·Tacc·CL·τ/(F·Dose) allows the drug developer to quantify how a second-generation drug with better PK properties or higher binding affinity could lead to improved target inhibition. Alternatively, AFIR could be used to identify dosing regimens that allow for less frequent dosing, ultimately leading to a reduction in the number of injections, the number of visits to the doctor, and the cost of goods.
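This kind of what-if comparison takes one line per scenario; the sketch below contrasts a hypothetical first-generation drug with a second-generation candidate that has 5-fold better affinity and half the clearance (all numbers invented for illustration).

```python
def afir_full(kd, t_acc, cl, tau, f, dose):
    """AFIR = Kd * Tacc * CL * tau / (F * Dose)."""
    return kd * t_acc * cl * tau / (f * dose)

gen1 = afir_full(kd=0.5, t_acc=200.0, cl=0.25, tau=14.0, f=0.7, dose=20000.0)
gen2 = afir_full(kd=0.1, t_acc=200.0, cl=0.125, tau=28.0, f=0.7, dose=20000.0)
print(f"gen1 AFIR = {gen1:.3f}, gen2 AFIR (5x affinity, half CL, q4w) = {gen2:.3f}")
```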
In designing a phase II dose-finding study, if one has an idea of the level of target inhibition required, AFIR provides a means of identifying the largest dose to be given. When designing preclinical studies, the AFIR metric indicates that it is possible to predict target inhibition without estimating all parameters of the TMDD model. In particular, it is sufficient to have an estimate of the dissociation constant Kd and the expected target accumulation ratio Tacc. The target accumulation can be predicted either by first measuring the baseline target concentration and then measuring the total target after a large dose, or by computing the ratio of estimates of the half-lives of the target and the complex. Furthermore, if one has rich time-course data for the total target dynamics, even if the assay is not sensitive enough to estimate the baseline target concentration T0, it may be reasonable to fix the baseline level at a value from the literature; although this will affect the estimate for Kd, it will not impact the AFIR prediction. The overarching principle is that because the AFIR metric is a lumped parameter, it is not necessary to estimate each individual rate constant of the model to make predictions for target engagement.
Caveats
When interpreting the AFIR metric, one must ensure that the following conditions hold:
1. AFIR < 30%: otherwise the total target has not reached its plateau and AFIR_avg should be used.
2. The drug is in excess of the target: otherwise, there is not enough drug to bind all target molecules and AFIR_QE should be used.
3. koff > keT: otherwise, binding is nearly irreversible and AFIR_IB should be considered.
Each of these terms is derived in the Supplementary Material, although the performance of these metrics has not yet been numerically explored. It is also important to recognize that the TMDD model analyzed here is a simplification and leaves out many physiological processes, such as:
1. The PK nonlinearity that occurs for many membrane-bound targets. 26,27
2. Target synthesis and distribution in both peripheral tissue 16 and the target tissue (e.g., the joint or tumor). 28,29
3. Competition for target binding sites between the drug and the target's endogenous ligand. 30
4. Feedback mechanisms 31 leading to either an increased synthesis of the target in the presence of drug 32 or a decrease in target synthesis or expression. 16,33
5. A drug binding multiple targets (which could also include shed receptors); the current model assumes that all of the target is measured, but for infliximab binding tumor necrosis factor-α, the target exists in both membrane-bound and soluble forms, and a model that only accounts for soluble tumor necrosis factor-α may considerably underestimate Kd. 3
6. This analysis was developed primarily for mAbs and may need to be amended when analyzing other biologics, such as bispecifics.
Implicit in this work is the assumption that the model prediction of free target based on total drug and total target data is accurate, because free target data is usually not available. Thus far, the free target prediction has been validated only for omalizumab. 16 Others tried to test this prediction with siltuximab, but because dosing of siltuximab led to an immediate input of IL-6 into the blood, a more complex model was needed to describe the system, and many possible free target levels and AFIR ratios were consistent with the data. 34 Publication of other systems in which both total and free target are measured would help to validate the assumption that the model prediction for free target is generally accurate.
Furthermore, target engagement is only the first pharmacodynamic step toward achieving efficacy, and thus AFIR must be interpreted with caution. Even though the AFIR equation demonstrates that the free target concentration will continue to decline with increasing dose, there may be a threshold (e.g., 5%) below which further reduction in AFIR no longer leads to improved clinical efficacy. Understanding this process can be aided by the collection of a downstream pharmacodynamic biomarker (e.g., C-reactive protein and neutrophil levels for anti-IL-6 therapy 7,35). Finally, AFIR should not be applied to agonists like TGN1412 16 or blinatumomab, 36 where only small amounts of target binding (AFIR > 90%) are necessary.
CONCLUSION
In summary, the AFIR potency metric has been derived. This metric predicts the target inhibition at steady state under a repeated dosing regimen using three quantities: the average drug concentration at steady state, the dissociation constant, and the target accumulation ratio. AFIR provides intuition for how various physiological properties of the system impact target engagement. In addition, AFIR has been used for rapid assessment of the druggability of new targets and exploration of the binding affinity, half-life, bioavailability, and dosing regimen needed for a second-generation drug to achieve comparable or superior efficacy to a marketed compound.
Author Contributions. A.S. analyzed the data and performed the research. A.S. and R.R. wrote the manuscript.
"year": 2017,
"sha1": "f8d8e26f2f33c6ded1a6ee9593705b58866dadb5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1002/psp4.12169",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8d8e26f2f33c6ded1a6ee9593705b58866dadb5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Effect of the Radial Pressure Gradient on the Secondary Flow Generated in an Annular Turbine Cascade
This paper introduces an investigation of the effect of radial pressure gradient on the secondary flow generated in turbine cascades. Laboratory measurements were performed using an annular sector cascade which allowed the investigation using a relatively small number of blades. The flow was measured upstream and downstream of the cascade using a calibrated five-hole pressure probe. The three-dimensional Reynolds Averaged Navier-Stokes equations were solved to understand the flow physics. Turbulence was modeled using the eddy-viscosity assumption and the two-equation Shear Stress Transport (SST) k-ω model. The results obtained through this study showed that the secondary flow is significantly affected by the pressure gradient along blade span. The experimental measurements and the numerical calculations predicted a passage vortex near the blade hub which was larger and stronger than that predicted near the blade tip. The loss distribution revealed that secondary flow loss was concentrated near the blade hub. It is recommended that attempts at reducing secondary flow in annular cascades should put emphasis on the passage vortex near the hub.
Introduction
Large-scale steam and gas turbines are always used in power generation and industrial applications. Therefore, turbine efficiency and performance are of major concern. The losses in a turbine can be divided into profile loss, secondary flow loss, and tip clearance loss. The profile loss is caused by the growth of the boundary layer on the blades. Secondary flow loss is generated due to the deflection through the blade channel. Tip leakage loss is induced due to the pressure difference between the blade pressure side and the blade suction side when a tip clearance gap exists. There are many factors which influence turbine losses. The pressure gradient, turbulence level, blade geometry, incoming velocity, and inlet boundary layer thickness represent important parameters affecting turbine efficiency.
It is practically very difficult to perform detailed flow field measurements in an engine at operating conditions. Understanding the physics that governs the flow and the associated turbine cascade losses has been obtained through wind tunnel experiments. These laboratory tests not only allow detailed flow field measurements but also give the experimenter the possibility to investigate the effect of several parameters separately.
Experimental studies using linear turbine cascades introduce the aspect of flow periodicity by arranging a number of blades of constant cross-sections separated by a constant pitch. Linear cascade experiments provide several advantages such as geometric simplicity, simple adjustment, large blade sections, and easy changes of incidence angle. They are used as a tool to provide quasi-three-dimensional blade-to-blade data for the simulation of the flow. Linear cascades have been used extensively and have succeeded in providing a better understanding of the physics involved. They simplified the problem and allowed the usage of large-scale cascades to provide better detailed measurements in different regions of the turbine. Linear turbine cascades have been used very extensively for basic investigations of secondary flows through turbine cascades [1-8].
The reliability of the experimental data is improved by using numerical calculations for the interpretation of the data. Linear turbine cascade measurements are also commonly used in defining the proper boundary conditions for numerical calculations, in the selection of appropriate turbulence models, and in the validation of numerical techniques.
Although the experimental data of the secondary flows obtained in linear cascades are very valuable for numerical validation, they cannot be used directly for calculating turbine flows, where the radial static pressure gradient field plays a particularly important role with respect to the spanwise distribution of losses and outlet flow angles. Figure 1 shows proposed static pressure gradients through linear and annular cascades. In linear cascades, the static pressure changes along the blade passage, producing a pressure gradient from the blade pressure side to the blade suction side. This gradient induces secondary flow which is symmetrical about blade midspan in linear cascades. In actual turbines, the cascades are annular with a pressure gradient along the blade passage from the pressure side to the suction side and a pressure gradient along blade span caused by the curvature of the endwalls at the hub and at the casing. As a consequence, the secondary flow in actual machines is not symmetrical about blade midspan, and annular cascade measurements are required to represent the actual machines.
The annular turbine cascade consists of an annular space between two concentric cylinders which contains a turbine blade row. The main advantage of annular cascades over linear cascades is the possibility of simulating the radial static pressure gradient and, therefore, simulating turbine flow conditions more closely than the linear cascade. However, the increased number of blades in annular cascades makes the blades have a smaller scale than that of linear cascades, causing increased probe blockage effects and higher measurement errors due to stronger pressure gradients. Recently, El-Batsh and Bassily Hanna [9] obtained measurements using an annular cascade with a tip clearance gap and relatively short blades with an aspect ratio of 0.8, at a Reynolds number of 9 × 10⁴. Matsunuma [10] used an annular cascade with a blade span of 75 mm and a blade aspect ratio of about 1 with flow Reynolds numbers from 4 × 10⁴ to 26 × 10⁴. A very comprehensive review of advanced techniques applicable to both linear and annular cascade testing has been published by Hirsch [11].
Annular large-scale sector cascades are used as a compromise between the advantages of linear and annular cascades, simulating properly the radial pressure gradient field and increasing the blade size, which enables accurate measurements for laboratory investigation using a relatively low number of blades. In addition, blades with large aspect ratios, including blade twist and changes of section area along blade span, can be examined using annular sector cascades. Furthermore, existing engine parts can be examined, and the blade profiles may be obtained from the engine. The established radial pressure gradients ensure that secondary flows develop as they would exist in an operating engine. Model and testing costs for annular sector cascades are considerably lower than those for their fully annular equivalents.
This study aims to investigate the effect of radial pressure gradient on the three-dimensional flow through radial turbine cascades without tip clearance. The study also aims to predict the secondary flow generated in real turbines. This is achieved by using an annular turbine cascade sector with large-scale turbine blades. The effects of the radial pressure gradient on the secondary flow and loss mechanism through the cascade are examined. To interpret the data and to understand the flow physics, numerical calculations are performed as well.
Experimental Setup
Limited information is available concerning experiment design using annular sector cascades. Vogt and Fransson [12] successfully developed an annular sector facility for studies of the aeromechanical phenomena in axial flow turbomachines. Reducing the annular cascade to a sector cascade allowed them to maximize the size of the test object. Povey et al. [13] conducted a direct aerodynamic comparison of the flow in an annular cascade facility and in an annular sector facility of five vane passages. They obtained excellent periodicity across most of the sector. The design of the annular sector cascade presented in the present study is based on the guidelines of Povey et al. [13].
Annular Sector Turbine Cascade

The annular sector cascade constructed in this study was equipped with 5 blades representing exact replicas of the blade profile of the first stage rotor of a General Electric gas turbine engine used in electric power generation. The rotor contains 92 blades and has hub and tip diameters of 1946 mm and 2366 mm, respectively. The blade profile is tapered to represent an actual turbine blade with different cross-sections along blade span. A radial pressure gradient exists in the rotor and in the stator. In the stator, flow deflection caused by the hub and the casing produces a radial pressure gradient which changes the secondary flow pattern between blade tip and blade root. In the rotor, the problem is more complicated due to the rotation of the blades and the associated centrifugal force. This effect makes the design usually based on radial equilibrium. However, experimental measurements for the rotor during rotation are rather complicated and in most cases are not possible if the actual speed of rotation has to be considered. Therefore, the measurements are performed in the present study while the blades are fixed. This gives the flow field in the absence of the centrifugal forces. The measurements also allow numerical model verification. A further study will be performed which allows blade rotation using the adopted numerical technique, which would simulate the actual case in the machine with the same rotating speed. Blade coordinates were obtained using a scanning system of the kind used in reverse engineering. It provided the blade coordinates over eight cross-sections along blade span. Figure 2 shows the blade profile at the tip and the three-dimensional blade profile, and Table 1 summarizes the cascade parameters.
The annular sector cascade was constructed from two concentric cylinder sectors representing the hub and the casing. The diameters of the cylinders are the hub and tip diameters as given in the real machine. The measurements were obtained upstream and downstream of the middle blade, which was obtained from the real gas turbine engine and was not manufactured. The other blades were manufactured by casting and finished to obtain smooth surfaces. The blades were fixed to the hub and the casing, and, therefore, there was no tip clearance gap. The angle between two adjacent blades is 360/92 = 3.913°.
The annular sector cascade (Figure 3) was examined using a low-speed wind tunnel which is a blow-down facility. The wind tunnel is equipped with a centrifugal fan driven by a 10 HP electric motor. Inlet air velocity was adjusted by using a throttling system at the fan inlet. The distance between the annular sector cascade and the fan is about 4.5 m. In order to obtain uniform flow at the inlet to the test section, three grids were used at distances of 0.89 m, 1.41 m, and 1.93 m.
The upstream flow measurements were obtained at a distance of 105 mm upstream of the blade leading edge while the downstream measurements were obtained at a distance of 52.5 mm downstream of the blade trailing edge.
The flow was measured downstream of the annular sector cascade using a mesh with 8 equally spaced points in the pitchwise direction and 25 points in the spanwise direction.
A traverse mechanism was used to allow probe rotation in the pitchwise direction and translation in the spanwise direction.
The upstream and the downstream measurement meshes were vertical. The probe was moved with a step of 5 mm in the region close to the hub and to the casing and 10 mm elsewhere. Figure 4 shows the measurement mesh with a total number of 200 grid points. Five-hole probes can be used to measure total and static pressures as well as gas velocity and flow angles. This is achieved by extensive calibration of the probe to cover all of the expected flow conditions that will be encountered by the probe during wind tunnel measurements. A five-hole probe has five pressure ports which are distributed on a conical tip. The five-hole probe used in this study has an L-shape with a tip diameter of 3 mm, as shown in Figure 5(a). One port is located at the apex of the cone, and the other ports 2-5 are uniformly distributed around the central port. This probe geometry provides the capability of making accurate measurements for flow angles of up to ±30° if the probe has been properly calibrated within those limits. The three-dimensional velocity vector that is measured with this probe is expressed in the probe axis coordinate system shown in Figure 5(b). Typically, the incidence angle of the flow with respect to the probe tip is defined using the pitch angle α and the yaw angle β. Hence, the velocity components in the probe axis coordinate system may be expressed using these angles as well as the velocity magnitude.
Calibration Procedure.
Probe calibration was performed by placing the probe into a calibration rig with uniform, one-dimensional flow. The total pressure (P_t), static pressure (P_s), and the orientation of the probe tip with respect to the flow field direction, represented by the pitch and yaw angles, are known. The pressures at the five ports of the probe tip (p_1, p_2, p_3, p_4, p_5) were measured. The probe was rotated in the pitch and yaw directions using a traversing mechanism to allow measuring the pressure signals obtained from the five holes. The probe was rotated through a set of pitch and yaw angle combinations that cover the range of incidence angles that the probe is expected to encounter during the wind tunnel measurements. The calibration was performed in the range for pitch and yaw angles from −30° to +30°. The pressure signals obtained from the probe were measured using digital micromanometers with a full scale of 3700 Pa and an accuracy of 0.1%. The pressure data were expressed as a group of non-dimensional coefficients as follows.
The pitch angle coefficient k_α, the yaw angle coefficient k_β, the total pressure coefficient k_t, and the static pressure coefficient k_s were each formed from the five port pressures and the known calibration conditions.

2.4. Measurement Procedure. During wind tunnel measurements, the probe was inserted into the unknown flow field, and the pressures at the tip of the probe (p_1, p_2, p_3, p_4, p_5) were measured for each test point. The five pressures were used to calculate the pitch and yaw angle coefficients (k_α and k_β). These values were used to obtain the flow angles α and β using the calibration map of Figure 6. The pitch angle and yaw angle were then used in combination with the calibration maps (Figures 7 and 8) to obtain the total and static pressure coefficients (k_t, k_s). Rearranging the equations defining the total and static pressure coefficients, the velocity magnitude is given by

V = sqrt(2(P_t − P_s)/ρ),

and the velocity components were obtained from the velocity magnitude and the pitch and yaw angles.

The uncertainties of the calculated results were estimated on the basis of the uncertainties in the primary measured values. The result R is a given function of the independent variables x_1, x_2, x_3, ..., x_n. Let w_R be the uncertainty of the result and w_1, w_2, w_3, ..., w_n be the uncertainties in the independent variables. If the uncertainties in the independent variables are all given, then the uncertainty in the result is given by

w_R = [Σ_{i=1}^{n} ((∂R/∂x_i) w_i)²]^{1/2}.

From the experimental measurements, the uncertainty in the velocity, flow angle, and total pressure was found to be within 3%, 5%, and 9%, respectively.
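For illustration, the sketch below assembles the probe data reduction and the uncertainty propagation described above. Because the paper's own coefficient equations did not survive extraction, the k_α and k_β definitions shown are the common textbook forms for a conical five-hole probe and should be treated as assumptions; the root-sum-square propagation follows the w_R expression given above.

```python
import numpy as np

# Sketch of the five-hole probe data reduction and uncertainty propagation.
# The coefficient definitions below are common textbook forms (conventions
# differ between laboratories) and are assumptions, not the paper's equations.

def probe_coefficients(p1, p2, p3, p4, p5):
    """Pitch and yaw angle coefficients from the five port pressures."""
    p_bar = (p2 + p3 + p4 + p5) / 4.0   # mean of the peripheral ports
    q = p1 - p_bar                      # pseudo-dynamic pressure
    k_alpha = (p4 - p5) / q             # pitch angle coefficient (assumed form)
    k_beta = (p2 - p3) / q              # yaw angle coefficient (assumed form)
    return k_alpha, k_beta

def velocity_magnitude(p_t, p_s, rho=1.2):
    """Velocity (m/s) from total and static pressures (Pa) and density."""
    return np.sqrt(2.0 * (p_t - p_s) / rho)

def propagate_uncertainty(grad_R, w):
    """Root-sum-square uncertainty w_R for a result R(x1..xn).
    grad_R: partial derivatives dR/dx_i; w: uncertainties w_i."""
    return float(np.sqrt(np.sum((np.asarray(grad_R) * np.asarray(w)) ** 2)))
```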
Numerical Predictions
The three-dimensional flow through the cascade was obtained by solving the flow-governing equations. In all calculations performed here, the Mach number was small, and therefore the flow was considered incompressible; as a consequence, solving the energy equation was not required. Since the flow through turbine cascades is almost always turbulent, an appropriate turbulence model was required. The selected turbulence model should be able to predict the losses with reasonable accuracy. A commercial CFD code was employed to solve the flow-governing equations.

3.1. Governing Equations. Fluid flow characteristics are described by the conservation of mass (continuity equation) and momentum (Navier-Stokes equations). For turbulent flows, the Reynolds averaging procedure is commonly used, and the governing equations are called the Reynolds Averaged Navier-Stokes (RANS) equations. For incompressible turbulent flow, neglecting external forces, they are given by [16]

∂ū_i/∂x_i = 0,
ρ(∂ū_i/∂t + ū_j ∂ū_i/∂x_j) = −∂p̄/∂x_i + ∂/∂x_j (μ ∂ū_i/∂x_j − ρ \overline{u_i' u_j'}),

where the velocities ū_i are mean values, u_i' are the fluctuating values, and −ρ \overline{u_i' u_j'} are the Reynolds stresses, which are calculated using eddy-viscosity turbulence models as

−ρ \overline{u_i' u_j'} = μ_t (∂ū_i/∂x_j + ∂ū_j/∂x_i) − (2/3) ρ k δ_ij.

The eddy or turbulent viscosity μ_t was calculated in this study using the Shear Stress Transport (SST) k-ω model; δ_ij is the Kronecker second-order tensor, given by δ_ij = 1 for i = j and δ_ij = 0 for i ≠ j.

3.2. The SST k-ω Model. Bardina et al. [17] discussed the performance of different turbulence models. They found that the SST k-ω model can predict flows with strong adverse pressure gradients and separation. The SST k-ω model was also recently used successfully to predict the three-dimensional complex flow through axial flow turbine blades [9,18].
The SST k-ω model is an empirical model based on model transport equations for the turbulence kinetic energy k and the specific dissipation rate ω. The eddy or turbulent viscosity is calculated as

μ_t = ρ a_1 k / max(a_1 ω, Ω F_2).

In turbulent boundary layers, the maximum value of the eddy viscosity is limited by forcing the turbulent shear stress to be bounded by the turbulent kinetic energy times a_1. This effect is achieved with the auxiliary function F_2 and the absolute value of the vorticity Ω. The auxiliary function F_2 is defined as a function of the wall distance y as

F_2 = tanh(arg_2²), with arg_2 = max(2√k/(0.09 ω y), 500ν/(y² ω)).

The transport equations, as developed by Menter [19] and presented by Bardina et al. [17], are

∂(ρk)/∂t + ∂(ρū_j k)/∂x_j = P_k − β* ρ ω k + ∂/∂x_j[(μ + σ_k μ_t) ∂k/∂x_j],
∂(ρω)/∂t + ∂(ρū_j ω)/∂x_j = P_ω − β ρ ω² + ∂/∂x_j[(μ + σ_ω μ_t) ∂ω/∂x_j] + 2(1 − F_1) ρ σ_ω2 (1/ω)(∂k/∂x_j)(∂ω/∂x_j),

where P_k and P_ω are the production terms of k and ω, and the blending function F_1 switches between the k-ω formulation near walls and the k-ε formulation away from them.
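A minimal sketch of the eddy-viscosity limiter and the auxiliary function reconstructed above is given below; Menter's standard constant a_1 = 0.31 is assumed, and the field arrays (k, ω, vorticity magnitude, wall distance) would come from the flow solver at each cell.

```python
import numpy as np

# Sketch of the SST eddy-viscosity limiter. The constant a1 = 0.31 is
# Menter's standard value and is assumed here; inputs are solver fields.

def sst_eddy_viscosity(rho, k, omega, vorticity, f2, a1=0.31):
    """mu_t = rho*a1*k / max(a1*omega, |Omega|*F2): bounds the turbulent
    shear stress by a1*rho*k in adverse-pressure-gradient boundary layers."""
    return rho * a1 * k / np.maximum(a1 * omega, vorticity * f2)

def f2_blend(k, omega, y, nu):
    """Auxiliary function F2 as a function of the wall distance y
    (standard form, assumed rather than quoted from the paper)."""
    arg2 = np.maximum(2.0 * np.sqrt(k) / (0.09 * omega * y),
                      500.0 * nu / (y ** 2 * omega))
    return np.tanh(arg2 ** 2)
```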
Numerical Methodology. A control-volume technique was employed to convert the differential equations to algebraic equations which can be solved numerically. The upwind scheme was used to represent the convection terms of the governing equations. The semi-implicit method for pressure-linked equations (SIMPLE) was used to solve the discretized equations. A commercial CFD code was employed here to solve the equations. All computations were carried out using a personal computer with a single Intel i5 processor with a frequency of 2.4 GHz. In the annular cascade, there is no symmetry plane at midspan such as exists in linear cascades. Fixed wall boundaries were considered at the blade surface and at the hub and at the casing. A two-dimensional grid was generated at the hub and was copied in the spanwise direction to form the three-dimensional grid. The grid has a total number of about 996000 grid points. The grid size was based on the grid-independent results obtained by Hildebrandt and Fottner [20] with a reasonably fine mesh of 303000 grid points for the solution in a linear cascade with nearly the same aspect ratio using a half-span simulation. The grid was generated using multiblock topology, which leads to a numerical grid of high quality expressed in terms of orthogonality and smoothness. These are essential prerequisites for obtaining accurate results in regions that are dominated by viscous effects. Special consideration was paid during grid generation to obtain a dense mesh near the blade and near the hub and tip walls. This was required to resolve the boundary layer near the wall and to satisfy the requirements of the solution technique near the fixed walls. Figure 9 shows the computational grid.
Boundary Conditions.
In the present study, the inlet velocity boundary condition was defined at the inlet, while the outlet boundary condition was considered at the exit. Since the inlet velocity profile is important for the development of the secondary flow field, the inlet velocity was measured at an axial-chord distance upstream of the cascade. The velocity was measured by using the calibrated five-hole probe along a line from the hub to the tip using the same spanwise distances given in Figure 4. Figure 10 shows the inlet velocity distribution used at the inlet to the numerical calculations. Matsunuma [10] studied the effect of free-stream turbulence on the flow with and without a tip clearance gap. He found that the turbulence level does not significantly affect the flow field and loss behavior either with or without a tip clearance gap. Inlet turbulence parameters, namely the turbulence intensity and length scale, were estimated according to the general guidelines for CFD calculations [21]. The inlet turbulence level was estimated in the calculations performed here as 1%, while the inlet turbulence length scale was estimated based on the blade span h as 0.07h.
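For reference, the sketch below converts the stated inlet settings (turbulence intensity 1%, length scale 0.07h, with the blade span h = (2366 − 1946)/2 = 210 mm implied by the quoted hub and tip diameters) into the k and ω values a solver requires. The conversion relations are the usual CFD guideline formulas, assumed here rather than quoted from the paper; the inlet velocity value is illustrative.

```python
import numpy as np

# Sketch of converting turbulence intensity and length scale into inlet
# k and omega. The relations k = 3/2*(U*I)^2 and omega = sqrt(k)/(Cmu^0.25*l)
# are standard guideline formulas; the inlet velocity below is illustrative.

def inlet_k_omega(u_inlet, intensity=0.01, length_scale=0.07 * 0.210):
    """Turbulence kinetic energy (m^2/s^2) and specific dissipation rate
    (1/s) at the inlet. Default length scale uses h = 0.210 m blade span."""
    k = 1.5 * (u_inlet * intensity) ** 2
    c_mu = 0.09
    omega = np.sqrt(k) / (c_mu ** 0.25 * length_scale)
    return k, omega

k_in, omega_in = inlet_k_omega(u_inlet=30.0)
print(f"k = {k_in:.4f} m^2/s^2, omega = {omega_in:.2f} 1/s")
```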
Results and Discussion
The results presented here can be divided into three groups. Firstly, the velocity distribution through the cascade was examined. Then the static pressure was investigated because it affects the secondary flow through the cascade. Finally, the loss distribution was examined, and the locations of high losses are distinguished.
Velocity Distribution.
Figure 11 shows contour plots for the dimensionless velocity calculated at three different locations along blade span. These locations are at 10% blade span near the hub, at midspan, and at 90% blade span near the casing. The dimensionless velocity is defined as the local velocity normalized by the mass-averaged exit velocity. The figure indicates that at the three studied locations, the flow accelerates through the blade channel, reaching the maximum velocity at the throat. On the blade suction surface, the flow accelerates to the maximum velocity and then decelerates. However, on the blade pressure surface, the flow accelerates from the blade leading edge to the blade trailing edge.
Figure 12 shows the comparison between the experimentally measured and the numerically calculated velocity on the downstream plane at blade midspan. Relatively good agreement was obtained between measurements and calculations. Figure 13 shows contour plots for the dimensionless velocity downstream of the cascade. The velocity level for the passage vortex close to the hub is smaller than the velocity level for the passage vortex that is close to the casing. This was observed both in the numerical calculations and in the experimental measurements.
Static Pressure

The static pressure was examined through the blade channel using the static pressure coefficient, which was defined as

C_P = (P − P_1) / (0.5 ρ V̄_2²),

where P_1 is the inlet static pressure, P is the local static pressure, ρ is the air density, and V̄_2 is the mass-averaged exit velocity. Figure 14 shows contour plots for the distribution of the static pressure coefficient through the blade channel at different levels along blade span. Generally, the figure shows a nonuniform pressure distribution at the planes considered here. The figure shows that at all levels, the pressure increases at the blade pressure side and reduces at the blade suction side. The pressure calculated at the blade pressure side near the tip region at 90% span is higher than the pressure calculated at blade midspan and at blade root. Figure 15 shows the pressure distribution on the blade surface at two locations, at 0.1 and 0.9 blade span. Note that the blade axial chord at the tip is smaller than the blade axial chord at the hub because the studied blade is tapered. The figure indicates that at the same distance along the axial chord, the static pressure on the blade pressure surface is higher at the blade tip than at the blade root. The separation near the hub was caused by the constant flow angle at the inlet and the strong flow turning from inlet to exit. Figure 16(a) shows a contour plot for the static pressure distribution through the blade channel at a distance of mid-axial chord (x/C_x = 0.5) measured from the blade leading edge. The figure shows the variation of the static pressure along blade span. The maximum pressure was predicted at the casing on the blade pressure surface.
Flow Angle

The flow was investigated downstream of the cascade using the deviation angle, defined as

δ = β_2 − β_2′,

where β_2 is the exit flow angle and β_2′ is the exit blade angle, both measured from the tangential direction.
Figure 17 shows the pitchwise mass-averaged exit deviation angle along blade span. The deviation between measurements and calculations is about 1-2°, which could be attributed to the initial setting of the zero position of the probe during calibration and during cascade measurements. Nevertheless, both the experimental measurements and the numerical calculations give the same trend. At blade midspan, a positive deviation angle was obtained, which indicates that the flow angle is larger than the blade angle and the flow deflects from the blade pressure side to the blade suction side. This is caused by the pressure difference between the blade surfaces. The figure also shows that the deviation angle increases further in the region above blade midspan. This is caused by the increase in pressure difference between the blade surfaces with blade span, as demonstrated in Figure 16. The deviation angle significantly decreased due to the passage vortex near the hub and near the casing. The effect near the hub is more significant, which indicates that the passage vortex near the hub is stronger than that near the casing and the pressure difference between the blade surfaces is reduced.
Cascade Losses.
The losses through the turbine cascade were examined by using the total pressure loss coefficient, which was defined as

C_Pt = (P_t1 − P_t) / (0.5 ρ V̄_2²),

where P_t1 is the inlet total pressure and P_t is the local total pressure. Figure 18 shows contour plots for the calculated total pressure loss coefficient C_Pt around the blade in the three studied planes through blade span. At blade midspan, the losses start to increase on the blade suction surface downstream of the location of maximum velocity. Decreasing the flow velocity increases the adverse pressure gradient and increases the boundary layer thickness. The losses at the other locations are not identical, indicating that the losses were not symmetrical about blade midspan. The losses were concentrated near the hub, while the loss level near the blade casing was smaller.
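A sketch of evaluating this coefficient on the 8 × 25 downstream measurement mesh and mass-averaging it in the pitchwise direction is given below; the normalization follows the reconstructed definition above, and the mass-flux weighting is the usual choice for such averages rather than a detail taken from the paper.

```python
import numpy as np

# Sketch of the total pressure loss coefficient and its pitchwise
# mass-average. Array shapes follow the 8 x 25 measurement mesh; the
# mass-flux weighting is the conventional choice, assumed here.

def loss_coefficient(p_t, p_t1, rho, v2_bar):
    """C_Pt = (P_t1 - P_t) / (0.5 * rho * V2_bar^2)."""
    return (p_t1 - p_t) / (0.5 * rho * v2_bar ** 2)

def pitchwise_mass_average(c_pt, rho, v_axial, pitch_weights):
    """Mass-flux-weighted average over the pitch at each spanwise station.
    c_pt, v_axial: arrays shaped (n_pitch, n_span)."""
    flux = rho * v_axial * pitch_weights[:, None]  # local mass-flux weights
    return np.sum(c_pt * flux, axis=0) / np.sum(flux, axis=0)
```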
Figure 19 shows the comparison between the experimentally measured and the numerically calculated total pressure loss coefficient at midspan on the downstream plane and shows relatively good agreement. Figure 20 shows contour plots for the experimental measurements and the numerical calculations of the total pressure loss coefficient on the downstream plane. Good agreement was obtained between calculations and measurements. Two passage vortices were measured and predicted. The first passage vortex was predicted near the casing, while the second passage vortex was predicted near the hub. The passage vortex near the hub was, in both the numerical calculations and the measurements, larger and of higher loss level than that near the casing.
Figure 21 shows the pitchwise mass-averaged total pressure loss coefficient through blade span. Very good agreement was obtained between the experimental measurements and the numerical calculations. The discrepancy near the endwall was caused by the blockage effect of the probe during the experimental measurements and the high velocity gradient in the boundary layer. The figure indicates high losses near the hub with a reduced loss level near the casing. The passage vortex increased the loss level at 10% and 90% blade span. The measured and the calculated mass-averaged losses were higher near the hub than near the casing.
In order to verify the grid-independence of the numerical solution, a denser mesh was generated with the grid size increased by 60% to obtain a mesh with 1593000 grid points. The cells were mainly added in the high-gradient flow regions. Figure 22 shows the results of the grid-independence analysis by comparing the spanwise distribution of the total pressure loss coefficient using the two meshes. The figure indicates that there is no significant improvement from increasing the grid size, and the solution obtained is grid independent. The uncertainty of the assumption of the inlet turbulence intensity was also examined by repeating the calculations assuming an inlet turbulence intensity of 5%. Figure 23 shows the comparison between the numerical results obtained assuming the different inlet turbulence levels. The results confirm the findings of Matsunuma [10] that secondary flow losses are independent of the inlet turbulence intensity. Figure 24 shows the secondary flow velocity vectors on the downstream plane. The figure indicates the secondary flow movement from the pressure side to the suction side near the endwalls and from the suction side to the pressure side at midspan.
Conclusions
This study introduced experimental measurements using an annular sector cascade, verified by numerical calculations, for the effect of the radial pressure gradient on the three-dimensional flow in turbine cascades. The results presented here showed that the numerical calculations and the experimental measurements provided the same flow features. The main conclusions obtained from the present study are as follows.
(a) The annular sector cascade with five blades provided reasonable results and could be used instead of a fully annular cascade to reduce experiment cost and to obtain laboratory measurements for large-scale blade profiles. Reducing the annular cascade to a sector rather than a full annulus reduces the required mass flow rate considerably whilst maximizing the size of the test object.
(b) The secondary flow near the hub is stronger than that near the shroud. For the blade aspect ratio considered in this study, the secondary flow was found to move across blade midspan.
(c) The measured and the calculated total pressure loss coefficient indicated that the losses were concentrated near the hub, and therefore attempts to reduce the secondary loss should emphasize the losses near the hub.
(d) The increase in pressure difference between the blade pressure and suction surfaces with blade height increases flow deflection through blade height. Therefore, the flow is underturned near the hub and overturned near the shroud.
The radial pressure gradient would also affect the case with a tip clearance gap, and the leakage flow induced from the blade pressure surface to the blade suction surface will be different from that predicted in a linear cascade. In addition, the interaction between the passage vortex and the tip leakage vortex would be different from that obtained in linear cascades. Therefore, it is recommended for future work to extend the present study to the case with a clearance gap.
Figure 1: Pressure gradient in linear and in annular cascades.
Figure 5: L-shaped five-hole probe with a conical tip; (a) photo of the five-hole probe.
Figure 6: Calibration map for the yaw and pitch dimensionless coefficients.
Figure 11: Dimensionless velocity in blade channel at different locations along blade span.
Figure 13: Contour plots for the dimensionless velocity downstream of the cascade.
Figure 14: Static pressure coefficient through blade channel at different locations.
Figure 18: Total pressure loss coefficient through blade channel at different locations.
Figure 20: Total pressure loss coefficient downstream of the cascade.
Figure 24: Secondary flow predicted downstream of the cascade.
"year": 2012,
"sha1": "82692862449e15548eab532c49cc68f70833c03c",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijrm/2012/509209.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "82692862449e15548eab532c49cc68f70833c03c",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Re-evaluation of the Latent Structure of Common Childhood Disorders: Is There a General Psychopathology Factor (P-Factor)?
In the field of psychopathology, there is high comorbidity between different disorders. Traditionally, support for two broad correlated dimensions of internalizing and externalizing symptoms has consistently emerged for children and adolescents. To date, oblique two- and three-factor first-order models (factors for externalizing and internalizing, and for fear, distress, and externalizing) and bi-factor models with the corresponding two and three group factors have been suggested for common internalizing and externalizing child and adolescent disorders. The present study used confirmatory factor analyses to examine the relative support for these models in adolescents (≥ 12 to 18 years; N = 866) and children (6 to < 12 years; N = 1233) and the reliability and convergent and divergent validities of the psychopathology factor (P-factor) and group factors in the optimum bi-factor model. All participants were from a clinic and underwent Diagnostic and Statistical Manual of Mental Disorders, 4th Edition clinical diagnosis. The findings showed that the bi-factor model with two group factors (internalizing and externalizing) was the optimum model for both children and adolescents. For both groups, findings showed relatively higher reliability for the P-factor than the group factors, although the externalizing group factor showed substantial reliability in adolescents, and both the externalizing and internalizing group factors also showed substantial reliability in children. The factors of the optimum bi-factor model also showed good convergent and discriminant validities. The implications for theory and clinical and research practice related to psychopathology are discussed.
A robust finding for psychopathology (viewed either categorically or dimensionally) is the high comorbidity/co-occurrence across the different disorders and syndromes (Angold et al. 1999; Angold and Costello 2009; Krueger 1999; Krueger and Markon 2006). Traditionally, and based on factor analysis studies, support for two broad correlated dimensions of internalizing and externalizing symptoms has emerged consistently for children and adolescents (Achenbach 1966; Achenbach and Edelbrock 1978; Lahey et al. 2015; Patalay et al. 2015; Tackett et al. 2013). The internalizing dimension includes behaviors that have the propensity to express distress inwards, such as mood and anxiety disorders. In children and adolescents, the externalizing dimension includes behaviors that have the propensity to express distress outwards (Krueger 1999; Krueger and Finger 2001), such as the symptoms in attention deficit hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), and conduct disorder (CD). Other researchers have proposed that the broad dimension of internalizing is better considered to comprise separate, but correlated, dimensions for fear-related behaviors/disorders or syndromes and distress-related behaviors/disorders or syndromes (Slade and Watson 2006; Vollebergh et al. 2001). Thus, psychopathology is viewed in this model in terms of three correlated broad dimensions: fear, distress, and externalizing (Martel et al. 2017; Lahey et al. 2012). More recently, other broad dimensions have been suggested, such as psychotic symptoms (Stochl et al. 2015), thought problems (Carragher et al. 2016; Caspi et al. 2014; Laceulle et al. 2015), and autism spectrum-related problems (Noordhof et al. 2015). Therefore, overall, at least two to four correlated broad dimensions have been proposed for psychopathology.
Independent of the different numbers of broad dimensions involved, a robust finding in these studies is that the proposed dimensions are consistently moderately to highly correlated with each other (Achenbach and Rescorla 2001; Angold and Costello 2009; Angold et al. 1999; Krueger and Markon 2006). For example, among children and adolescents, the correlations between internalizing and externalizing factors appear to range between 0.40 and 0.60 (Achenbach and Rescorla 2001). Similar correlations have been found for fear, distress, and externalizing factors (Angold and Costello 2009; Angold et al. 1999) and for the intercorrelations between internalizing, externalizing, and thought disorder problems (Lahey et al. 2004; Wright et al. 2013).
The high inter-correlations between the various broad psychopathology dimensions raise the possibility that an even broader overall psychopathology dimension could exist that potentially explains the co-variances found between them. Since 2012, a growing number of studies have examined this possibility using a type of confirmatory factor analysis (CFA) model called the bi-factor model (Reise 2012). A conventional bi-factor model is an orthogonal first-order factor model with a general factor on which all items (usually) load, along with separate group factors for the different dimensions, after removing the variances accounted for by the general factor. In such a model, the general factor captures the covariance across all the items, and the group factors capture the unique covariance of the items within the relevant dimensions, after accounting for their variance due to the general factor (Reise 2012). Thus, when applied to a set of psychopathology constructs, the general factor or, more specifically, the general psychopathology factor (or P-factor; Caspi et al. 2014) reflects the covariance across all the items (be they classified as problem behaviors, symptoms, disorders, and/or syndromes) forming the broad dimensions (such as internalizing symptoms/disorders), and the group factors reflect the unique covariance of the broad dimensions, after accounting for the variance allocated to the general P-factor.
For a bi-factor model or a higher-order factor model, the appropriate internal consistency reliability indices for the general factor and the group factors are omega hierarchical (ω_h) and omega subscale (ω_t), respectively (Brunner et al. 2012; Zinbarg et al. 2005). The values for ω_h and ω_t range from 0 to 1, with 0 indicating no reliability and 1 reflecting perfect reliability. According to Reise et al. (2013a, b), ω_h and ω_t values of at least 0.75 are preferred for meaningful interpretation of a scale. A number of other indices have also been proposed that could enable a more sophisticated and accurate interpretation of the dimensionality of the general factor (or P-factor) in the bi-factor model. These include the explained common variance (ECV; Reise et al. 2013a, b), the percentage of uncontaminated correlations (PUC; Bonifay et al. 2015), and the index of construct reliability (H; Hancock and Mueller 2001).
To date, bi-factor models of psychopathology have been examined for adults (Caspi et al. 2014; Lahey et al. 2012; Stochl et al. 2015), adolescents (Carragher et al. 2016; Castellanos-Ryan et al. 2016; Laceulle et al. 2015; Noordhof et al. 2015; Patalay et al. 2015), children (Lahey et al. 2015; Martel et al. 2017), and adolescents and children together (Tackett et al. 2013). Given the focus of the present study, past studies involving children and adolescents (Carragher et al. 2015; Castellanos-Ryan et al. 2016; Laceulle et al. 2015; Lahey et al. 2015; Martel et al. 2017; Noordhof et al. 2015; Patalay et al. 2015; Tackett et al. 2013) are particularly relevant. A robust finding in past studies of the bi-factor model of psychopathology in children and adolescents is the support for the bi-factor model (Carragher et al. 2015; Castellanos-Ryan et al. 2016; Laceulle et al. 2015; Lahey et al. 2015; Martel et al. 2017; Noordhof et al. 2015; Patalay et al. 2015; Tackett et al. 2013). Independent of whether these studies included two group factors (Castellanos-Ryan et al. 2016; Laceulle et al. 2015; Lahey et al. 2015; Patalay et al. 2015; Tackett et al. 2013), three group factors (Carragher et al. 2015; Martel et al. 2017), or four group factors (Noordhof et al. 2015), they have consistently found that the bi-factor model fitted better than the corresponding two-, three-, or four-factor first-order oblique models (Carragher et al. 2015; Laceulle et al. 2015; Lahey et al. 2015; Martel et al. 2017; Patalay et al. 2015). Additionally, the P-factor in the bi-factor model has shown acceptable external validity. As examples, Martel et al. (2017) reported that the P-factor (but not the group factors) was significantly associated with global executive functioning, and Patalay et al. (2015) reported that the P-factor best predicted future psychopathology and academic attainment. To date, only two studies have examined reliability for the bi-factor model (i.e., Martel et al. 2017; Murray et al. 2016), and the findings have been mixed. Based on the criterion of ω_h and ω_t values of at least 0.75 for meaningful interpretation of a scale, Martel et al. (2017) reported acceptable reliability for the P-factor (ω_h = 0.898). In contrast, Murray et al. (2016) reported unacceptable reliability for the P-factor (ω_h values ranging from 0.53 to 0.64 for eight age groups, ranging from 7 to 15 years).
Although there have been numerous studies of the bi-factor model of psychopathology in children and adolescents, there are limitations and omissions in the existing literature. First, to date, all relevant studies have been on community samples and have used dimensional scores (derived from questionnaires and rating scales) of children's and adolescents' problems. No study has examined clinic-referred samples, based on clinical diagnosis, to model their broad dimensions (such as internalizing and externalizing). Thus, it is uncertain whether these findings are directly applicable to samples of children and adolescents referred to clinical settings who are given clinical diagnoses. Indeed, it is possible that there could exist qualitatively different features at clinical and non-clinical levels for some psychopathologies, thereby raising concerns over measuring and using clinical traits in non-clinical populations to understand the clinical level (Murray et al. 2016; Reise and Waller 2009). Second, there have been only two studies involving children (Lahey et al. 2015; Martel et al. 2017) and a third study combining children with adolescents (Tackett et al. 2013). Therefore, the data in this age group are presently limited. Third, the reliability data for the bi-factor model of psychopathology are also limited; the existing studies have all reported only ω_h and ω_t scores, and none of the other indices (such as ECV, PUC, and H) that allow for a more sophisticated and accurate interpretation of the dimensionality of the general factor (in the present study's case, the P-factor) in the bi-factor model. Fourth, although Martel et al. (2017) examined the convergent and discriminant validity of the P-factor and group factors (fear, distress, externalizing) by investigating how these factors correlated with similar factors for maternal psychopathology, none of the other studies involving children and adolescents provided similar information. Thus, there is a need for studies to include such an evaluation to reinforce or question these initial (though innovative) findings.
Given the aforementioned limitations and omissions on the bi-factor model of psychopathology in children and adolescents, the first aim of the current study was to use CFA to simultaneously examine the structure of the major childhood internalizing and externalizing disorders, based on interviews of parents of clinic-referred children in Australia. For the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV; also DSM-IV-TR), the common internalizing disorders for children and adolescents included in the present study were separation anxiety disorder (SAD), social phobia (SOP), specific phobia (SPP), panic disorder (PD), agoraphobia (AG), generalized anxiety disorder (GAD), obsessive-compulsive disorder (OCD), post-traumatic stress disorder (PTSD), dysthymia (DYTH), and major depressive disorder (MDD). The externalizing disorders included were ODD, CD, and ADHD. The present study focused on parent interviews for reasons explained in the "Methods" section below. Five different measurement models were examined. They were a one-factor model (model 1); a two-factor oblique model with primary factors for the internalizing disorders and the externalizing disorders (model 2); a three-factor oblique model with primary factors for distress disorders, fear disorders, and externalizing disorders (model 3); a bi-factor model with orthogonal factors for a P-factor and group factors for the internalizing disorders and externalizing disorders (model 4); and a bi-factor model with orthogonal factors for a P-factor and group factors for distress disorders, fear disorders, and externalizing disorders (model 5). All five models are shown in Figs. 1, 2, 3, 4, and 5. To allow examination of the robustness of the findings, all the models were examined in children and adolescents separately. A second aim of the study was to examine the model-based reliabilities of the factor(s) in the optimum bi-factor model for these groups, contingent on such a model being supported. More specifically, the focus was to examine ω_h and ω_t, as well as the ECV, PUC, and H reliability indices. A third aim of the study was to test the convergent and divergent validities of the factors in the optimum bi-factor model, contingent on such a model being supported. Based on the recent findings reported by Martel et al. (2017), support was expected (in terms of fit) for a bi-factor model (either with a P-factor and three group factors [fear, distress, externalizing] or a P-factor and two group factors [internalizing and externalizing]), and relatively good support in terms of reliability and validity was expected for the P-factor in such a model.
Participants
The data for all participants were collected from archival files from the Academic Child Psychiatry Unit (ACPU) of the Royal Children's Hospital, Melbourne, Australia. The ACPU is an outpatient psychiatric unit that provides services for children and adolescents with behavioral, emotional, and/or learning problems. Referrals are generally from other medical services, schools, and social and welfare organizations. The present study used the records of children, aged between 6 and 18 years, referred between 2004 and 2016, who had been interviewed for clinical diagnosis. In total, there were 2099 children, comprising 1504 males (71.8%) and 592 females (28.2%). The overall mean age of participants was 11.22 years (SD = 3.10). The frequencies of children (< 12 years) and adolescents (≥ 12 years) were 1233 and 866, respectively (see Table 1). Table 1 provides the sociodemographic characteristics and clinical diagnoses of participants in the study. As shown, most fathers were employed, and most mothers were mainly employed or involved in home duties. About two thirds of participants had mothers and fathers who had attended at least secondary school, and most were from families with income less than $50,000 (AUS) per year. These figures correspond closely to the Australian population. In terms of parental relationship, approximately 48.68% were living together, 43.6% were separated or divorced, and the remainder were single for other reasons (e.g., death of partner).
In relation to clinical disorders, externalizing disorders were highly prevalent, with around 75.3% and 66.8% of the participants having ADHD and ODD/CD, respectively (see Table 1). Among the internalizing disorders, GAD, SPP, DYTH, and SOP were more prevalent. Approximately 44.6%, 32.3%, 39.5%, and 32.2% of the participants were diagnosed with GAD, SPP, DYTH, and SOP, respectively. PD, PTSD, and AG were relatively rare. Table 2 shows the frequencies of different levels of comorbidity for the sample used in the present study. As shown, of those with a clinical diagnosis, only 9% had no comorbidity. Approximately 69.3% had one to five other disorders. Although details are not presented, for those with an anxiety disorder, 57% had a depressive disorder, and for those with a depressive disorder, 87.2% had an anxiety disorder. A total of 75% of children were comorbid for at least one externalizing and one internalizing disorder.

Anxiety Disorders Interview Schedule for Children-IV (ADISC-IV; Silverman and Albano 1996). The ADISC-IV is a semi-structured interview, based on the DSM-IV/DSM-IV-TR diagnostic system (APA 2000). Although the ADISC-IV has been designed primarily to facilitate the diagnosis of the major childhood internalizing disorders, it can also be used for diagnosing other major childhood disorders. ADISC-IV diagnoses do not take into account the hierarchical, exclusionary rules outlined by the DSM-IV for making diagnoses. The ADISC-IV guideline for diagnosis is that the child be given a diagnosis of all disorders meeting the diagnostic criteria. There are different ADISC-IV versions for parent interview and for child interview, and clinical diagnosis can be based either on the parent or child interview or on both interviews considered together. All diagnoses reported in this study were based on parent interviews, as the child interview version does not allow for the diagnoses of CD and ODD, both of which were part of the externalizing disorders modeled in the psychopathology measurement models evaluated in the present study. Additionally, it should be noted that there is evidence of poor levels of agreement for diagnosis between information across the child and parent versions of the ADISC-IV (Grills and Ollendick 2003) and that clinical interviews of children can lead to unreliable diagnosis (Jensen et al. 1999). The parent version of the ADISC-IV has robust psychometric properties (Silverman et al. 2001). Test-retest reliability for the ADISC-IV scores over a 7- to 14-day interval has been shown to be good to excellent.

Child Behavior Checklist/6-18. The Child Behavior Checklist/6-18 (CBCL) is a measure in the Achenbach System of Empirically Based Assessment (Achenbach and Rescorla 2001). Completed by parents, it has 113 items and is used to rate children between 4 and 18 years of age. Respondents indicate the degree or frequency of each behavior described in the item on a scale of 0 (not true), 1 (somewhat or sometimes true), or 2 (very true or often true). The standard rating period for the CBCL is 6 months. The CBCL has excellent psychometric properties and includes scales for various behavior and emotional problems (Achenbach and Rescorla 2001). In addition, it provides two broad scores for internalizing behavior problems and externalizing behavior problems. These broad scores were used in the present study to examine the external validities of the factors in the optimum model.
Procedure
The study was approved by the RCH ethics committee as part of the ACPU's comprehensive examination of children and adolescents referred for psychological problems. Each legal guardian and participant provided informed written consent for any data provided by them to be used in future research studies. This is a standard part of the ACPU assessment procedure. All children and their parents participated in separate interviews and testing sessions, with breaks, over 2 days. Information was also obtained from teachers using various checklists and questionnaires. In all cases, parental consent forms were completed prior to the assessment. The data collected covered a comprehensive demographic, medical, educational, psychological, familial, and social assessment of the child and the child's family. All psychological data were collected by research assistants, who were students in clinical psychology, under the supervision of registered psychologists. The research assistants were provided with extensive supervised training by their supervisors prior to collecting data. This training for the ADISC-IV included observations of it being administered by the psychologists. The research assistants commenced administering the ADISC-IV only after they attained competence in its administration, as assessed by their supervisors. There was adequate inter-rater reliability for the diagnoses made between the research assistants and the psychologists (κ = 0.88). Standard procedures were used for the administration of all measures. However, where necessary, researchers read the items to participants (approximately 5% of the sample). Approximately 95% of the parent ADISC-IV interviews involved mothers only, and the remainder involved fathers only or both fathers and mothers together. Using the categorical data from the parent ADISC-IV, clinical diagnosis was also determined by a consultant child psychiatrist, who independently reviewed these data. The inter-rater reliability for diagnoses (for 10% of the parent interviews) between the initial diagnosis and the consultant child psychiatrist was high (kappa value of 0.90).
Data Analysis
Software All the CFA models in the study were conducted using Mplus (version 7) software (Muthén and Muthén 2013).
Extraction As clinical diagnosis for each disorder resulted in binary scores (disorder present coded 1, disorder absent coded 0), the mean- and variance-adjusted weighted least squares (WLSMV) extraction was used for all the CFA analyses (Rhemtulla et al. 2012). This is a robust estimator, recommended for CFA with ordered-categorical scores, including dichotomous scores. The WLSMV estimator does not assume normally distributed variables. According to measurement experts, relative to other estimators, the WLSMV estimator provides the best option for modeling categorical data, including dichotomous data (Beauducel and Herzberg 2006; Lubke and Muthén 2004; Millsap and Yun-Tein 2004).
Model Fit For the CFA models, at the statistical level, model fit can be examined using χ² values (WLSMV χ² values in the current case). As all types of χ² values, including WLSMV χ², are inflated by large sample sizes, the fit of the models is generally interpreted by researchers using approximate fit indices, such as the root mean squared error of approximation (RMSEA), the comparative fit index (CFI), the Tucker-Lewis Index (TLI), and the weighted root mean square residual (WRMR). For models based on maximum likelihood estimation, the guidelines suggested by Hu and Bentler (1998) are that RMSEA values close to 0.06 or below can be taken as good fit, close to 0.07 to < 0.08 as moderate fit, close to 0.08 to 0.10 as marginal fit, and > 0.10 as poor fit. For the CFI and TLI, values of 0.95 or above are taken as indicating good model-data fit, and values of 0.90 to < 0.95 are taken as acceptable fit. The cutoff score for good fit suggested for the WRMR is less than 0.90 (Yu and Muthen 2002). For the present study, these approximate fit indices, rather than the χ² statistic, were used as evidence of model fit. However, it is worth noting that despite the widespread use of these indices and fit values, a simulation study by Nye and Drasgow (2011) concluded that appropriate cutoff values for WLSMV estimation can vary across conditions.
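For screening purposes, the guidelines above can be encoded as a small helper, as sketched below (cutoffs per Hu and Bentler 1998 and Yu and Muthen 2002; the function and its labels are illustrative only).

```python
# Small helper encoding the fit-index guidelines summarized above
# (Hu and Bentler 1998; Yu and Muthen 2002), for screening CFA output.
# The function name and band labels are illustrative conventions.

def interpret_fit(rmsea: float, cfi: float, tli: float, wrmr: float) -> dict:
    def rmsea_band(x):
        if x <= 0.06: return "good"
        if x < 0.08:  return "moderate"
        if x <= 0.10: return "marginal"
        return "poor"
    def cfi_band(x):
        if x >= 0.95: return "good"
        if x >= 0.90: return "acceptable"
        return "poor"
    return {"RMSEA": rmsea_band(rmsea), "CFI": cfi_band(cfi),
            "TLI": cfi_band(tli),
            "WRMR": "good" if wrmr < 0.90 else "above cutoff"}

print(interpret_fit(0.045, 0.96, 0.95, 0.88))
```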
Reliability In relation to reliability, ω_h, ω_t, ECV, PUC, and H values were computed using the program developed by Watkins (2013) and cross-checked using the program developed by Dueber (2016). In a bi-factor model, the ECV of the general factor will be high and the ECV of the group factors will be low whenever there is little common variance beyond that of the general factor. High values for the general factor indicate the presence of a strong general factor dimension (unidimensionality) in the bi-factor model (Reise et al. 2013a). In contrast, low ECV values for the general factor do not indicate support for the presence of a strong general factor dimension (unidimensionality) but rather support for a multidimensional model. A model-based internal consistency reliability index that is analogous to the alpha coefficient and is especially useful for bi-factor models is omega hierarchical (ω_h; Zinbarg et al. 2006) when referring to the general factor and omega subscale (ω_t) when referring to the group factors. The ω_h can be interpreted as an estimate of how much variance in summed (standardized) scores can be attributed to the general factor (Brunner et al. 2012). The values for ω_h and ω_t range from 0 to 1, with 0 indicating zero reliability and 1 reflecting perfect reliability. For a bi-factor model, the percentage of uncontaminated correlations (PUC; Bonifay et al. 2015) indicates the bias that could result from forcing multidimensional data into a unidimensional model (Bonifay et al. 2015, p. 507). The ECV, the PUC, and the ω_h and ω_t values can be examined concurrently to decide if the indicators in a bi-factor model can be interpreted as having sufficient reliability to view them as essentially unidimensional or if they should be considered multidimensional. According to Reise et al. (2013a, b), if PUC > 0.80, then the indicators in the bi-factor model, if supported, can be interpreted as primarily unidimensional. When PUC < 0.80, such an interpretation requires ECV > 0.60 and ω_h > 0.70. Failure to meet these criteria would mean that a multidimensional interpretation is warranted for the set of indicators, even if the bi-factor measurement model shows good fit. H is an index of construct reliability or replicability used to estimate the reliability of the underlying group and general factors (Hancock and Mueller 2001). According to Rodriguez et al. (2016), H values > 0.80 are indicative of a stable, well-defined latent variable, and values < 0.70 are indicative of a factor that is not worth specifying.
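To make these indices concrete, the sketch below computes ω_h, ECV, PUC, and H directly from standardized bi-factor loadings, using the standard formulas underlying programs such as those of Watkins (2013) and Dueber (2016); the loading values in the example are made up for illustration.

```python
import numpy as np

# Sketch of the bi-factor reliability indices from standardized loadings.
# lam_g: loadings on the general P-factor; groups: full-length loading
# vectors for each group factor (zeros for non-member items). Formulas
# follow Reise et al., Rodriguez et al., and Hancock and Mueller; the
# example loadings below are invented for illustration.

def bifactor_indices(lam_g, groups):
    lam_g = np.asarray(lam_g, dtype=float)
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = lam_g.size
    group_sq = np.sum([g ** 2 for g in groups], axis=0)
    uniq = 1.0 - lam_g ** 2 - group_sq               # item uniquenesses
    total = lam_g.sum() ** 2 + sum(g.sum() ** 2 for g in groups) + uniq.sum()
    omega_h = lam_g.sum() ** 2 / total               # omega hierarchical
    ecv = np.sum(lam_g ** 2) / (np.sum(lam_g ** 2) + np.sum(group_sq))
    pairs = n * (n - 1) / 2
    within = sum(m * (m - 1) / 2 for m in
                 (np.count_nonzero(g) for g in groups))
    puc = (pairs - within) / pairs                   # uncontaminated share
    h_terms = lam_g ** 2 / (1.0 - lam_g ** 2)
    h = h_terms.sum() / (1.0 + h_terms.sum())        # Hancock-Mueller H
    return {"omega_h": omega_h, "ECV": ecv, "PUC": puc, "H": h}

print(bifactor_indices(
    lam_g=[.6, .7, .5, .6, .3, .2, .25],
    groups=[[.4, .3, .5, .35, 0, 0, 0], [0, 0, 0, 0, .7, .8, .75]]))
```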
Convergent and Divergent Validities
To test the convergent and divergent validities of the factors in the optimum model, the broad CBCL internalizing and externalizing scores were regressed on the factors of the optimum model.
Missing Values and Fit of the Null Model
There were no missing values for the clinical cases used in the present study. Table 3 shows the results of all the CFA models tested for children and adolescents separately. Based on guidelines proposed by Hu and Bentler (1998), all fit indices for the one-factor model (M1s) in both groups showed poor fit. For the two-factor (M2s) and the three-factor (M3s) models, for both groups, the RMSEA indicated good fit, whereas the CFI and TLI for both groups indicated close to acceptable or acceptable fit. The exception was that the CFI value for the three-factor model in the adolescent group indicated good fit. The WRMR values for the one-, two-, and three-factor models for both groups were noticeably above 0.90. The fit values for both the bi-factor models (i.e., with two [M4s] and three [M5s] group factors) showed good fit in terms of the RMSEA, CFI, and TLI values. For both groups, the WRMR values for the bi-factor model with two group factors were lower than those for the bi-factor model with three group factors. Indeed, these values for the bi-factor model with two group factors were either just below or close to 0.90. Overall, there was reasonable support for both bi-factor models. As the two bi-factor models were not nested, it was not possible to compare them using the χ² (or WLSMV χ²) difference test or the approximate fit indices based on χ². However, given that the WRMR values in the bi-factor model with two group factors were close to 0.90, and because this model is more parsimonious than the bi-factor model with three group factors, the bi-factor model with two group factors was interpreted as the optimum model for both groups in the present study and was used in subsequent reliability and validity analyses. Table 4 presents the standardized factor loadings of the 13 disorders on their respective latent factors in the optimum bi-factor model. For the adolescent P-factor, all disorders except GAD, ADHD, CD, and ODD had salient loadings, based on Thurstone's (1947) classical criterion for "salience" as a standardized loading ≥ 0.3. For the internalizing group factor for adolescents, all disorders except OCD, PTSD, and DYTH had salient loadings, and for the externalizing group factor, all three externalizing disorders (ADHD, CD, and ODD) had salient loadings. The loadings for SPE, GAD, ADHD, CD, and ODD were much higher (relatively) on their group factors (0.40, 0.50, 0.55, 0.88, and 0.76, respectively) than on the P-factor (0.34, 0.29, 0.17, 0.20, and 0.24, respectively). Thus, much of the variance in the P-factor can be attributed to the internalizing disorders, with the externalizing disorders contributing negligible to low amounts of variance. Indeed, at the disorder level, with the exception of SPE and GAD, the ECV values (which indicate the amount of common variance contributed by a disorder to the P-factor) of the other internalizing disorders were high, ranging from 0.53 to 0.99. These values were very low for the externalizing disorders (ADHD = 0.09, ODD = 0.05, and CD = 0.09). Taken together, these findings can be interpreted as indicating questionable support for a P-factor, since it was saturated mostly with variance from the internalizing disorders and negligible to low variance from the externalizing disorders. Additionally, because there was a low amount of variance left in the group factor for internalizing disorders, the internalizing factor may be of less substantive use.
In contrast, because a high amount of variance was left in the externalizing group factor, there is support for the externalizing group factor even after removing the variance allocated to the general factor.
Factor Loadings for the Factors in the Optimum Bi-factor Model
For the child P-factor, all disorders except SPE, AGRO, GAD, ADHD, CD, and ODD had salient loadings. For the internalizing group factor in this group, all disorders except DYTH and MDD had salient loadings. For the externalizing group factor, all three externalizing disorders (ADHD, CD, and ODD) had salient loadings. The loadings for ten disorders (seven internalizing disorders [SAD, SOP, SPE, PD, AGRO, GAD, and OCD] and three externalizing disorders [ADHD, CD, and ODD]) were relatively higher on their group factors than on the P-factor. Thus, relatively more of the variance in the P-factor can be attributed to the internalizing disorders, with the externalizing disorders contributing negligible to low amounts of variance. At the disorder level, with the exception of PTSD, DYTH, and MDD, the ECV values of the other internalizing disorders were low, ranging from 0.00 to 0.47. These values were especially low for the externalizing disorders (ADHD = 0.00, ODD = 0.10, and CD = 0.12). Taken together, these moderate amounts of variance for the internalizing disorders and negligible to low variance from the externalizing disorders on the P-factor can be interpreted as indicating support for a weak P-factor. Additionally, as there was a moderate amount of variance left in the group factor for internalizing disorders, the internalizing factor may be substantively meaningful. In contrast, because a high amount of variance was left in the externalizing group factor, there is support for the externalizing group factor even after taking out the variance allocated to the general factor.
Reliability for the Factors in the Optimum Bi-factor Model
Table 4 also presents the model-based reliability indices for all the factors in the optimum bifactor models for children and adolescents separately. As shown, for both groups, the ECV values of the P-factor were much higher than the internalizing and externalizing group factors, with the values for the P-factor being higher for adolescents than children (for adolescents, P-factor = 0.52, internalizing group factor = 0.21, externalizing group factor = 0.27; for children, P-factor = 0.42, internalizing group factor = 0.35, externalizing group factor = 0.24). For adolescents and children, the ω h values for the P-factor were 0.65 and 0.48, respectively. The ω t values for the internalizing and externalizing group factors were 0.14 and 0.75, respectively, for adolescents, and 0.41 and 0.73, respectively, for children. Thus, although much of the reliable variance were attributed to the P-factor, there were still high levels of variances for the externalizing group factors left in both groups and moderate levels of variances for the internalizing group factor in children. The PUC values of the P-factor in adolescents and children were 0.53 and 0.39 respectively. As noted earlier, Reise et al. (2013a, b) have proposed that if PUC > 0.80, or if the PUC < 0.80, ECV > 0.60, and ω h > 0.70, then the bi-factor model can be interpreted as primarily unidimensional. As the findings failed to meet either of these criteria, it is not appropriate to interpret the findings of the present study as supporting a unidimensional model for the childhood disorders in the optimum bi-factor model in both age groups.
The H values for the adolescent P-factor and internalizing and externalizing group factors were 0.85, 0.62, and 0.84, respectively. They were 0.87, 0.77, and 0.79, respectively, for children. According to Rodriguez et al. (2016), H values > 0.80 are indicative of a stable, well-defined latent variable, and values < 0.70 are indicative of a factor that is not worth specifying. Based on this guideline, the P-factor and the externalizing group factor for adolescents can be considered well-defined stable factors for this group, and the P-factor and the externalizing and, to a lesser degree, the internalizing group factors for children can be considered well-defined stable factors accordingly. Table 5 shows the findings of the predictions of broad CBCL externalizing and internalizing scores by the factors in the two-group (internalizing and externalizing) bi-factor model for children and adolescents. As shown, for children, the P-factor predicted both the CBCL internalizing and externalizing scores positively. The internalizing group factor predicted CBCL internalizing positively, and the externalizing group factor predicted CBCL externalizing positively. For adolescents, the P-factor also predicted both the CBCL internalizing and externalizing scores positively. The externalizing group factor predicted CBCL externalizing positively, and the internalizing group factor did not predict either CBCL externalizing or internalizing. Taken together, these findings can be interpreted as support for the convergent and divergent validities of the factors in the two-group (internalizing and externalizing) bi-factor model for both children and adolescents.
Discussion
Based on interviews of parents of clinic-referred (ACPU) children and adolescents, the first aim of the present study was to simultaneously examine the structure of the major DSM-IV/DSM-IV-TR childhood internalizing disorders (SAD, SOP, SPP, PD, AG, GAD, OCD, PTSD, DYTH, and MDD) and externalizing disorders (ADHD, CD, and ODD). Five models were compared: (i) one-factor, (ii) two-factor oblique with primary factors for internalizing and externalizing disorders, (iii) three-factor oblique with primary factors for distress, fear, and externalizing disorders, (iv) bi-factor model with orthogonal P-factor and internalizing and externalizing group factors, and (v) bi-factor model with orthogonal P-factor and group factors for distress, fear, and externalizing. For both adolescents and children, the two- and three-factor models (but not the one-factor model) showed adequate fit. Also, for both groups, both bi-factor models showed good fit and were supported. Between these models, the bi-factor model with two group factors is more parsimonious and showed slightly better fit in terms of the WRMR values. Thus, the bi-factor model with the internalizing and externalizing group factors was interpreted as the optimum model. For this model, there was support for the convergent and divergent validities of the factors. For children, the P-factor predicted both the broad CBCL internalizing and externalizing scores positively. The internalizing group factor predicted CBCL internalizing positively, and the externalizing group factor predicted CBCL externalizing positively. For adolescents, the P-factor predicted both the CBCL internalizing and externalizing scores positively. The externalizing group factor predicted CBCL externalizing positively, and the internalizing group factor did not predict either CBCL externalizing or internalizing.
The relatively good and better support for the bi-factor models over the first-order oblique models found in the present study is consistent with past studies involving children (Lahey et al. 2015; Martel et al. 2017; Tackett et al. 2013) and adolescents (Carragher et al. 2016; Castellanos-Ryan et al. 2016; Laceulle et al. 2015; Noordhof et al. 2015; Patalay et al. 2015; Tackett et al. 2013), as well as adults (Caspi et al. 2014; Lahey et al. 2012; Stochl et al. 2015). Despite this, the findings also extend existing data in this area. The present study used clinic-referred children provided with specific clinical diagnoses. All past studies in this area involving children and adolescents have utilized community samples and have used dimensional scores (derived from questionnaires and rating scales) of children's and adolescents' problems to model their broad dimensions, such as internalizing and externalizing.

Table 5 Standardized path coefficients for the regression of the CBCL internalizing and externalizing scores on the factors in the two-group (internalizing and externalizing) bi-factor model

                            Child                      Adolescent
                            P        I        E        P        I        E
Internalizing total score   0.53***  0.39***  0.04     0.72***  0.16     −0.05
Externalizing total score   0.21***  −0.02    0.89***  0.20***  −0.06    0.78***

P P-factor, I internalizing group factor, E externalizing group factor; *p < .05; **p < .01; ***p < .001
Although it is worth noting that the support for the bi-factor model in this study corresponds to past findings, the P-factor and group factors in the present study differed in important ways from those of past studies. Most past studies have generally found that both the internalizing and externalizing disorders contributed high variances to the P-factor. In contrast, in the present study, for both age groups, there were relatively low variances for the externalizing disorders on the P-factor, with the P-factor being saturated mostly with variance from the internalizing disorders (similar to Laceulle et al. 2015). Consequently, in both age groups, and unlike previous studies, there was a high amount of variance in the externalizing group factors. It is possible that the differences between the present study and past studies may be explained by the fact that, unlike past studies that used community samples and measured psychopathology dimensionally using rating scales, the present study used a clinic-referred sample and measured psychopathology in terms of categorical clinical diagnoses.
The support for a strong P-factor and externalizing group factor for adolescents was also reinforced in terms of the reliability of these factors. The ECV value of the P-factor was much higher than those of the internalizing and externalizing group factors, and the ω_h value for the P-factor was 0.65, while the ω_t values for the internalizing and externalizing group factors were 0.14 and 0.75, respectively. Similarly, a moderately strong P-factor, a moderately strong internalizing group factor, and a strong externalizing group factor for children were supported in terms of the reliability of these factors. The ECV value of the P-factor was much higher than those of the internalizing and externalizing group factors for this group, the ω_h value for the P-factor was only 0.48, and the ω_t values for the internalizing and externalizing group factors were 0.41 and 0.73, respectively. Additionally, the findings here indicated that the PUC values of the P-factor in adolescents and children were 0.53 and 0.39, respectively. Reise et al. (2013a, b) have proposed that if PUC > 0.80, or if the PUC < 0.80, ECV > 0.60, and ω_h > 0.70, then the bi-factor model can be interpreted as primarily unidimensional. As the findings of the present study failed to meet either of these criteria, it can be argued that they are not supportive of unidimensional P-factors for the childhood internalizing and externalizing disorders in either children or adolescents. Also, the H values for the adolescent P-factor and internalizing and externalizing group factors were 0.85, 0.62, and 0.84, respectively. The same factors were 0.87, 0.77, and 0.79, respectively, for children. According to Rodriguez et al. (2016), H values > 0.80 are indicative of a stable, well-defined latent variable, and values < 0.70 are indicative of a factor that is not worth specifying. Based on this guideline, and consistent with the argument presented, the P-factor and the externalizing group factor for adolescents can be considered well-defined stable factors for this group. Furthermore, the P-factor and the externalizing (and, to a lesser degree, the internalizing) group factors for children can be considered well-defined stable factors in this group. Based on the combined findings for children and adolescents, the present authors speculate that the P and all group factors in children and adolescents are meaningful, and they need to be considered when examining substantive issues, such as the validity and external correlates of the factors. In support of this, as shown in the present study, for children, the internalizing group factor still predicted CBCL internalizing positively, and the externalizing group factor still predicted CBCL externalizing positively, even after removing the variance for the P-factor. For adolescents, the externalizing group factor still predicted CBCL externalizing positively, even after removing the variance for the P-factor. It may be worth noting that as there has been limited evaluation of the reliability of the factors in the bi-factor model (Martel et al. 2017; Murray et al. 2016), the present reliability findings extend existing data. Furthermore, ω_h and ω_t values and the ECV, PUC, and H reliability indices (and not just ω_h and ω_t values, as in past studies) were combined and applied here, providing a more sophisticated and accurate interpretation of the dimensionality of the P-factor in the bi-factor model compared to that of previous studies.
As already illustrated, the findings in the present study also showed differences in the bi-factor model across children and adolescents. While there were negligible to low variances for the internalizing group factor in adolescents, there was a moderate amount of variance in this factor in children. Thus, for adolescents, there was support for the P-factor and the externalizing group factor. For children, there was support for the P-factor and both the externalizing and internalizing group factors. Also, because the reliability was relatively higher for the P-factor in adolescents than in children, it can be argued that the adolescent P-factor is relatively stronger than the child P-factor.
Given that the P-factor can be interpreted as the strength of comorbidity of the internalizing and externalizing disorders (Murray et al. 2016), the present findings suggest that comorbidity (in particular involving the internalizing disorders, as the P-factor was saturated with variance from the internalizing disorders) may manifest differently at different developmental stages and that there is stronger comorbidity in adolescents than in children. Indeed, consistent with this view, Castellanos-Ryan et al. (2016) have suggested that the factor loadings on the P-factor could vary developmentally, with internalizing symptoms becoming stronger with increasing age. Caspi et al. (2014) have proposed a dynamic mutualism process hypothesis to account for this. According to this hypothesis, symptoms both across and within domains can reinforce one another through local (bi-directional associations between different symptom typologies) interactions such that, over time, these local interactions can lead to an increase in symptom inter-correlations. However, it should be noted that Murray et al. (2016) found no support for this hypothesis, and therefore further research is recommended.
The findings of the present study have implications for (i) taxonomy in relation to children and adolescents, (ii) understanding the comorbidity of the internalizing disorders and externalizing disorders, (iii) trans-diagnostic assessment, diagnosis, and treatment, and (iv) research on bi-factor models of psychopathology. In relation to taxonomy, support for the P-factor and broad internalizing and externalizing factors is inconsistent with the DSM approach that separates anxiety, depressive, and externalizing disorders into different diagnostic groups. It is also inconsistent with how the relevant disorders are organized in DSM-5, which suggests four different groups for the internalizing disorders. One group comprises SAD, SOP, SPP, PD, and AG, whereas GAD, OCD, PTSD, and MDD combined with DYTH are each in three different groups.
The support for the bi-factor model implies that the current taxonomy, which considers the different types of these disorders as discrete diagnostic categories, may need reconsideration to recognize the high degree of comorbidity among them. Instead, the support for the bi-factor model in the present study suggests that for children and adolescents, all the common childhood psychopathologies could be grouped under an overall group called childhood psychopathology and separated into subgroups of internalizing disorders and externalizing disorders. The list of symptoms in the internalizing disorder group could be the key non-overlapping symptoms for the different anxiety and depressive disorders, and the list of symptoms in the externalizing disorder group could be the key non-overlapping symptoms for ADHD, CD, and ODD. To capture the major specific symptoms present in an individual, appropriate descriptors could be added to the diagnosis of internalizing disorder or externalizing disorder. For example, when an individual has panic and social phobia, the diagnosis could be internalizing disorder with panic and social phobia. Despite this proposal, it needs to be stressed that further research and replication of the findings in the present study are needed before such changes could be adopted.
In relation to understanding the comorbidity of the internalizing and externalizing disorders, the support found in the present study for the P-factor, with salient loadings for virtually all internalizing disorders, can be taken as an indication of the strength of the associations of these disorders with the underlying latent factor and, by extension, of the comorbidity of the disorders. Consistent with past studies (for a meta-analysis, see Angold et al. 1999; Krueger and Markon 2006), these findings suggest high comorbidity among the internalizing disorders on one hand, among the externalizing disorders on the other, and between the internalizing and externalizing disorders.
In relation to assessment, diagnosis, and treatment, the close associations between the internalizing and externalizing disorders, via the P-factor, found in the present study highlight the need for a comprehensive evaluation of all the internalizing and externalizing disorders for a better understanding of a child's or an adolescent's psychopathology. The findings also imply that treatment may have to focus on general risk factors for psychopathology. In this respect, recently developed trans-diagnostic treatment approaches, such as for anxiety and depression disorders in children and adolescents (Ehrenreich-May and Bilek 2012), would be valuable. In brief, trans-diagnostic approaches focus on common factors that produce symptoms in related classes of disorders, thereby addressing multiple concerns or disorders within an individual (McEvoy et al. 2009).
In relation to research on the bi-factor model of psychopathology, the present findings suggest that researchers not only need to demonstrate good fit of such models (as has been the case with most of the studies in this area) but also need to examine the reliability of the P and group factors in the bi-factor model, as carried out in the present study. This enabled the present work to demonstrate that although the bi-factor model with two group factors was supported, all group factors in children and adolescents were also meaningful, and they need to be considered in structural models of psychopathology across these groups.
There are several strengths to the present study. First, it involved a large clinical sample, with clinical diagnoses of the major internalizing and externalizing disorders derived via structured clinical interviews. Therefore, these findings appear more useful from a clinical viewpoint. Second, to increase the credibility of our findings, structural models were conducted separately for children and adolescents. The close comparability of the findings across these groups attests to the robustness of the findings reported here. Third, recent methodological developments were applied to evaluate the reliability of the factors in a bi-factor model (e.g., ECV, PUC, and H).
Despite these significant strengths, there are limitations in this study that need to be considered when interpreting the findings. First, not all disorders relevant to children and adolescents, such as eating disorders, autism spectrum conditions, psychotic symptoms, and substance abuse disorders, were included in the analyzed sample. It cannot be ruled out that the inclusion of these disorders may have produced different results. Second, the present study examined the factor structure of the common DSM-IV anxiety and depressive disorders at the diagnostic level. Thus, the findings may not reflect the factor structure of anxiety and depressive disorders at the level of symptoms. As noted by Seeley et al. (2011), when analyses are focused at the disorder level, the underlying dimensionality associated with diagnostic criteria for specific disorders is ignored, and consequently, associations among symptoms might show different patterns than those obtained on the basis of diagnostic categories/classifications. Third, approximately 75.3% and 66.8% of the participants had ADHD and ODD/CD, respectively. Thus, it is not known whether this exerted any influence on the findings. Fourth, as this study examined clinic-referred children, the findings here may not be applicable to internalizing and externalizing disorders in children and adolescents from the general community. Indeed, there is evidence that in clinical samples, comorbidity rates are usually higher than among general population samples (Angold et al. 1999) due to a methodological problem called Berkson's bias (i.e., people with multiple disorders are more likely to be referred to clinics than are people with single disorders). Fifth, all the participants in this study were from the same clinic. It is possible that this may constitute an additional bias. Sixth, the present study used a predominantly male sample, and this may have added some gender-related bias to the findings. Seventh, it is important to keep in mind that although children were the target of analysis, the information about these disorders was derived from interviews of parents and not the children themselves. It is therefore possible that this may have also influenced parameter estimates. Finally, the present data (in line with the previous P-factor literature) refer to a Westernized population of developing individuals. Therefore, generalization to different cultural groups should be treated with caution (i.e., potential cultural bias of the findings needs to be considered).
In conclusion, the present study found that the bi-factor model with the internalizing and externalizing group factors showed good fit and was the optimum model. However, the fact that there was (i) support for a strong externalizing group factor for adolescents, and (ii) a moderately strong internalizing group factor and (iii) a strong externalizing group factor for children, weakens the support for a dominant P-factor in both these groups. Thus, the conclusion of the present study differs from the prevailing general consensus for a robust dominant P-factor for psychopathology (Castellanos-Ryan et al. 2016; Laceulle et al. 2015) because it calls for the inclusion of the group factors in structural models of psychopathology. Nevertheless, this view is not without support from previous studies (Laceulle et al. 2015). Like the findings in the present study, these studies also showed that there was substantial variance in the group factors even after removing the variance for the general factor. Given the discrepancy between the findings of the present study and those of past studies, more studies are needed that take into consideration the aforementioned limitations. Such studies will be valuable as they could have implications for the understanding of the development and course of childhood psychopathology across different age and cultural groups (prioritizing relatively under-researched populations), which in turn can have implications for more developmentally and culturally responsive diagnosis and treatment of childhood disorders.
Compliance with Ethical Standards
Conflict of Interest The authors declare that they have no conflict of interest.
Ethical Approval All procedures performed in this study involving human participants were in accordance with the ethical standards of University's Research Ethics Board and with the 1975 Helsinki Declaration.
Informed Consent Informed consent was obtained from all participants.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2018-11-09T15:04:37.708Z | 2018-11-08T00:00:00.000 | {
"year": 2018,
"sha1": "51b91502994224c582aefdcd9cc254002578f145",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11469-018-0017-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c9570843dfb29e60a2d56714262c4083fd27bd2c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
119125640 | pes2o/s2orc | v3-fos-license | A simple and universal setup of quasi-monocolor gamma-ray source
Strict classical 3-D dynamics theory reveals that a light source of arbitrarily high center frequency can be achieved by flexible application of a not-too-strong static electric field and static magnetic field. The magnitudes of the fields are not required to be high.
Even though significant application value has promoted intensive investigation of short-wavelength light sources (X-ray or higher-energy photon sources) for decades [1-14], there is still considerable difficulty in making satisfactory progress on this issue. Because radiation of 1 µm (1 nm) wavelength corresponds to a time cycle of 3.33 fs (3.33 as), we would need the time cycle of the electron motion to be 3.33 fs (3.33 as). Clearly, the difficulty is that such rapid electron oscillation, completed hundreds (or more) of times per fs, is unavailable. This limits the efficiency of generating X-rays. For example, in high-order harmonic generation (HHG), if the driving laser has a 1 µm wavelength, the driven atomic dipole moment oscillation is usually of a time cycle ~3.33 fs. HHG occurs only because the time shape of the driven atomic dipole moment oscillation differs greatly from sin(2πt/3.33 fs). This determines that low-order harmonics are dominant. According to most HHG experimental results [7-14], low-order harmonic components whose orders are in the tens can be warranted, but components at orders of hundreds or higher are negligible. On the other hand, in the free-electron laser (FEL) [2-5], the efficiency of generating radiation at a desired short wavelength is also far from satisfactory, because sufficiently rapid electron velocity oscillation is unavailable.
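The period figures quoted above follow directly from T = λ/c; a two-line numerical check (the script and rounding are ours):

```python
c = 2.998e8  # speed of light, m/s
for wavelength_m in (1e-6, 1e-9):              # 1 µm and 1 nm
    print(wavelength_m, wavelength_m / c)      # ~3.34e-15 s (fs) and ~3.34e-18 s (as)
```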
Clearly, if we can directly set up a sufficiently rapid electron velocity oscillation, we will obtain an efficient monocolor short-wavelength light source at the desired short wavelength. If a 1 nm wavelength is desired, we should set up an electron velocity oscillation whose time cycle is ~3.33 as. To drive such a rapid oscillation usually demands, according to current knowledge, an electromagnetic field of very short wavelength, which is unavailable because it is just what we are pursuing. Namely, we would be trying to get what we are pursuing by applying what we are pursuing.
We consider a simple configuration containing merely a static electric field (along the x-direction) and a static magnetic field (along the z-direction). When an electron is injected into this configuration, its behavior can be described by the dimensionless 3-D relativistic Newton equations (RNEs). Here E_s and B_s are the constant-valued electric and magnetic fields and satisfy E_s = ηcB_s; λ = c/ω and ω are the reference wavelength and frequency; and the integration constants are determined from the initial conditions. Noting that Γ can be formally expressed as Γ = 1 + C_y² + C_x² − W_B ηX, which agrees with Takeuchi's theory [15], we find that the electronic trajectory is an ellipse for η < 1 and a hyperbola for η > 1 [15,16]. The time for an electron to travel through an elliptical trajectory can be calculated exactly by re-writing the solution in a more general form in terms of a scaled coordinate u and a time-scale parameter σ [15]. The motion on an elliptical trajectory is very inhomogeneous: the time for finishing the ηX > 0 half might be very short, while that for the ηX < 0 half might be very long. We term the two halves the fast-half and the slow-half, respectively. If η is fixed over the whole space, a fast-half is always linked with a slow-half and hence makes the time cycle for finishing the whole trajectory considerable.
It is interesting to note that if B = 0 in the region u > −1 + ξ, the electron will cross from the (E, B = E/(ηc)) region into the (E, B = 0) region with an initial velocity whose x-component υ_x1 > 0 and whose y-component is υ_y1. The electron then penetrates a certain distance into the (E, B = 0) region because υ_x1 > 0. After a time T_tr, the electron returns to the (E, B = E/(ηc)) region, and the returning velocity has x-component −υ_x1. During this stage, the electron moves υ_y1 · T_tr along the y-direction. Then, the motion in the (E, B = E/(ηc)) region can be described by an acute-angled rotation along the ellipse from u = −1 + ξ to u = −1. Thus, a complete closed cycle along the x-direction is finished even though the motion along the y-direction is not closed. Repeating this closed cycle leads to an oscillation along the x-direction.
Clearly, the time cycle of such an oscillation is T_x = T_tr + 2σM[arcsin(−1 + ξ) + π/2] + 2σ√(2ξ − ξ²). Under fixed values of Δ, E, and B, the smaller ξ is, the smaller T_x is; there will be T_x = 0 at ξ = 0. In principle, an arbitrary value of T_x < T_c can be achieved by choosing a suitable value of ξ. Namely, an arbitrarily high center frequency (> ω_B) oscillation can be achieved by choosing a suitable value of ξ. Although the time history of x(t) might give its Fourier spectrum some spread, the center frequency will be 1/T_x. This result implies a simple and universal method of setting up a quasi-monocolor light source at any desirable center wavelength. That is, apply a static electric field E = E_x and a perpendicular static magnetic field B = B_z, deliberately arrange a B = 0 region with the ratio E/(cB) < 1, then inject electrons along the y-axis, close to the boundary line between the B = 0 region and the B ≠ 0 region, with a velocity slightly above |E/(cB)|. This setup can lead to a quasi-monocolor oscillation source with any desired center frequency, up to the γ-ray level.
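Only the qualitative behavior matters for the tuning argument, so the sketch below uses a toy monotone stand-in for T_x(ξ) — our assumption, not the paper's (partially garbled) closed form — that merely respects the stated property T_x → 0 as ξ → 0, to illustrate how shrinking ξ raises the center frequency 1/T_x:

```python
def T_x(xi, T_c=1.0):
    # Toy stand-in (assumption): any monotone T_x(xi) with T_x(0) = 0 shows
    # the same qualitative behavior claimed in the text.
    return T_c * xi

for xi in (1.0, 1e-1, 1e-2, 1e-3):
    print(f"xi = {xi:g}: T_x = {T_x(xi):.3g}, center frequency = {1.0 / T_x(xi):.3g}")
```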
In conclusion, we have described a simple and universal method of achieving a quasi-monocolor light source at any desirable center wavelength. The kernel of this method is to utilize the motion of a fast electron near the surface of the magnetic field region; here, a fast electron means one whose velocity is > E/(cB). In effect, the method adopts the fast-half of an ellipse and replaces the time-consuming slow-half with a faster orbit governed by E only.
Acknowledgment
This work is supported by National Science Fund no 11374318. | 2015-03-03T03:04:43.000Z | 2015-03-03T00:00:00.000 | {
"year": 2015,
"sha1": "8e40583ab3d5bcb8522872a50beaa3c31c0b8ba3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8e40583ab3d5bcb8522872a50beaa3c31c0b8ba3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
231733563 | pes2o/s2orc | v3-fos-license | Neighborhood Walkability as a Predictor of Incident Hypertension in a National Cohort Study
The built environment (BE) has been associated with health outcomes in prior studies. Few have investigated the association between neighborhood walkability, a component of BE, and hypertension. We examined the association between neighborhood walkability and incident hypertension in the REasons for Geographic and Racial Differences in Stroke (REGARDS) Study. Walkability was measured using Street Smart Walk Score based on participants' residential information at baseline (collected between 2003 and 2007) and was dichotomized as more (score ≥70) and less (score <70) walkable. The primary outcome was incident hypertension defined at the second visit (collected between 2013 and 2017). We derived risk ratios (RR) using modified Poisson regression adjusting for age, race, sex, geographic region, income, alcohol use, smoking, exercise, BMI, dyslipidemia, diabetes, and baseline blood pressure (BP). We further stratified by race, age, and geographic region. Among 6,894 participants, 6.8% lived in more walkable areas and 38% (N = 2,515) had incident hypertension. In adjusted analysis, neighborhood walkability (Walk Score ≥70) was associated with a lower risk of incident hypertension (RR [95%CI]: 0.85[0.74, 0.98], P = 0.02), with similar but non-significant trends in race and age strata. In secondary analyses, living in a more walkable neighborhood was protective against being hypertensive at both study visits (OR [95%CI]: 0.70[0.59, 0.84], P < 0.001). Neighborhood walkability was associated with incident hypertension in the REGARDS cohort, with the relationship consistent across race groups. The results of this study suggest increased neighborhood walkability may be protective for high blood pressure in black and white adults from the general US population.
INTRODUCTION
A primary risk factor for cardiovascular diseases (CVD), hypertension affects ∼1 in three adults in the US (∼75 million) (1). Regular physical activity (PA, ≥150 min per week) is associated with reduced risk of hypertension and is widely recommended for CVD prevention and all-cause mortality (2)(3)(4). Given the strong association between PA and health outcomes, there has been recent interest in how one's immediate surroundings (built environment (BE)) affect individual physical activity level. Neighborhood walkability, a measure of walking friendliness and important component of the BE, has been associated with PA and cardiometabolic risk factors in previous cohort studies (5)(6)(7)(8)(9). Particularly, there is a burgeoning interest in understanding the impact of neighborhood walkability on blood pressure and hypertension (10)(11)(12)(13)(14). Yet studies of walkability and hypertension in US populations are lacking, even though this association may differ by national and regional context. Moreover, in a cross-sectional analysis of neighborhood characteristics (including walkability) and prevalent hypertension in the US, Mujahid et al. noted that-because of the history of residential segregation-race may confound this relationship (15). Therefore, there is a need to the evaluate the association between walkability and hypertension and also consider the impact of racial and geographic differences.
The purpose of this study was to investigate the association between objectively measured neighborhood walkability and incidence of hypertension in the REasons for Geographic and Racial Differences in Stroke (REGARDS) Study. Given the protective effect of PA on hypertension and stroke risk, as well as the racial disparities in these cardiovascular outcomes, we sought to understand the relationship between walkability and new onset (incident) hypertension in a study population that includes both black and white participants from the continental US. In secondary analyses, we investigated effect modification by age, race, and geographic region and tested associations between walkability and hypertension status across two study visits.
Study Population
REGARDS is a population-study that was designed to observe racial and geographic differences in stroke incidence in the US, with oversampling in the Southeastern states with high stroke incidence, i.e., Stroke Belt and Stroke Buckle. The cohort is composed of 30,239 white and black adults aged 45 and older. Data was collected via telephone survey and in-home physical assessment at enrollment between 2003 and 2007 (baseline visit). Follow-up data were collected on ∼51% of the original cohort (n = 15,550) using similar methods during a second visit an average of 10 years after enrollment (between 2013 and 2017). The REGARDS study design and objectives have been described elsewhere (16). The study protocol was approved by the institutional review boards of all participating institutions, and all participants provided written informed consent.
Outcome of Interest
The measurement of blood pressure (BP) in the REGARDS study across both study visits has been described (17). Systolic blood pressure (SBP) and diastolic blood pressure (DBP) were defined as the average of two measurements taken by a trained technician using a standard protocol and regularly tested aneroid sphygmomanometer after the participant was seated for 5 min. Incident hypertension was defined as SBP ≥140 and/or DBP ≥90 mmHg and/or treatment with antihypertensive medication at the second study visit among those without hypertension at baseline. The use of anti-hypertensive medication was self-reported.
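As a concrete restatement of this outcome definition, a small sketch (the function names are ours):

```python
def is_hypertensive(sbp, dbp, on_antihypertensive_meds):
    """Hypertension as defined in the text: SBP >= 140 mmHg and/or
    DBP >= 90 mmHg and/or self-reported antihypertensive treatment."""
    return sbp >= 140 or dbp >= 90 or on_antihypertensive_meds

def incident_hypertension(hypertensive_at_visit2, hypertensive_at_baseline):
    """Incident cases are those hypertensive at visit 2 among participants
    free of hypertension at baseline; others are not at risk."""
    if hypertensive_at_baseline:
        return None  # not at risk of *incident* hypertension
    return hypertensive_at_visit2
```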
Walkability Measurement
Street Smart Walk Score® is a validated and widely used walkability instrument derived from an algorithm that measures the BE in proximity to each participant's residential address (18,19). The score is based on proximity along walking routes to nearby amenities (e.g., parks, libraries, shopping centers, restaurants). The algorithm, after adjusting for intersection density and average block length, assigns values from 0 to 100, with higher values reflecting greater walkability (0-49: Car-Dependent, 50-69: Somewhat Walkable, 70-89: Very Walkable, 90-100: Walker's Paradise). In the current study we dichotomized the Walk Score as ≥70 (more walkable) and <70 (less walkable). All participants' addresses at baseline were validated and linked to a Walk Score. Scores were collected in 2018 as part of an ancillary study in REGARDS.
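The descriptive bands and the study's binary exposure can be written down directly (function names are ours):

```python
def walk_score_band(score):
    """Descriptive Walk Score bands cited in the text."""
    if score >= 90:
        return "Walker's Paradise"
    if score >= 70:
        return "Very Walkable"
    if score >= 50:
        return "Somewhat Walkable"
    return "Car-Dependent"

def more_walkable(score):
    """The study's dichotomized exposure: >= 70 is 'more walkable'."""
    return score >= 70
```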
Covariates
Covariates included baseline sociodemographic factors (age, sex, race, household income, geographic region), health-related behaviors (smoking, alcohol use, exercise frequency), and cardiometabolic traits (body mass index, dyslipidemia, diabetes). Age, sex, and race were self-reported. Race was classified as white or black. Income was self-reported and categorized into five groups: <$20,000, $20,000-$34,999, $35,000-$74,999, ≥$75,000, and refused to answer. Region was defined as Stroke Buckle (coastal plains of North Carolina, South Carolina and Georgia), Stroke Belt (the rest of North Carolina, South Carolina and Georgia, as well as Tennessee, Mississippi, Alabama, Louisiana, and Arkansas), or non-belt (other states of the continental US) (20). Self-reported smoking status was categorized as current smoking (yes or no). Self-reported alcohol use was categorized according to the National Institute on Alcohol Abuse and Alcoholism as heavy (>7 drinks/week for women, >14 drinks/week for men), moderate (≤7 drinks/week for women, ≤14 drinks/week for men), and none (0 drinks/week). Baseline exercise was categorized by self-reported frequency of exercise per week (none, 1 to 3, 4 or more). Body mass index (BMI) was calculated as a ratio of weight (kg) to square of height (m 2 ). Dyslipidemia was defined as self-reported physician diagnosis of hyperlipidemia or current use of lipid-lowering medication. Diabetes was defined as self-reported current use of hypoglycemic medication, fasting blood glucose ≥126, or non-fasting blood glucose ≥200.
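Two of the derived covariates are simple formulas and can be sketched directly; the BMI formula and NIAAA thresholds are exactly those stated above, while the function names are ours:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def alcohol_category(drinks_per_week, sex):
    """NIAAA-based categories used in the text: thresholds of 7 (women)
    and 14 (men) drinks per week."""
    if drinks_per_week == 0:
        return "none"
    limit = 7 if sex == "female" else 14
    return "moderate" if drinks_per_week <= limit else "heavy"
```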
Statistical Analysis
We compared subject characteristics and covariates between more walkable and less walkable areas using Pearson chi-square tests for categorical variables and independent t-tests for continuous variables among participants without baseline hypertension (see Table 1) and among all participants who completed a second visit (see Supplementary Table 1). In primary analyses (N = 6,894 without baseline hypertension), we determined overall and race-stratified risk ratios between walkability (dichotomized as <70 vs. ≥70 and, separately, considered as a continuous variable) and incident hypertension using modified Poisson regression with robust variance estimation. The risk ratios were unadjusted (Model 1, see Table 2) and adjusted for sociodemographic factors (age, sex, race, income, and geographic region), health-related behaviors (smoking status, alcohol use, and exercise frequency), cardiometabolic traits (diabetes, dyslipidemia, and BMI), and baseline blood pressure (SBP and DBP) in Model 2 (see Table 2). We further stratified these models by age (45-54, 55-64, ≥65) or geographic region (see Supplementary Table 2). We also conducted multivariate linear regression to determine the association between neighborhood walkability and second-visit SBP and DBP, adjusting for the covariates described above and antihypertensive medications to account for medication effects on blood pressure measurements (Table 3).
Assuming that walkability at an individual level may remain relatively stable over time, we assessed the relationship between walkability and hypertension across two study visits (N = 15,550) in secondary analyses. We defined a multinomial outcome where participants who completed both study visits were categorized as hypertensive at both study visits ("always hypertensive"), normotensive at the first visit and hypertensive at the second visit ("incident hypertension"), hypertensive at the first visit and normotensive at the second visit ("blood pressure decline"), and reference category normotensive at both visits ("always normotensive"). We then conducted a multinomial logistic regression and report a crude and adjusted odds ratio for hypertension status across both visits considering age, race, sex, region, income, alcohol use, smoking status, exercise frequency, BMI, dyslipidemia, and diabetes as covariates (see Supplementary Table 3). All the analyses were performed using SAS v9.4 (SAS Corp).
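The analyses above were run in SAS v9.4; purely as an illustrative analogue, the modified Poisson approach (a Poisson GLM with a robust sandwich covariance, which yields risk ratios for a binary outcome) might look like this in Python with statsmodels — the data frame and column names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

def modified_poisson_rr(df, outcome, covariates):
    """Fit a Poisson GLM with robust (sandwich) standard errors and return
    exponentiated coefficients, i.e., risk ratios with 95% CIs."""
    X = sm.add_constant(df[covariates])
    fit = sm.GLM(df[outcome], X, family=sm.families.Poisson()).fit(cov_type="HC1")
    return np.exp(fit.params), np.exp(fit.conf_int())

# Example call (hypothetical column names):
# rr, ci = modified_poisson_rr(regards, "incident_htn", ["walkable_70", "age", "race", "sex"])
```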
RESULTS
Among 6,894 participants without prevalent hypertension, 2,515 developed hypertension by the second visit (see Figure 1). A total of 6.8% (N = 468) were living in more walkable areas defined as walk score ≥70 at the first visit. Supplementary Figure 1 shows the distribution of walk scores among the 6,894 participants (mean(sd) 26.1(24.7), median 19). Those who lived in more walkable areas had mean(sd) age of 61.4(8.1) years, were more likely to be black and college graduates (see Table 1). Participants in more walkable areas were more likely living in non-belt areas compared to those in less walkable areas. Mean baseline SBP was similar in participants living in more walkable areas and those living in less walkable areas, whereas the mean DBP at baseline was slightly higher for those in more walkable areas. Baseline characteristics of the 15,500 participants who completed the two study visits (including participants with prevalent hypertension) can be found in Supplementary Table 1. Like the data presented in Table 1, those living in more walkable areas were more likely to be black, be college graduates, and live outside the stroke belt and stroke buckle.
The incidence of hypertension in those living in more walkable (≥70) areas at baseline was lower than in those living in less walkable (<70) areas. There was not a significant interaction between race and walkability (P = 0.75). The results in the racial strata were consistent with those of the full cohort, although with the smaller race-specific sample sizes, these relationships did not remain significant after full adjustment in Model 2 (see Table 2). After multivariable adjustment, walkability was not significantly associated with second-visit SBP and DBP among those without prevalent hypertension (see Table 3). Diastolic blood pressure was lower among black participants living in more walkable areas compared to those in less walkable areas in crude analysis. The same trend was observed among whites, although not significant. The relationships were not statistically significant after multivariable adjustment. When stratifying the results by geographic region, the protective effect of higher neighborhood walkability for incident hypertension was consistent across strata (P interaction = 0.69), but not statistically significant (see Supplementary Table 2). Participants in more walkable areas also had lower risk of incident hypertension across age categories, with the strongest protective relationship observed in the youngest age group (45-54 years). However, we did not see modification of the relationship between neighborhood walkability and incident hypertension by age group (P interaction = 0.25), and the relationship was not significant in any age stratum (see Supplementary Table 2).
In a secondary analysis of hypertension status across two study visits (n = 15,550, see Supplementary Table 3), living in more walkable areas was associated with a lower odds of being "always hypertensive" vs. "always normotensive" (OR [95% CI]: 0.70[0.59, 0.84]), and these associations persisted in both blacks and whites, separately. Additionally, the results for "incident hypertension" were consistent with the primary model. Finally, neighborhood walkability category was not associated with "blood pressure decline" vs. "always normotensive" as part of this analysis.
DISCUSSION
There is a growing interest in understanding the role of the BE in community member activity levels. Previous studies have linked neighborhood walkability to hypertension and blood pressure. We found in a population of geographically and racially diverse older adults that higher neighborhood walkability was protective for incident hypertension, even in the fully adjusted model including measures of exercise. The association was consistent in both race groups and was not modified by age and geographic region. Further investigation is needed to better understand these relationships and help spur additional investment in walking infrastructure which could help improve community health.
Previous work has asked similar questions on neighborhood walkability and incident hypertension (10)(11)(12)(13)(14)(15). Multiple studies in Canadian cohorts have shown that living in a more walkable neighborhood (as measured by Walk Score and other validated walkability indices) predicts a lower risk of developing hypertension, diabetes, and CVD (10)(11)(12). Similarly, in a cross-sectional study of middle-aged and older adults in China, higher walkability was associated with lower odds of CVD. While exercise partially mediated this relationship, there was no significant interaction with BE (21). These findings, along with our significant results which remained after adjusting for PA, suggest that there may be additional health-promoting factors in more walkable environments that benefit older adults, even if they are less likely to be as physically active in those environments as they age.
Comparing these results to our own supports the notion that neighborhood walkability may have a consistent protective effect across different sub-groups. However, it is important to note that these associations may differ depending on geography and/or age. For example, a study in an Australian cohort that was similar to REGARDS in terms of age and comorbidities found no significant association between neighborhood walkability (not measured by Walk Score) and incident hypertension (13). Similarly, a longitudinal study of older Taiwanese adults found no associations between Walk Score and exercise or hypertension (22). Yet in Portland, Oregon, higher neighborhood walkability was associated with lower blood pressure after 1 year of follow-up among adults aged 50-75 (20). In our stratified analyses, we found that higher neighborhood walkability was protective for incident hypertension among all regional and age groups (although not statistically significant in smaller strata), with greatest effects among those living outside the stroke belt/buckle and among younger age groups (<65). However, there were no significant interactions between Walk Score and age or region of the country related to stroke risk (belt, buckle, non-belt).
The REGARDS cohort is comprised of a biracial sample of older US individuals and we did not see modification of the relationship between walkability and incident hypertension by race. Additionally, we observed that walkability category was not only protective against incident hypertension among older normotensive adults, but also against persistent hypertension (i.e., "always hypertensive" at both study visits) which was consistent across race groups in a larger secondary analysis. These results could mean that walkability may not be a major contributor to racial disparities in hypertension risk, even though other neighborhood characteristics (e.g., socioeconomic status) have been linked to these disparities (23). These results suggest increased neighborhood walkability is an aspect of the BE that is consistently protective for hypertension across subgroups of older adults in the US.
Strengths and Limitations
Overall, the mechanisms by which aspects of the BE, e.g., walkability, affect health outcomes is complex, and even the measurements we used are not without error. Street Smart Walk Score is limited in its ability to directly determine walking behaviors. The score calculation is based on the density of destinations in a given block, which does not necessarily lend itself to predicting whether people will walk (potentially due to neighborhood safety or effects of housing segregation) so much as it informs us that they are in close proximity to a place where they can walk (24)(25)(26). Even with these limitations, the score has been validated as an appropriate measurement of neighborhood walkability in the US (18,19). In this study, we observed that neighborhood walkability was associated with hypertension risk among those in more walkable areas (Walk Score ≥70). However, in sensitivity analyses exploring other categorizations of the Walk Score-"Walkable" (≥50) vs. "Car-Dependent" (<50)-and as a continuous variable, there was not a significant association. These findings suggest that there may be a non-linear relationship between neighborhood walkability and hypertension, and future studies should continue to evaluate this relationship and potentially more sensitive definitions of walkability.
Although walkability data were based on participants' addresses at baseline (2003-2007), the Walk Scores were calculated in 2018 for a REGARDS ancillary study (5). The software did not allow us to backdate the calculation to agree with the timing of the baseline visit. Additionally, second visit data were collected an average of 10 years after the baseline visit. Therefore, we cannot fully account for how participants' environments changed during the time between the baseline visit and when the Walk Scores were collected (e.g., gentrification), as well as the second visit (e.g., moving to a different neighborhood). Finally, although we attempted to adjust for all potential confounders, including self-reported exercise, we could not rule out the potential for unknown or residual confounding (27). Still, we believe the size and nationwide, biracial composition of the cohort makes this study one of the most comprehensive studies of this walkability metric and incident hypertension to date.
In conclusion, we sought to determine if walkability is a novel risk factor for hypertension, and we found that neighborhood walkability may be protective for incident hypertension among older adults in a large sample from the REGARDS Study. Our study constitutes one of the first to report on the association between neighborhood walkability and incident hypertension in the US and evaluate potential effect modification by race, age, and geographic region. Future studies should address additional nuances in these relationships related to race, age, and region as well as other measures of the BE. Continued understanding of the relationship between the built environment and cardiovascular health could potentially lead to neighborhood improvements which spur cardiovascular disease risk reduction on the community level.
DATA AVAILABILITY STATEMENT
The datasets used and/or analyzed during the current study are not publicly available but are available from the corresponding author upon reasonable request.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by University of Alabama-Birmingham Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
AJ initiated the study, performed the data analysis, and prepared the Introduction, Results, and Discussion sections of the text. NSC and AP assisted in the development and implementation of the analysis plan and prepared the Methods section of the text. VH and GH were instrumental in the data collection for the REGARDS cohort. GH also provided feedback on the data analysis plan. NC and SJ assisted in the study design, as well as provided the walkability data and their expertise on the metric in REGARDS. MI (corresponding) assisted in the study design, development of the analysis plan, and provided expertise on hypertension in REGARDS. Both SJ and MI provided mentoring and extensive feedback from the inception of this project to subsequent data analysis and manuscript preparation. They contributed equally as last authors. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by a cooperative agreement U01 NS041588 co-funded by the National Institute of Neurological Disorders and Stroke (NINDS) and the National Institute on Aging (NIA), National Institutes of Health, Department of Health and Human Services. AJ was also supported by the Medical Scientist Training Program (T32GM008361, National Institute of General Medical Sciences). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIGMS, NINDS, or the NIA. Representatives of the NINDS were involved in the review of the manuscript but were not directly involved in the collection, management, analysis or interpretation of the data.
"year": 2021,
"sha1": "aa7afd67208be93850add65a975938b2bd8ec43f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2021.611895/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa7afd67208be93850add65a975938b2bd8ec43f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Irinotecan Plus Doxorubicin Hydrochloride Liposomes for Relapsed or Refractory Wilms Tumor
Purpose The prognosis of relapsed or refractory pediatric Wilms tumor (WT) is dismal, and new salvage therapies are needed. This study aimed to evaluate the efficacy of the combination of irinotecan and a doxorubicin hydrochloride liposome regimen for relapsed or refractory pediatric WT. Patients and Methods The present study enrolled relapsed or refractory pediatric WT patients who were treated with the AI regimen (doxorubicin hydrochloride liposomes 40 mg/m2 per day, day 1, and irinotecan 50 mg/m2 per day with 90-min infusion, days 1-5; repeated every 3 weeks) at Sun Yat-sen University Cancer Center from July 2018 to September 2020. Response was defined as the best observed response after at least two cycles according to the Response Evaluation Criteria in Solid Tumors (RECIST 1.1), and toxicity was evaluated according to the Common Terminology Criteria for Adverse Events (CTCAE 4.03). Results A total of 16 patients (male:female, 8:8) with a median age of 4.2 years (0.5-11 years) were enrolled, including 14 patients with relapsed disease and two patients with refractory disease. These patients received 1-8 courses (median, 3 courses) of the AI regimen. Fourteen patients were assessable for response: two with complete response (CR), five with partial response (PR), two with stable disease (SD), and five with progressive disease (PD). The objective response rate was 50% (two CR, five PR), and the disease control rate was 64% (two CR, five PR, and two SD). Seven out of 14 patients (50%) were alive at the last follow-up, which ranged from 2.6 to 32.4 months. The median progression-free survival and median overall survival were 3.5 months (range 0.5-12 months) and 8 months (range 1-28 months), respectively. Sixteen patients were assessable for toxicity; the most common grade 3 or 4 adverse events were alopecia (62%), leukopenia (40%), abdominal pain (38%), diarrhea (23%), and mucositis (16%). No fatal adverse events were observed, and the modest adverse effects could be managed. Conclusion The irinotecan plus doxorubicin hydrochloride liposome regimen showed promising efficacy against relapsed or refractory pediatric WT with well-tolerated toxicity. A prospective clinical trial is warranted.
INTRODUCTION
Wilms tumor (WT) is an embryonal tumor that accounts for 90% of childhood renal tumors (1,2). Medical advances have greatly improved the survival of children diagnosed with WT in recent decades. For example, the AREN0533 study showed that excellent overall survival (OS) was achieved after omission of primary lung radiotherapy (RT) in patients with complete response of lung nodules, and that event-free survival (EFS) was significantly improved in patients with incomplete response of lung nodules by adding four cycles of cyclophosphamide/etoposide to vincristine/actinomycin D/doxorubicin (DD4A). The survival rate of advanced-stage WT treated according to the Children's Oncology Group (COG) AREN0533 protocol was over 90%, with the overall relapse rate decreasing to less than 15% (3). However, the prognosis of relapsed or refractory patients is still dismal. Conventional surgery, RT, and chemotherapy, such as the combination of actinomycin D and vincristine and/or doxorubicin, are generally used as standard therapies for WT (4-6). Relapsed WT is clinically heterogeneous, and the prognosis of patients treated with standard-dose chemotherapy and radiation was better than that of patients with adverse prognostic features, including unfavorable histology, relapse less than 12 months from diagnosis, and initial treatment with three-drug chemotherapy (4,5). Studies have shown that high-dose chemotherapy (HDT) followed by autologous hematopoietic stem cell rescue (HSCR) can improve the prognosis of relapsed WT patients (7,8). However, relapsed WT patients rarely receive HDT followed by HSCR in China. Few effective chemotherapeutic agents are available for relapsed and refractory patients who have previously received multidrug chemotherapy (such as ifosfamide, carboplatin, etoposide, cyclophosphamide, and doxorubicin), and new salvage chemotherapy agents need to be explored. Options are limited for these patients because of toxicity and side effects on bone marrow, cardiac function, and the function of the liver and kidney (9-11).
Irinotecan, a topoisomerase I inhibitor, is a semisynthetic analog of camptothecin with modest toxicity, mainly myelosuppression, controllable non-hematologic side effects, and potent activity against pediatric solid tumors in both xenograft models and patients (12-15). Irinotecan combined with other chemotherapy agents (such as vincristine, temozolomide, or bevacizumab) has been reported in clinical applications to treat pediatric solid cancers, including a subset of patients with relapsed WT (16-21). The results of the COG AREN0321 study showed that the overall response rate (ORR) of the VI regimen (irinotecan combined with vincristine) for newly diagnosed diffuse anaplastic Wilms tumor (DAWT) was 79% (22). For relapsed or refractory nephroblastoma, several retrospective clinical studies have shown that irinotecan-containing regimens have positive clinical efficacy and tolerable toxicity (21, 23-25). Doxorubicin hydrochloride liposomes are a novel formulation of doxorubicin encapsulated in polyethylene glycol-coated liposomes, designed to enhance the efficacy and reduce the dose-limiting toxicities of conventional doxorubicin (26). Research has shown that the ORR of doxorubicin hydrochloride liposomes alone for pediatric sarcoma is 37.5% (27). We began treating relapsed or refractory WT with the irinotecan-doxorubicin hydrochloride liposome (AI) regimen in July 2018. In this study, we retrospectively analyzed the efficacy and toxicity of the AI regimen in 16 patients with relapsed or refractory WT from July 2018 to September 2020.
Patients
From July 2018 to September 2020, 16 pediatric patients with relapsed or refractory WT who received the doxorubicin hydrochloride liposome plus irinotecan regimen at Sun Yat-sen University Cancer Center were included in the analysis. The inclusion criteria were as follows: (1) relapsed or refractory WT in patients aged ≤18 years; (2) treatment with the doxorubicin hydrochloride liposome plus irinotecan chemotherapy regimen; and (3) complete clinical data. The exclusion criteria were previous chemotherapy with doxorubicin-containing liposomes or irinotecan-containing regimens, or grade ≥2 cardiac insufficiency.
This study was approved by the Sun Yat-sen University Cancer Center Ethical Review Board (B2021-071-01).
Treatment Schedule
The frontline treatment of WT was performed according to the National Wilms Tumor Study (NWTS)-5 protocol. According to this protocol, stage I/II WT with favorable histology was treated with the actinomycin D and vincristine (VA) regimen as frontline treatment; cyclophosphamide, pirarubicin, and vincristine (CAV) alternating with carboplatin and etoposide (CE) for the first relapse; and the VIP regimen (ifosfamide, cisplatin, and etoposide) or AI regimen for the second relapse. Stage III/IV WT with favorable histology was treated with the actinomycin D, pirarubicin, and vincristine (VAD) regimen as initial chemotherapy; CAV alternating with CE for the first relapse; and the VIP or AI regimen for the second relapse. If lung metastases showed PD after frontline chemotherapy, whole-lung RT was delayed in stage IV patients until the lesions shrank or were removed. For relapsed patients, surgery or RT was given if necessary. Patients with relapsed or refractory WT received the AI regimen until disease progression, unacceptable toxicity, or patient withdrawal, for no more than eight courses, and were evaluated for efficacy every two cycles. The AI regimen comprised doxorubicin hydrochloride liposomes (40 mg/m2 per day, infused over at least 30 min, day 1) and irinotecan (50 mg/m2 per day with 90-min infusion, days 1-5), repeated every 3 weeks. Doxorubicin hydrochloride liposomes were given with anti-allergic premedication (cimetidine, dexamethasone, and diphenhydramine (Benadryl)) half an hour beforehand; atropine was routinely administered 30 min before irinotecan. Before the AI regimen started, standard chemotherapy consent was signed by the guardian of each patient. Surgery was performed during or after the AI regimen, according to the response to the AI regimen and the opinion of the surgeon, and RT was performed after the AI regimen or surgery if necessary.
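For illustration, the per-cycle absolute doses implied by these body-surface-area (BSA)-based rates can be computed directly. The Python sketch below uses the Mosteller formula for BSA, which is an assumption made only for illustration (the paper does not state which BSA formula was used), together with hypothetical patient measurements:

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula (assumed here)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def ai_cycle_doses(height_cm: float, weight_kg: float) -> dict:
    """Absolute drug doses for one 3-week AI cycle:
    liposomal doxorubicin 40 mg/m2 on day 1; irinotecan 50 mg/m2/day, days 1-5."""
    bsa = bsa_mosteller(height_cm, weight_kg)
    return {
        "bsa_m2": round(bsa, 2),
        "liposomal_doxorubicin_day1_mg": round(40 * bsa, 1),
        "irinotecan_per_day_mg": round(50 * bsa, 1),
        "irinotecan_cycle_total_mg": round(50 * bsa * 5, 1),
    }

# Hypothetical 4-year-old of 102 cm and 16 kg (illustrative values only)
print(ai_cycle_doses(102, 16))
```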
Stage and Pathology
The clinical stage was based on the COG staging system. Initial pathology was assessed according to the COG protocol and classified into a favorable histology (FH) group and an unfavorable histology (UFH) group. The FH group comprised four subtypes: mesenchymal, epithelial, blastemal-predominant, and mixed. The UFH group included diffuse anaplasia and focal anaplasia.
Efficacy and Toxicity Evaluation
Response was defined as the best observed response after at least two cycles of the doxorubicin hydrochloride liposome plus irinotecan regimen. Efficacy in all patients was evaluated after the AI regimen and before surgery and RT. CT scans were used to evaluate recurrent lesions in the thorax or abdomen, and MRI was used for recurrent lesions in bone. According to the Response Evaluation Criteria in Solid Tumors (RECIST), responses were classified as complete response (CR), partial response (PR), stable disease (SD), or progressive disease (PD). Progression-free survival (PFS) was defined as the time from the start of the doxorubicin hydrochloride liposome plus irinotecan regimen to disease progression or last follow-up. OS was defined as the time from the start of the regimen to death or last follow-up.
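For illustration, the RECIST target-lesion rules can be written as a small decision function. The Python sketch below is a simplified rendition using the standard thresholds (at least 30% shrinkage from baseline for PR; at least 20% and at least 5 mm growth from the nadir for PD); it ignores non-target lesions and several special cases, so it is not the full criteria applied in the study:

```python
def recist_response(baseline_sum_mm, nadir_sum_mm, current_sum_mm, new_lesions=False):
    """Simplified RECIST classification from sums of target-lesion diameters."""
    if new_lesions:
        return "PD"  # any unequivocal new lesion is progression
    if current_sum_mm == 0:
        return "CR"  # disappearance of all target lesions
    increase = current_sum_mm - nadir_sum_mm
    if increase >= 0.20 * nadir_sum_mm and increase >= 5:
        return "PD"  # >=20% over the smallest on-study sum, and >=5 mm absolute
    if current_sum_mm <= 0.70 * baseline_sum_mm:
        return "PR"  # >=30% decrease from the baseline sum
    return "SD"

print(recist_response(80, 80, 50))  # 37.5% shrinkage -> 'PR'
```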
Toxicity assessment is based on the Common Terminology Criteria for Adverse Events (CTCAE 4.03).
Statistical Analysis
SPSS software version 22.0 (IBM, Chicago, IL) was used for statistical analysis, and the Kaplan-Meier method was used to calculate the OS rate and PFS rate.
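A minimal, self-contained sketch of the same Kaplan-Meier computation in Python (using the lifelines package rather than SPSS, and hypothetical durations rather than the study data) is:

```python
# pip install lifelines
from lifelines import KaplanMeierFitter

# Hypothetical PFS data in months; event=1 means progression observed,
# event=0 means the patient was censored at last follow-up
durations = [0.5, 2.0, 3.5, 3.5, 4.0, 6.0, 8.0, 12.0]
events    = [1,   1,   1,   1,   0,   1,   0,   1]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="PFS")
print(kmf.median_survival_time_)  # Kaplan-Meier median PFS
print(kmf.survival_function_)     # the step-function estimate over time
```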
Patient Characteristics
A total of 16 patients (male:female, 8:8) diagnosed with relapsed or refractory WT were enrolled in this study, including 14 patients with relapsed disease and two patients with refractory disease, with a median age of 4.2 years (0.5-11 years) at relapse or refractory disease and a median time of 17.5 months (7-108 months) between tumor diagnosis and relapsed or refractory disease. Most of the patients had advanced-stage disease at initial diagnosis (stage II: N = 4, stage III: N = 6, stage IV: N = 5), and one patient had bilateral kidney disease at initial diagnosis. The histology of all patients at initial diagnosis was classified as FH. All patients had received multiple salvage regimens before AI regimen chemotherapy. The cumulative doses of doxorubicin were 150-400 mg/m2 (median 250 mg/m2) (Table 1). Cumulative anthracycline doses before the start of the AI regimen did not differ between the four patients with stage II disease at diagnosis and the other patients, because after recurrence the four stage II patients received doxorubicin-containing salvage chemotherapy before the AI regimen. These patients received 1-8 courses (median, three courses) of the AI regimen.
Before the start of the AI regimen, isolated local recurrence occurred in two patients, isolated lung metastasis occurred in nine patients, and both local and distant metastases (including liver, lung, pelvic cavity, and bone) occurred in five patients.
Response and Survival
The duration from the initial treatment with the AI regimen to subsequent PD was 0.5-12 months (median, 3.5 months). Two patients were not evaluable for response, including patient #15 who was lost to follow-up after only one course of the AI regimen and patient #16 with no evaluable lesions. With a median of three cycles of the AI regimen (1-8 cycles), 14 patients were assessable for response: two CR, five PR, two SD, and five PD ( Table 2).
With a median follow-up time of 10.5 months (1.1-34.8 months), seven out of 14 patients (50%) were alive at the last follow-up, which ranged from 2.6 to 32.4 months. Both patients who achieved CR were alive at the last follow-up. Of the five patients who achieved PR (after 4-6 courses of the AI regimen), four achieved CR after further clinical management (patients #3 and #5 received surgery and irradiation of pulmonary lesions; patients #13 and #14 received whole-lung irradiation), and one patient (patient #8) achieved PR after four courses of the AI regimen but PD after six courses and then received other salvage therapies. Of the five patients who achieved PR after the AI regimen, three (patients #3, #5, and #14) were alive without disease, one (patient #8) was alive with disease, and one (patient #13) had died of disease at the last follow-up.
Both patients (patients #2 and #4) who achieved SD (after 2-3 courses of the AI regimen) changed to a further salvage chemotherapy regimen but died of tumor progression at the last follow-up.
Of the five patients who achieved PD, patient #7 underwent removal of the lung lesions, then received whole-lung radiation, and was alive at the last follow-up (3.7 months). The other four patients (patients #1, #9, #10, and #11) had died of disease progression by the last follow-up. Patient #13 achieved PR after six cycles of the AI regimen but refused surgery because complete removal of the tumor would have required removing the whole lung; the patient then received whole-lung RT but relapsed after 7 months and died of disease.
The disease control rate (DCR) was 64% (of the 14 evaluable patients, two CR, five PR, and two SD), and the objective response rate (ORR) was 50% (two CR, five PR). The median PFS was 3.5 months (range 0.5-12 months), and the median survival duration was 8 months (range 1-28 months) (Figure 1, Table 3).
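These headline rates follow directly from the response counts; the short Python check below also adds an exact 95% confidence interval for the ORR, which is our illustration and is not reported in the paper:

```python
from scipy.stats import binomtest

n_evaluable = 14
cr, pr, sd = 2, 5, 2

orr = (cr + pr) / n_evaluable        # 7/14 = 0.50
dcr = (cr + pr + sd) / n_evaluable   # 9/14 ~= 0.64

# Exact (Clopper-Pearson) 95% CI for the ORR
ci = binomtest(cr + pr, n_evaluable).proportion_ci(confidence_level=0.95)
print(f"ORR = {orr:.0%} (95% CI {ci.low:.2f}-{ci.high:.2f}); DCR = {dcr:.0%}")
```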
Toxicity
A total of 16 patients were systematically assessed for toxicities (Table 4). No fatal adverse events or renal toxicity were observed, and the modest adverse effects could be managed at the outpatient service (patients received chemotherapy in the hospital department, were discharged after completing chemotherapy, and had chemotherapy toxicities treated in the outpatient department of our cancer center). The common grade 3 toxicity-related events were diarrhea (23%), abdominal pain (38%), and leukopenia (19%). The only grade 4 toxicity event was leukopenia (19%). Grade 1-2 vomiting and nausea were easily treated. Grade 3 diarrhea was generally manageable when antidiarrheal medications were routinely used. In this study, 11 patients received pegylated recombinant human granulocyte colony-stimulating factor (PEG-rhG-CSF) injections, and most patients had only mild myelosuppression or febrile neutropenia. Mild cardiac (44%) and modest hepatic (13%) toxicities were observed. Several nonspecific symptoms, including mucositis and fatigue, occurred and were readily managed. No patient had chemotherapy delayed because of toxicity or side effects.
DISCUSSION
The prognosis of relapsed or refractory WT is poor, especially in high-risk relapsed patients who received three chemotherapy agents as frontline therapy. The AREN1921 study is a phase II prospective clinical trial (NCT04322318) investigating the treatment of newly diagnosed stage II-IV DAWT or relapsed FH WT. According to the AREN1921 study, very-high-risk relapsed FH WTs (those treated with three or more drugs for the initial WT) were treated with the ifosfamide, carboplatin, etoposide (ICE)/cyclophosphamide/topotecan regimen. However, in our study, all of the relapsed or refractory WT patients had already received cyclophosphamide, etoposide, carboplatin, doxorubicin, and ifosfamide. In recent years, several retrospective studies have shown that irinotecan-containing regimens have some efficacy in recurrent WT, but the agents combined with irinotecan are not uniform (21, 23-25). An International Society of Paediatric Oncology (SIOP) retrospective study showed that 14 patients with evaluable relapsed WT who received irinotecan-containing regimens (including vincristine, temozolomide, bevacizumab, and others) had an ORR of 21.4%, an unsatisfactory response rate (25). Anthracyclines are effective agents for patients with WT, but concerns about their cardiotoxicity have restricted their dosing. Studies have revealed that doxorubicin-induced heart failure (HF) occurs in 3%-5% of patients treated with 400 mg/m2 doxorubicin (28). Cumulative doses of doxorubicin in patients with WT in COG and SIOP studies were no greater than 250 mg/m2 (22,29). Doxorubicin hydrochloride liposomes are a novel formulation of doxorubicin encapsulated in polyethylene glycol-coated liposomes, with a reported single-agent ORR of 37.5% in pediatric sarcoma (27). Based on our experience with doxorubicin hydrochloride liposomes, in this study we adopted a dose of 40 mg/m2 on a single day per cycle. In our study, among the 14 evaluable patients with FH WT, two patients achieved CR and five achieved PR after AI regimen chemotherapy, for an ORR of 50%, indicating that the AI regimen was effective for relapsed and refractory FH WT. By contrast, the SIOP study reported poor efficacy of irinotecan-containing regimens in relapsed WT, with an ORR of 21.4% (25). The response rate in our study of relapsed and refractory WT is better than that of the SIOP study, suggesting that irinotecan combined with doxorubicin hydrochloride liposomes may be superior to other irinotecan-containing regimens (such as those with vincristine, temozolomide, or bevacizumab).
The COG AREN0321 clinical study showed that irinotecan combined with vincristine had good efficacy in newly treated DAWT patients (22). The SIOP clinical study enrolled 14 patients with evaluable efficacy; eight patients had experienced a first relapse, and nine patients had a high-risk histological type (tumors with diffuse anaplasia or blastemal-type histology after preoperative chemotherapy), including four DAWTs and five blastemal types (BTs). The ORRs of patients with intermediate-risk histological types (stromal, epithelial, focal anaplasia, mixed, or regressive histology) and high-risk histological types were 33.3% and 11.1%, respectively. These results indicate that relapsed high-risk WT was not sensitive to irinotecan-containing salvage regimens. In the present study, all patients were diagnosed with FH WT; whether the AI regimen is effective for high-risk histological types of WT needs further exploration.
Interestingly, of the 14 evaluable patients, one patient had local recurrence and achieved CR, the ORR of nine patients with isolated lung metastasis was 55.5% (one CR, four PR, one SD, three PD), and the ORR of four patients with both local and distant recurrence was 25% (one PR, one SD, two PD). These results indicate that the AI regimen may be more effective for solitary local or lung relapse patients. Nevertheless, sample size expansion is required to verify this conclusion.
In the present study, two CR patients achieved longer survival after AI regimen chemotherapy; four out of five PR patients who achieved CR after further clinical management (surgery or RT) survived at the last follow-up, and the response to the AI regimen was converted into a survival benefit; five PD patients had poor survival, most of whom died within 2 years, and a new therapeutic strategy needs to be explored.
In this study, the median number of treatment courses with the AI regimen was 3 (1-8 courses), and the median cumulative dose of doxorubicin hydrochloride liposomes was 120 mg/m2 (40-240 mg/m2). Seven patients had mild abnormalities on the electrocardiogram, but none had severe cardiotoxicity (such as HF or arrhythmia). Because the follow-up time was short, longer follow-up is needed to assess long-term cardiotoxicity. Most patients in this study had relapsed more than twice and had previously received high-intensity chemotherapy. Considering that the AI regimen may cause severe bone marrow suppression, most patients were given long-acting granulocyte colony-stimulating factors to prevent neutropenia. The most common grade 3 side effects observed in this study were leukopenia (19%), abdominal pain (38%), diarrhea (23%), and mucositis (16%). The only grade 4 toxicity event was leukopenia (19%). The toxicity of the AI regimen was tolerable, with no treatment-related deaths, and no patient had treatment delayed due to toxicity. Studies have shown that the dose-limiting toxicity of doxorubicin hydrochloride liposomes is mucositis (27). The incidence of mucositis in this study was not high, suggesting that doxorubicin hydrochloride liposomes at 40 mg/m2 are safe for relapsed and refractory WT; whether increasing the dose can further improve efficacy is worth exploring. Of note, this is the first report of a therapeutic regimen combining these two agents. We noted that the adverse effects were commonly self-limiting and easily controlled with routine intervention, and the regimen was generally continued without treatment delay. Nevertheless, we acknowledge some limitations. As a retrospective single-arm study lacking a control group, comparisons could not be performed, and the non-randomized design may introduce selection bias. Furthermore, the sample size was limited. Even so, all patients with manageable adverse effects could continue the regimen without delay of therapy, and this study provides valuable experience for the treatment of relapsed or refractory WT.
In conclusion, the combination regimen of irinotecan and doxorubicin hydrochloride liposomes indicates promising efficacy for relapsed or refractory WT patients with tolerable toxicities, especially for the FH WTs. A prospective clinical trial is warranted.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Sun Yat-sen University Cancer Center. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
"year": 2021,
"sha1": "e502400c517d2314c2d144169eaeb92f20401216",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.721564/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e502400c517d2314c2d144169eaeb92f20401216",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Response of Soil Chemical Properties, Performance and Quality of Sweet Potato (Ipomoea batatas L.) to Different Levels of K Fertilizer on a Tropical Alfisol
Studies showed that K fertilizer in both years significantly influenced N, P and K concentrations compared with the control and increased the soil concentrations of these nutrients from 0 to 160 kg ha-1 K fertilizer. However, soil K only increased up to 80 kg ha-1 fertilizer, after which there was a decrease. There was a reduction in the values of Ca and Mg in the soil as the levels of K increased. Leaf nutrient concentrations of the sweet potato were consistent with the soil chemical properties recorded. The highest sweet potato growth and tuber yield were obtained at 80 kg ha-1 K fertilizer, after which there was a reduction. The yield decrease was attributed to excessive K application leading to imbalanced plant nutrition with respect to N, P, Ca and Mg. K fertilizer significantly influenced moisture, vitamin C and carbohydrate compared with the control. The highest values of fibre and protein were obtained at 80 and 40 kg ha-1 K fertilizer, respectively. Dry matter and fat contents of the sweet potato were reduced by K application across the 0-160 kg ha-1 range.
Objective:
Field experiments were carried out in the 2015 and 2016 cropping seasons to determine the effects of various levels (0, 40, 80, 120 and 160 kg ha-1) of potassium fertilizer (muriate of potash, KCl) on soil chemical properties, leaf nutrient contents, performance and proximate quality of sweet potato (Ipomoea batatas L.).
Method:
The five treatments were arranged in a randomized complete block design with three replicates.
Results:
Studies showed that K fertilizer in both years significantly influenced N, P and K concentrations compared with the control and increased the soil concentrations of these nutrients from 0 to 160 kg ha-1 K fertilizer. However, soil K only increased up to 80 kg ha-1 fertilizer, after which there was a decrease. There was a reduction in the values of Ca and Mg in the soil as the levels of K increased. Leaf nutrient concentrations of the sweet potato were consistent with the soil chemical properties recorded. The highest sweet potato growth and tuber yield were obtained at 80 kg ha-1 K fertilizer, after which there was a reduction. The yield decrease was attributed to excessive K application leading to imbalanced plant nutrition with respect to N, P, Ca and Mg. K fertilizer significantly influenced moisture, vitamin C and carbohydrate compared with the control. The highest values of fibre and protein were obtained at 80 and 40 kg ha-1 K fertilizer, respectively. Dry matter and fat contents of the sweet potato were reduced by K application across the 0-160 kg ha-1 range.
Conclusion:
The best tuber yield, quality and economic response of sweet potato to K fertilizer in this agro-ecological zone, or under similar soil conditions elsewhere in the tropics, could be achieved by applying 80 kg ha-1 K fertilizer.
INTRODUCTION
Soils are integral components of agriculture and serve as a medium for numerous ecological, chemical and physical processes [1]. However, most soils of Africa are poor compared with those of most other parts of the world due to a lack of volcanic rejuvenation, which has resulted in various cycles of weathering, erosion and leaching, leaving soils poor in nutrients [2]. Soil fertility depletion is a fundamental root cause of the declining per capita food production; it has largely contributed to poverty and food insecurity [3]. In 30 years, as many as 37 African countries have lost over 132 million tons of nitrogen (N), 15 million tons of phosphorus (P) and 90 million tons of potassium (K) from cultivated lands [4]. Despite the cost of inorganic fertilizer, there is ample evidence that the use of fertilizers can be highly profitable. In addition to increased productivity, increased inorganic fertilizer use benefits the environment by reducing the pressure to convert forests and other fragile lands to agricultural uses and by increasing biomass production, which helps build soil organic carbon [3].
The sweet potato (Ipomoea batatas L.) is a member of the Convolvulaceae family. It is a perennial crop that is usually grown as an annual. Its short growing cycle of about 4-5 months gives it an advantage over other tuber crops like yam, cassava and cocoyam. The sweet potato is important for its tubers, which can be boiled, baked, roasted or fried for human consumption. The tubers can also be processed into flour for bread making and starch for noodles, as well as used as raw material for industrial starch and alcohol [5]. Sweet potato roots are also an excellent source of vitamin A (in the form of beta-carotene), vitamin C, manganese, copper, dietary fiber, vitamin B6, potassium and iron [6,7].
The sweet potato requires large quantities of nutrients, especially K, for sustainable and improved cultivation. However, in Nigeria, farmers usually neglect K fertilizer application, which has resulted in low tuber yields of sweet potato. K is absorbed by the sweet potato in larger quantities than any other nutrient. According to a study [8], the nutrients removed by a sweet potato crop producing 14 tonnes of biomass per hectare have been estimated at 51.6 kg N ha-1, 17.2 kg P2O5 ha-1, 71.0 kg K2O ha-1, 6.1 kg MgO ha-1, 6.3 kg CaO ha-1 and 0.8 kg Fe ha-1.
K is essential in processes such as photosynthesis, translocation of photosynthates, protein synthesis, control of ionic balance, regulation of plant stomata, turgor maintenance, stress tolerance, water use efficiency and activation of plant enzymes [9-11]. However, for a better response to applied nutrients, their optimum limits must be defined with reference to the soil characteristics for individual crops, as soils and crops vary widely in their nutrient supply and utilization efficiency [12]. In China [13], the K rate varied from 150 to 300 kg K2O ha-1.
In India, the mean optimum rate was observed to be 120 kg K2O ha-1 [14]. In Calabar (an Ultisol), southeast Nigeria, the value ranged between 120 and 160 kg K2O ha-1 [15]. Rhue et al. [16] reported that maintaining an optimum K level in the soil is important for potato production.
Potassium fertilization may also influence the tuber qualities (fibre, carbohydrate, protein, moisture, fat, dry matter and vitamin C) of sweet potato. This hypothesis has not been tested for the Alfisols of the Nigerian derived savannah. Chapman et al. [17] found increases in specific gravity and chip colour up to the optimum K fertilization (KCl) rate. However, K fertilization generally reduces specific gravity [18]. Studies on the effect of K fertilizer on soil chemical properties, sweet potato yield and quality are very scarce in Nigeria. It was hypothesized that soil chemical properties, sweet potato yield and quality respond differently to different rates of muriate of potash (KCl) fertilizer. Therefore, the objectives of this study were to evaluate the effect of K fertilizer on soil chemical properties, growth, yield, leaf nutrient concentration and tuber quality of sweet potato grown in a derived savannah ecology.
Site Description and Treatments
Field experiments were carried out at the Teaching and Research Farm, Landmark University, Omu-Aran, Kwara State, Nigeria, in the 2015 and 2016 cropping seasons. The soil at Landmark University is an Alfisol classified as Oxic Haplustalf or Luvisol [19]. Landmark University lies between Lat 8°9'N and Long 5°61'E and is located in the derived savanna ecological zone of Nigeria. There are two rainy seasons, one from March to July and the other from mid-August to November. The mean annual rainfall in the area is about 1300 mm and the mean annual temperature is 32°C. The experimental site in 2015 had been under maize cultivation for two years and then left fallow for one year; in 2016, the site had been left fallow for two years before cultivation of the sweet potato. The site for the 2015 experiment was adjacent to that of 2016.
Each year, the experiment consisted of 5 levels (0, 40, 80, 120 and 160 kg ha-1) of potassium fertilizer applied as muriate of potash (KCl). The five treatments were arranged in a randomized complete block design with three replicates. The size of the experimental field each year was 187 m2. Each block comprised 5 plots, each measuring 3 m × 3 m. The blocks were 1 m apart and the plots were 0.5 m apart. Different sites within the same locality were used for the two years' experiments.
Planting of Sweet Potato and Application of Potassium Fertilizer
Each year, conventional tillage involving ploughing, harrowing and ridging was performed before planting. Three ridges were maintained per plot. Sweet potato vines of about 25-30 cm length were planted in April each year on ridges at a spacing of 1 m × 1 m to give a plant population of 10,000 plants per ha. Potassium fertilizer was applied at rates of 0, 40, 80, 120 and 160 kg ha-1 at planting, banded 10 cm away from the planted cuttings. At this stage, nitrogen (urea) and phosphorus (single superphosphate) were also applied as basal dressings at 60 kg N ha-1 and 50 kg P2O5 ha-1, respectively, to boost the growth of the potato, as the site was deficient in these nutrients.
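These spacings and rates translate directly into plot-level quantities: 1 plant per m2 gives the stated 10,000 plants ha-1, and each 3 m × 3 m plot receives 9/10,000 of the per-hectare dose. A minimal Python sketch follows (assuming, for illustration only, that the stated rates refer to the fertilizer product as applied):

```python
def grams_per_plot(rate_kg_per_ha: float, plot_m2: float = 9.0) -> float:
    """Convert a field application rate (kg/ha) to grams of product per plot."""
    return rate_kg_per_ha * 1000.0 * plot_m2 / 10_000.0

for rate in (0, 40, 80, 120, 160):
    print(rate, "kg/ha ->", grams_per_plot(rate), "g per 3 m x 3 m plot")
# e.g. 40 kg/ha corresponds to 36 g of KCl per 9 m2 plot
```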
Determination of Soil Properties
In both years (2015 and 2016), prior to the start of the experiments, soil samples (0-15 cm) were randomly collected from 10 different points across the experimental site to form a composite sample. The soil samples were bulked together, air-dried and sieved with a 2-mm sieve for chemical analysis, as described in a study [20]. The hydrometer method [21] was used for particle-size analysis. The pH of the soil sample was determined in a soil/water (1:2) suspension using a digital electronic pH meter. The Walkley and Black procedure of wet oxidation by chromic acid digestion [22] was used for the determination of organic carbon (OC). Organic matter (OM) was calculated by multiplying OC by 1.724. Micro-Kjeldahl digestion and distillation techniques [23] were used for the determination of total N. Bray-1 extraction followed by molybdenum blue colorimetry [24] was used for the determination of soil available P. Exchangeable K, Ca and Mg were extracted using 1 M NH4OAc at pH 7. Exchangeable K was then analysed with a flame photometer, and Ca and Mg with an atomic absorption spectrophotometer [25]. At the end of each year's experiment (at harvest of the sweet potato), soil samples were also collected on a plot basis (three samples from each plot, later bulked together) and analysed for soil chemical properties as described above.
Growth, Yield and Tuber Quality of Sweet Potato Measurements
Vine length, number of leaves, vine weight and tuber weight were determined at harvest (150 days after planting). Vine length was measured with a meter rule, the number of leaves per plant was determined by counting, and vine weight and tuber weight were determined by weighing on a top-loading balance after washing and cleaning to remove any traces of sand from the tubers. Samples of sweet potato tuber from each plot were taken for proximate analysis. The moisture, dry matter, crude fibre, crude protein, crude fat and carbohydrate contents of the sweet potato were determined using standard chemical methods described by the Association of Official Analytical Chemists [26]. Moisture content was determined by drying 2 g of each sample at 105°C to constant weight. Total dry matter was determined by oven drying at 70°C to constant weight. The Soxhlet extraction technique using petroleum ether (40-50°C) was used to evaluate the fat content of the samples. Crude protein content was determined by the micro-Kjeldahl digestion and distillation method [27]. Carbohydrate content was estimated using the method described in a study [28]. Vitamin C content was determined by dichlorophenolindophenol titration [29].
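The gravimetric moisture and dry matter determinations reduce to simple weight ratios, as in the Python sketch below (the sample weights are hypothetical):

```python
def moisture_pct(fresh_g: float, oven_dry_g: float) -> float:
    """Moisture content, % on a fresh-weight basis (drying to constant weight)."""
    return (fresh_g - oven_dry_g) / fresh_g * 100.0

def dry_matter_pct(fresh_g: float, oven_dry_g: float) -> float:
    """Total dry matter, % of fresh weight."""
    return oven_dry_g / fresh_g * 100.0

# A 2 g sample that dries down to 0.62 g (illustrative numbers)
print(moisture_pct(2.0, 0.62))    # 69.0
print(dry_matter_pct(2.0, 0.62))  # 31.0
```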
Sweet Potato Leaf Analysis
Leaf samples were collected from the sweet potato crops each year on a plot basis, oven-dried for 24 h at 80°C and ground in a Wiley mill. These samples were analyzed for N, P, K, Ca and Mg content [30]. Leaf N was determined by the micro-Kjeldahl digestion method. Ground samples were digested with a nitric-perchloric-sulphuric acid mixture for the determination of P, K, Ca and Mg. Phosphorus was determined colorimetrically using the vanadomolybdate method, K was determined using a flame photometer, and Ca and Mg were determined by the EDTA titration method [31].
Statistical Analysis
Data collected were subjected to analysis of variance (ANOVA) using IBM SPSS Statistics 21 and Microsoft Excel 2013. Treatment means were compared using Duncan's Multiple Range Test (DMRT) at the p = 0.05 probability level.
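To illustrate the analysis pipeline, the Python sketch below runs a one-way ANOVA on hypothetical yield data with scipy. It simplifies the paper's randomized-complete-block analysis by ignoring the block factor, and Duncan's multiple range test itself is not available in scipy (it is implemented, for example, in R's agricolae package):

```python
import scipy.stats as st

# Hypothetical tuber yields (t/ha), three replicate plots per K rate
yields = {
      0: [10.2, 11.0, 10.5],
     40: [17.1, 17.9, 17.5],
     80: [24.6, 25.3, 25.1],
    120: [24.9, 25.2, 24.8],
    160: [24.0, 24.5, 24.3],
}

f, p = st.f_oneway(*yields.values())
print(f"F = {f:.1f}, p = {p:.4g}")
# If p < 0.05, a post-hoc test such as Duncan's multiple range test
# (as used in the paper) is applied to separate the treatment means.
```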
Initial Physical and Chemical Properties of Soil at the Experimental Sites
The physical and chemical properties of the two sites are presented in Table 1. The soils were sandy loam in texture and low in organic matter (OM), N, P, K, Ca and Mg. However, Mg in 2015 was adequate according to the critical values of nutrients for the agro-ecological zones of Nigeria [32]. Hence, it was expected that the application of K fertilizer would enhance soil fertility and sweet potato performance.
Effect of K Fertilizer on Soil Chemical Properties
Table 2 shows the effects of K fertilizer on the soil's chemical properties. Application of K fertilizer significantly increased the concentrations of N, P and K in the soil compared with the control. K fertilizer increased N and P from 0 to 160 kg ha-1, while K increased only up to 80 kg ha-1 fertilizer, after which there was a decrease. Application of K fertilizer reduced the concentrations of Ca and Mg in the soil compared with the control, with the reduction extending from 0 to 160 kg ha-1 fertilizer.
Effect of K Fertilizer on Nutrient Concentration of the Sweet Potato Leaf
Table 3 shows the effect of K fertilizer on the nutrient concentration of the sweet potato leaf. Application of K fertilizer significantly increased N, P and K concentrations in sweet potato leaves compared with the control. The values of N and P increased from 0 to 160 kg ha-1 K fertilizer. K in the sweet potato leaves peaked at 80 kg ha-1 K fertilizer, after which there was a decrease. Application of K fertilizer reduced the values of Ca and Mg in the sweet potato leaves, with 160 kg ha-1 having the lowest value and the control (no fertilizer application) the highest.
Effect of K Fertilizer on the Growth and Yield of Sweet Potato
The effects of K fertilizer on the growth and yield of sweet potato are shown in Table 4. In both years, the application of K fertilizer significantly increased growth (vine length, vine weight and number of leaves per plant) and tuber yield of sweet potato compared with the control. Growth and yield increased from 0 to 80 kg ha-1 K fertilizer, after which a slight decrease was observed. There were no significant differences between 80, 120 and 160 kg ha-1 K fertilizer. Using the mean of the two years, and compared with the control, the application of 40, 80, 120 and 160 kg ha-1 fertilizer increased the sweet potato tuber yield by 65.7, 134.3, 133.3 and 127.6%, respectively.
Effect of K Fertilizer on Proximate and Vitamin C Contents of Sweet Potato Tuber
The effects of K fertilizer on the proximate and vitamin C contents of sweet potato are presented in Figs. (1a-c, 2a-d). K fertilizer significantly influenced moisture content, vitamin C and carbohydrate compared with the control. Application of K fertilizer increased the protein content of sweet potato tuber up to the 40 kg ha-1 level, after which the content declined. The dry matter content of sweet potato tuber decreased as the level of K fertilizer increased from 0 to 160 kg ha-1 (Fig. 1b). The carbohydrate content increased significantly at all levels of K fertilizer application compared with the control; the highest value was obtained at 80 kg ha-1 fertilizer, after which there was a reduction. The moisture content of the sweet potato tuber increased significantly with the application of K fertilizer at all levels compared with the control, increasing from 0 to 160 kg ha-1 fertilizer (Fig. 2a). The fat content (Fig. 2b) decreased progressively with the level of K fertilizer from 0 to 160 kg ha-1, whereas fibre and vitamin C (Figs. 2c and 2d, respectively) were influenced by K fertilizer application, with both having their highest values at 80 kg ha-1 K fertilizer.
Fig. (3) describes the relationship between tuber yield of the sweet potato and K fertilizer application. Increasing the K fertilizer rate from 0 to 80 kg ha-1 (A) increased the tuber yield of sweet potato to 25 t ha-1. A further increase in the rate of K fertilizer to 120 kg ha-1 (B) did not result in a significant increase in sweet potato tuber yield.
DISCUSSION
The response of the soil to K fertilizer showed that the soil was deficient in K; Table 1 shows that the soil of the site was below the critical value for K (0.16-0.20 cmol kg-1) [32] for crops in the agro-ecological zones of Nigeria. N, P and K in the soil increased with fertilization, but Ca and Mg decreased. The decreases in Ca and Mg in both the soil and the plants indicate that the supplies of these nutrients could not keep pace with plant growth, so their concentrations in the plants were diluted as the plants grew larger (a dilution effect). Above 80 kg ha-1 K fertilizer, the soil K concentration started to decline, suggesting that this rate of K fertilizer is sufficient for the sweet potato.
The leaf nutrient concentrations of sweet potato were consistent with the soil chemical properties recorded in this experiment. For both years pooled, the correlations between soil N and leaf N, soil P and leaf P, soil K and leaf K, soil Ca and leaf Ca, and soil Mg and leaf Mg were all significant, with r values of 0.960, 0.956, 0.914, 0.979 and 0.931, respectively. Increased nutrient availability in the soil thus translated into increased uptake by the sweet potato plants.
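Such soil-to-leaf relationships can be quantified with a Pearson correlation, as in the sketch below; the paired values are hypothetical treatment means, since the study's underlying data are not reproduced here:

```python
from scipy.stats import pearsonr

# Hypothetical treatment means, one pair per K rate: soil K vs leaf K
soil_k = [0.12, 0.25, 0.41, 0.38, 0.33]  # cmol/kg
leaf_k = [1.1, 1.9, 2.8, 2.6, 2.3]       # % dry weight

r, p = pearsonr(soil_k, leaf_k)
print(f"r = {r:.3f}, p = {p:.3f}")
```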
Application of K fertilizer increased the growth and tuber yield of sweet potato in this study. The role of K in increasing crop growth and yield may be attributed to its functions in cell division, formation and translocation of carbohydrates (sugar and starch), activation of several enzymatic systems, regulation of osmosis and water status in plants, cell permeability, conversion of sugars into organic acids in roots, and regulation of N uptake by roots [33]. Potassium is known to increase tuber development in root and tuber crops, which respond strongly to generous K fertilizer application. Hence, the positive response of sweet potato tuber yield can be attributed to the enhanced starch synthesis and translocation brought about by K fertilizer application. Moreover, K application enhanced stomatal resistance, reduced the transpiration rate and increased relative water content; it may thus improve the water storage capacity of the cell and provide conditions for better yield [34].
In this study, K application increased vine length, vine weight and the number of sweet potato leaves compared with the control, which may relate to the fact that K enhances N uptake. The finding that K fertilizer at 80 kg ha-1 maximized growth and yield implies that this rate matches the crop's K requirement under the prevailing conditions.
The decrease in yield might also result from excessive K application leading to imbalanced sweet potato plant nutrition with respect to N, P, Ca and Mg, because uptake of other nutrients is usually affected. In this study, uptake of Ca and Mg was strongly depressed above the 80 kg ha-1 K level (Table 3). Nutritional balances between soil K, Ca and Mg are important in plant nutrition [35]. Shukla and Mukhi [36] observed an antagonistic relationship between K, Ca and Mg during absorption by the roots and translocation from the roots to the shoot. Excess K fertilization restricts Mg and Ca absorption in the tissue, which may cause Mg and Ca deficiencies. It was reported [37] that a high rate of K fertilization increased the K/Mg ratio and decreased the Mg concentration in potato petioles. K fertilizer applied above this rate (80 kg ha-1) therefore led to yield reduction. This study revealed that the optimum K fertilizer rate, in the form of KCl, for sweet potato cultivation on a tropical Alfisol is 80 kg ha-1. However, at another location, Calabar in southeast Nigeria, Uwah et al. [15] recommended 120-160 kg ha-1 KCl (muriate of potash) fertilizer for sweet potato. Optimum sweet potato yield has also been obtained by applying 150-300 kg K2O [13]. Trehan [38] and Sharma and Trehan [39] reported that the response of potato to K is considerably influenced by soil type and agro-climatic zone.
Estimating nutrients within leaves or fruit is important but complex because it is associated with many physiological processes in plants and fruits. Potassium fertilizer plays a major role in plant physiological processes. It was reported [40] that fruit weight and fruit size increase with increasing potassium fertilizer. Potassium has been noted to improve fruit quality by increasing total sugars and total soluble solids in fruits [41]. The increase in the moisture content of the tuber with increasing potassium fertilizer observed in this experiment can be attributed to the major role potassium plays in the swelling and expansion of cells and its close relationship with water [42].
Vitamin C content in the tuber was positively affected by the application of K compared with the control. This supports the findings of another study [43], which found that the use of potassium fertilizer increases juice and vitamin C content in banana and grapes by maintaining the pH and total acidity of the fruits. It has also been reported [44] that potassium improves the transfer of radiation energy into primary chemical energy in the form of ATP (photophosphorylation) and NADPH. This energy is required for all synthetic processes in plant metabolism that determine crop quality. The high energy status of crops well supplied with potassium also promotes the synthesis of secondary metabolites, such as vitamin C [45].
The 40 kg ha-1 K fertilizer rate increased the protein content of sweet potato tuber compared with the control because potassium facilitates the uptake and assimilation of nitrogen into simple amino acids and amides, which favors peptide and hence protein synthesis. The observed improvement in the carbohydrate content of the sweet potato upon K fertilizer application can be explained by the positive effect of K on the translocation of assimilates [45]. Application of K fertilizer at the optimum level increases starch concentration in plants, and K deficiency changes carbohydrate metabolism, causing accumulation of soluble carbohydrates and a decrease in starch content [46]. However, a heavy application rate of potassium may decrease starch content [47].
In this study, K fertilizer did not increase the dry matter content of the sweet potato, which could be due to KCl being used as the K source. Bansal and Trehan [48] also reported a reduction in tuber dry matter content when fertilizing with KCl; this can be attributed to the chloride ion rather than to potassium itself, whereas K2SO4 increases the dry matter content of tubers [49,50]. Using the means of the two years (Fig. 3), yield of the sweet potato increased with added K fertilizer up to 120 kg ha-1, but there was no significant difference between 80 and 120 kg ha-1 K, which produced the same yield (25 t ha-1). Increasing the K fertilizer application from 80 to 120 kg ha-1 would therefore represent an economic loss to farmers, considering the cost and scarcity of K fertilizer in Nigeria and other tropical countries: whether applying 80 or 120 kg ha-1, farmers obtain the same sweet potato yield. Therefore, the best economic response of sweet potato to K fertilizer in this agro-ecological zone, or under similar soil conditions elsewhere in the tropics, could be achieved by applying 80 kg ha-1 K fertilizer.
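One common way to formalise this yield-plateau argument is to fit a quadratic response curve and read off its vertex, as sketched below with illustrative yields consistent with the reported pattern (not the study's raw data). The fitted agronomic (yield-maximising) rate typically falls on the flat part of the curve, between 80 and 120 kg ha-1 here, while the economic optimum, once fertilizer cost is accounted for, lies at the lower end:

```python
import numpy as np

rates = np.array([0, 40, 80, 120, 160], dtype=float)  # kg/ha K fertilizer
yields = np.array([10.7, 17.7, 25.0, 25.0, 24.3])     # t/ha, illustrative only

a, b, c = np.polyfit(rates, yields, deg=2)  # yield = a*rate**2 + b*rate + c
k_opt = -b / (2.0 * a)                      # vertex of the fitted parabola
print(f"fitted yield-maximising rate ~ {k_opt:.0f} kg/ha")
```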
CONCLUSION
Application of K fertilizer in both years significantly increased soil N, P and K concentrations compared with the control, increasing the soil concentrations of these nutrients from 0 to 160 kg ha-1 K fertilizer. Soil K, however, increased only up to 80 kg ha-1 fertilizer, after which there was a decrease. There was a reduction in the values of Ca and Mg in the soil as the level of K increased. The leaf nutrient concentrations of sweet potato were consistent with the soil chemical properties recorded in this experiment. Application of 80 kg ha-1 K fertilizer produced the highest sweet potato growth and tuber yield, after which there was a reduction. Using the mean of the two years, and compared with the control, the application of 40, 80, 120 and 160 kg ha-1 fertilizer increased the sweet potato tuber yield by 65.7, 134.3, 133.3 and 127.6%, respectively. K fertilizer significantly influenced moisture, vitamin C and carbohydrate compared with the control. Dry matter content of the sweet potato was reduced by K application across the 0-160 kg ha-1 range. Therefore, the best yield, quality and economic response of sweet potato to K fertilizer in this agro-ecological zone, or under similar soil conditions elsewhere in the tropics, could be achieved by applying 80 kg ha-1 K fertilizer.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
Not applicable.
HUMAN AND ANIMAL RIGHTS
No animals/humans were used for studies that are the basis of this research.
CONSENT FOR PUBLICATION
Not applicable.
Table 1 . Initial soil physical and chemical properties of the site before experimentation.
Critical values of nutrients according to Akinrinde and Obigbesan (2000).
Table 2 . Effect of K fertilizer on soil chemical properties.
Values followed by similar letters under the same column are not significantly different at p=0.05 according to Duncan's multiple range test
Table 3 . Effect of K fertilizer on leaf nutrient concentration of sweet potato.
Values followed by similar letters under the same column are not significantly different at p=0.05 according to Duncan's multiple range test
"year": 2019,
"sha1": "3a5ff3f78e792e50f953c0e91636bc2a57c06283",
"oa_license": "CCBY",
"oa_url": "https://openagriculturejournal.com/VOLUME/13/PAGE/58/PDF/",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3a5ff3f78e792e50f953c0e91636bc2a57c06283",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Merging familiar and new senses to perceive and act in space
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? I discuss work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric), and coordinating multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using auditory cues. The work uses a model-based approach to compare participants’ behaviour with the predictions of alternative information processing models. This lets us see when and how—during development, and with experience—the perceptual-cognitive computations underpinning our experiences in space change. I discuss progress on understanding the limits of effective spatial computation for perception and action, and how lessons from the developing spatial cognitive system can inform approaches to augmenting human abilities with new sensory signals provided by technology.
Introduction
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? Here, I review work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric) and combining multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using novel auditory cues.
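Before turning to each problem, it is worth stating the ideal-observer benchmark that this literature typically uses to judge whether cues are combined efficiently: independent Gaussian cues should be averaged with weights proportional to their reliabilities (inverse variances), yielding a fused estimate more precise than either cue alone. The Python sketch below is a generic illustration of that standard model with made-up numbers, not an analysis from the specific studies reviewed here:

```python
import numpy as np

def mle_combine(estimates, sigmas):
    """Reliability-weighted (maximum-likelihood) fusion of independent
    Gaussian cues: weights are proportional to inverse variance."""
    w = 1.0 / np.square(sigmas)
    w = w / w.sum()
    fused = np.dot(w, estimates)
    fused_sigma = np.sqrt(1.0 / np.sum(1.0 / np.square(sigmas)))
    return fused, fused_sigma

# e.g. vision places a target at 100 cm (sd 5), proprioception at 110 cm (sd 10)
est, sd = mle_combine(np.array([100.0, 110.0]), np.array([5.0, 10.0]))
print(est, sd)  # fused estimate 102.0, sd ~4.47: better than either cue alone
```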
Spatial frames of reference
Spatial relationships can be stored in different frames of reference, with advantages for specific tasks. To open my car door, it is most useful to store where it is relative to my hand (a body-or self-referenced, egocentric representation). In contrast, to find the car in the car park, perhaps from a new viewpoint, it is most useful to store where it is relative to stable external landmarks (an externally referenced, allocentric representation). The brain represents spatial representations with different coordinate frames using different specialised substrates (review, Burgess 2008)-for example, those in body-referenced frames useful for guiding immediate action in parietal cortex (Bremmer et al. 1997), and those in frames using external landmarks in the hippocampus (Hartley et al. 2014).
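To make the two frames concrete: expressing an allocentric (world-frame) location in an egocentric (body-centred) frame is a translation by the observer's position followed by a rotation by minus their heading. The 2D Python sketch below is an illustration, not code from the studies discussed; it also shows why a 180° rotation, as in the infant studies described next, flips an egocentrically stored response from one side to the other:

```python
import numpy as np

def world_to_ego(p_world, self_pos, heading_rad):
    """Express a world-frame point in a body-centred frame
    (+x ahead, +y to the observer's left; heading from the world +x axis)."""
    c, s = np.cos(-heading_rad), np.sin(-heading_rad)
    R = np.array([[c, -s], [s, c]])
    return R @ (np.asarray(p_world) - np.asarray(self_pos))

# A landmark 2 m 'north' of an observer at the origin facing 'east':
print(world_to_ego([0.0, 2.0], [0.0, 0.0], 0.0))    # [0, 2]: on the left
# After the observer rotates 180 degrees, the same landmark is on the right:
print(world_to_ego([0.0, 2.0], [0.0, 0.0], np.pi))  # ~[0, -2]
```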
Young children succeed with egocentric representations and tasks earlier than allocentric ones. In particular, when egocentric and allocentric responses conflict, young children tend to follow an incorrect egocentric strategy. For example, in studies by Acredolo (Acredolo 1978; Acredolo and Evans 1980), younger infants who learned to turn to one side (e.g. their right) to find a target, and were then moved and rotated 180°, perseverated with this now incorrect egocentric response. This points to the multiple challenges of encoding more complex allocentric versus simpler egocentric spatial relationships, updating representations correctly to account for one's own movement, and selecting the correct reference frame when different frames conflict (more discussion: Nardini et al. 2009, and below).
Development: coordinating multiple reference frames
Most of the time, multiple potential encodings or frames, which may be more or less useful for a specific task, are available. Beginning in 2006, our studies addressed the question of when and how multiple reference frames are coordinated in development. In an initial study, 3-6-year-olds attempted to recall the locations of objects on an approximately 1 m² board incorporating small surrounding landmarks (Nardini et al. 2006). Board and/or participant were moved between hiding and recall in a factorial design that varied the validity of (1) the self, (2) the wider room, and (3) the small surrounding landmarks as a basis for recall. Children were already competent from age 3 years when self- and/or room-based reference frames were available, but only above chance from 5 years at using the surrounding landmarks alone (and disregarding the other frames). Subsequent modelling of responses indicates that at intermediate ages, children's responses are a mixture between using the incorrect frames and the correct one (Negen and Nardini 2015). A highly controlled version of the same task using VR, in which children no longer interact with a miniature moving array but are immersed in the virtual test environment (Negen et al. 2018a), reached the same conclusion. Simple (e.g. body-referenced) representations are reliably used from a young age, but when these are not valid, correctly coordinating and using only relevant landmarks to respond emerges later, at 4-5 years of age.
Development: coordinating multiple landmarks
Tracing the earliest ages at which allocentric recall (i.e. using only external landmarks) is demonstrably above chance identifies a starting point for allocentric abilities, but these very earliest abilities may be based only on very simple or partial information about external landmarks. For example, in Negen et al. (2018a), the earliest above-chance use of the allocentric frame could be explained by encoding position along just one axis of the space, far short of a fully accurate spatial representation. Similarly, allocentric recall that can be based on roughly matching visual features emerges earlier than that requiring strict representation of spatial relationships (Nardini et al. 2009). A VR study of 3- to 8-year-olds' recall with respect to several distinct landmarks asked how abilities to coordinate these develop (Negen et al. 2019a). The study looked for markers of performance beyond that explicable by use of just the single nearest landmark. The results showed that until around 6 years, allocentric performance was supported by use of a single landmark, a strategy better than egocentric, but still subject to significant errors (e.g. mirror reversals). Only after 6 years was there evidence for coordination of multiple landmarks to improve precision and avoid such errors. Interestingly, however, this was also moderated by the complexity of the environment: in an extremely simple (less naturalistic) space, there was earlier evidence for coordination of multiple landmarks.
Coordinating multiple reference frames and landmarks: developmental mechanisms and bottlenecks
These studies reveal crucial computational changes in spatial recall during early life. We see a progression from reliance on simple (body-based/egocentric) encodings, to those using simple elements of the external environment (e.g. single landmarks, or features of landmarks), to those coordinating multiple landmarks. The competence of typical adults at perceiving and acting flexibly in space emerges from this long developmental trajectory. On comparable experimental tasks, clinical groups with spatial difficulties (e.g. Williams Syndrome) appear to remain at levels of development typical of pre-allocentric children (e.g. Nardini et al. 2008a), as do adult hippocampal patients (King et al. 2002). What are the developmental mechanisms, and what bottlenecks hold back younger children (or clinical groups) from flexible spatial recall? The degree to which these changes represent either reshaping of abilities to encode and represent the relevant information (e.g. by the hippocampus), or abilities to correctly select the relevant encoding (disregarding irrelevant cues or reference frames) is one key question for future research. Initial evidence that individual differences linked to inhibitory control are one predictor of performance (Negen et al. 2019a) suggests that not only encoding, but also selection plays a role. Evidence in the same study that a simpler environment shows earlier development also suggests a role for processes of attention and cue selection. These findings raise interesting questions about how closely the present coordination problems in spatial cognitive development are linked to development of more general, central, cognitive capacities, such as inhibition or cognitive control.
Multisensory processing of spatial information
We sense the world using multiple channels of sensory input, including visual, auditory, and haptic. The challenge of situating ourselves in space includes coordinating and combining these disparate information sources. For example, for dealing with changes of viewpoint (see above), visual information is useful for detecting the new viewpoint (e.g. using visual landmarks) and potentially for tracking own movement between the different viewpoints (e.g. using optic flow). Non-visual (e.g. vestibular and kinesthetic) information also crucially helps track own movement to account for viewpoint changes (Simons and Wang 1998; Wang and Simons 1999), including during development (Nardini et al. 2006; Negen et al. 2018a). This is evident in the studies just mentioned because when viewpoint changes happen in the absence of movement-related information (e.g. a new viewpoint is presented, but the participant did not walk there), accuracy is poorer in adults and takes longer to be above chance in childhood.
Measuring combination of multisensory spatial signals
The evidence reviewed above for the role of movement, as well as vision, comes from spatial tasks that create large cue conflicts. In key test conditions, a viewpoint change is experienced without the corresponding movement: i.e. the environment is rotated in front of the participant, or the participant is virtually 'teleported'. This leaves unclear the extent to which performance is poor because of (a) the absence of useful movement information, or (b) an incorrect reliance on the (erroneous) movement information that states that no viewpoint change has occurred. We saw that young children just mastering these tasks switch between the latter erroneous strategy and one that correctly disregards movement information (Negen and Nardini 2015), and that performance on a related task is predicted by individual differences in inhibitory control (Negen et al. 2019a). To more clearly determine how spatial signals and cues interact, a more recent approach (Cheng et al. 2007) applies Bayesian decision theory to questions about how spatial information is combined. This avoids selection and conflict problems and also lets us measure the degree to which using two signals together leads to the precision benefits expected for a rational (Bayesian) ideal decision-maker. The approach essentially (see Ernst and Banks 2002; Rohde et al. 2016) varies the availability of cue 1 and cue 2 across conditions (testing cue 1 alone, cue 2 alone, and cues 1 + 2 together) to test for Bayesian precision benefits. It also uses small conflicts (cue 1 vs. cue 2 indicate slightly differing target locations) to measure the relative reliance on (weighting for) each cue.
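To make these ideal-observer predictions concrete, the sketch below is illustrative only: the numbers and variable names are hypothetical and are not taken from any of the studies cited here. It computes the reliability-based weights and the predicted two-cue precision, and shows how a small-conflict condition yields an empirical estimate of cue weighting.

```python
def bayes_combination(sigma1, sigma2):
    """Ideal-observer predictions for two independent Gaussian cues."""
    r1, r2 = 1 / sigma1**2, 1 / sigma2**2   # reliability = inverse variance
    w1 = r1 / (r1 + r2)                     # predicted weight on cue 1
    sigma12 = (1 / (r1 + r2)) ** 0.5        # predicted combined SD
    return w1, sigma12

# Hypothetical single-cue SDs measured in cue-1-alone and cue-2-alone conditions:
w1, sigma12 = bayes_combination(sigma1=2.0, sigma2=4.0)
print(f"predicted weight on cue 1: {w1:.2f}")       # 0.80
print(f"predicted combined SD:     {sigma12:.2f}")  # ~1.79 (< 2.0, the best single cue)

# A small-conflict trial: cue 1 and cue 2 indicate slightly different
# locations; the mean response reveals the empirical weight on cue 1.
loc1, loc2, mean_response = 10.0, 12.0, 10.5
empirical_w1 = (loc2 - mean_response) / (loc2 - loc1)
print(f"empirical weight on cue 1: {empirical_w1:.2f}")  # 0.75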
Combination of multisensory signals for navigation
We applied this approach to a developmental navigation task (Nardini et al. 2008b). Illuminated visual landmarks in an otherwise dark room ('cue 1') could potentially be used together with non-visual (vestibular, kinesthetic) movement information ('cue 2') to return collected objects directly to their previous locations after walking two legs of a triangle (i.e. triangle completion). A Bayesian decision-maker would be measurably more precise with both cues together than with either alone. While adults met this prediction, children aged 4 and 8 years did not: they were no more precise with two cues together than with the best single cue, and the model that best explained their precision and cue weighting was one in which they selected a single cue to use on any trial, rather than combining (averaging) them. This indicates that issues with development of spatial recall in earlier tasks (e.g. Nardini et al. 2006) revealed not only an immaturity in selecting the correct representation, but also fundamental immaturities in combining multiple valid signals efficiently when these are available. The finding of efficient or near-optimal spatial cue combination in adults has been replicated and extended (Bates and Wolbers 2014; Chen et al. 2017; Sjolund et al. 2018), while the finding of immaturity in cue combination long into childhood has been replicated in many tasks, including more basic (e.g. table-top, non-navigational) spatial tasks, described next.
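The contrast between the two candidate models can be illustrated with a small simulation. This is a sketch under simplifying Gaussian assumptions, with invented single-cue precisions; it is not the authors' actual model-fitting code. Averaging two unbiased cues yields a response distribution tighter than the best single cue, whereas trial-by-trial switching produces a mixture that is never tighter than the best single cue.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sigma_vision, sigma_movement = 2.0, 3.0    # hypothetical single-cue SDs
v = rng.normal(0.0, sigma_vision, n)       # visual-landmark errors around the target
m = rng.normal(0.0, sigma_movement, n)     # movement-based (path integration) errors

# Combination model: reliability-weighted average of the two estimates.
w = sigma_movement**2 / (sigma_vision**2 + sigma_movement**2)  # weight on vision
combined = w * v + (1 - w) * m

# Switching model: follow a single cue on each trial (probability p for vision).
p = 0.7
use_vision = rng.random(n) < p
switched = np.where(use_vision, v, m)

print(f"vision-alone SD: {v.std():.2f}")          # ~2.00
print(f"combined SD:     {combined.std():.2f}")   # ~1.66, beats the best single cue
print(f"switching SD:    {switched.std():.2f}")   # ~2.35, never beats the best single cue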
Development of spatial combination of multisensory information
Basic abilities to understand multisensory correspondences and to benefit from redundant multisensory information of some kinds are present in early life (Bahrick and Lickliter 2000;Kuhl and Meltzoff 1982). However, a growing body of research shows specifically that the Bayes-like precision benefits adults experience when combining multisensory spatial signals take until around age 10 years of life or later to emerge. As well as not showing multisensory precision gains when navigating (Nardini et al. 2008b), unlike adults (Ernst and Banks 2002), children do not improve their precision at comparing the heights of bars with vision and touch together (Gori et al. 2008), in part because they overweight the less reliable cue. Similarly, unlike adults (van Beers et al. 1999), children do not improve their abilities to localise a point on a table-top with vision and proprioception together (Nardini et al. 2013). Even within the single sense of vision, unlike adults (Hillis et al. 2004), children do not combine two distinct cues to surface orientation (stereo disparity and texture) until the age of 12 years (Nardini et al. 2010); younger children's behaviour best fits switching between following one cue or the other on any trial.
Development of multisensory spatial combination: mechanisms and bottlenecks
These failures to achieve Bayes-like precision gains during perception long into childhood may at first seem surprising. From a decision-theoretic point of view, children, whose precision at most simple 'unimodal' perceptual tasks takes many years to attain adult levels, would especially stand to benefit from efficiently combining the relatively noisy information sources they have. However, to achieve efficient combination, the system must overcome a number of developmental challenges (Nardini and Dekker 2018).
Challenge 1: calibration
First, the different senses or signals need to be correctly calibrated. Initial evidence that calibration plays a role comes from a study in which children below 8 years of age combined visual and auditory signals to localise targets in a task designed to improve unisensory calibration (Negen et al. 2019b).
Challenge 2: appropriate weighting
Second, efficient, Bayes-like combination of signals requires each to be weighted in proportion to its relative reliability, or inverse variance (Ernst and Banks 2002;Rohde et al. 2016). There is evidence for mis-weighting of signals in development, including overweighting of unreliable (Gori et al. 2008) and even completely irrelevant (Petrini et al. 2015) cues.
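Stated in the standard notation of this literature (the usual maximum-likelihood-estimation formulation; given here for reference rather than quoted from this paper), the optimal estimate weights each cue by its relative reliability:

```latex
\hat{S} = w_1 \hat{S}_1 + w_2 \hat{S}_2, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad
\sigma_{12}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2} \leq \min(\sigma_1^2, \sigma_2^2).
```

Over- or underweighting relative to these weights raises the variance of the combined estimate above this bound, which is one way mis-weighting in development can erase the expected multisensory gain.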
Challenge 3: neural substrates for efficient combination
A third challenge, not necessarily distinct from the above two but expressing them at a different level of analysis, is maturation of the still poorly understood neural substrates for efficient averaging of sensory signals. It is clear that combination takes place at multiple levels of a hierarchy of sensory processing and decision-making (Rohe and Noppeney 2016), including in early 'sensory' areas (Gu et al. 2008).
Our initial work using fMRI shows that immaturities in the earliest component of this network accompany inefficient cue combination. 'Automatic' combination of visual cues to 3D layout (surface slant) in early sensory ('visual') areas, for stimuli displayed in the background while participants carry out a different task at fixation, is present in adults (Ban et al. 2012) and in 10-to-12-year olds, but not 6-to-10-year olds (Dekker et al. 2015). Thus, acquiring efficient multisensory combination abilities for spatial judgments would seem to depend on developmental reshaping of sensory processing at a very early level.
Enhancing human perception and action in space: opportunities
In this final section, I sketch out applications of the work reviewed above to the newer domain of optimising human perception and action using 'new' sensory signals, for example, enhancing spatial abilities using new devices or sensors (Nagel et al. 2005). There is increasing evidence that the organisation of neural substrates for perception and action in space can be remarkably flexible (Amedi et al. 2017). For example, some blind individuals are expert at using click echoes to sense spatial layout, recruiting 'visual' cortex for perception of layout through sound (Thaler et al. 2011). Advances in wearable technology also make it increasingly feasible to provide people with novel sensors and signals. Devices to substitute or augment spatial perception via sound or vibrotactile cues have been developed and show promising signs of everyday use and reshaping perception (Maidenbaum et al. 2014). Which challenges must be met in order for approaches such as these to be integrated effectively into people's everyday spatial cognitive repertoire?
Enhancing human perception and action in space: challenges
There are key parallels between children first learning to coordinate natural sensory signals (Sect. "Coordinating multiple sensory signals", above) and people of all ages learning to coordinate newly learned sensory skills into their existing multisensory repertoire. As an example, consider learning to use a new device that translates distance or depth to an auditory signal such as pitch. The three challenges identified above are also crucial here: first, achieving an accurate calibration of the new sense to the familiar representation of space, second, appropriately weighting the new signal with the old one when both provide useful information, third, at the neural level of analysis, being able to implement these processes in highly efficient circuits supporting subjectively effortless or 'automatic' perception (e.g. those in early 'sensory' areas).
Enhancing human perception and action in space: initial findings
With these questions and issues in mind, we have embarked on new studies of the scope to enhance human perception and action in space using new sensory signals. In an initial study (Negen et al. 2018b), in a VR environment, we trained healthy adults to use an echo-like auditory cue, together with a noisy visual cue, to judge distance to an object. Within five short (approx. 1-h) training sessions, we found evidence for efficient Bayes-like combination, including improved precision (albeit falling short of the Bayes-optimal improvement) and reweighting with changing cue reliabilities. Recalling that children often do not show combination even with familiar, natural cues (Nardini et al. 2008b), this suggests that the mature perceptual-cognitive system may bring some advantages to novel cue combination problems and offers a promising outlook on flexibly enhancing human spatial abilities. However, many questions remain-including the prospects and training time course for eventually embedding such new abilities in low-level sensory processing, most likely to support subjectively effortless or 'automatic' perception.
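One simple way to express "improved precision falling short of the Bayes-optimal improvement" numerically is sketched below. The efficiency index and all numbers are invented for illustration; this is not the paper's reported analysis.

```python
def combination_efficiency(sigma_best_single, sigma_optimal, sigma_observed):
    """Fraction of the ideal precision gain actually achieved.

    1.0 = fully Bayes-optimal combination;
    0.0 = no more precise than the best single cue.
    """
    return (sigma_best_single - sigma_observed) / (sigma_best_single - sigma_optimal)

# Hypothetical SDs: vision alone 2.0, the newly learned auditory cue alone 2.5.
sigma_v, sigma_a = 2.0, 2.5
sigma_opt = (1 / (1 / sigma_v**2 + 1 / sigma_a**2)) ** 0.5   # ideal combined SD, ~1.56
print(f"ideal combined SD: {sigma_opt:.2f}")
print(f"efficiency: {combination_efficiency(sigma_v, sigma_opt, sigma_observed=1.8):.2f}")
# ~0.46: precision improved, but fell short of the Bayes-optimal gain.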
Enhancing human perception and action in space: future directions
Ongoing work is investigating the manner in which newly acquired spatial skills become embedded in perception. For example, there is initial evidence that within ten training sessions, and with another visual cue with a more natural form of noise (uncertainty), participants still do not attain Bayes-optimal performance; however, the skill enhances speed (as well as accuracy) of responses and resists verbal interference. Sensitive model-based tests of some of these abilities are assisted by analysis methods beyond those in the classic cue combination literature (Aston et al. 2021). Key future directions include investigating extended training, neural substrates (using fMRI), motor/action tasks, and other perceptual problem domains (e.g. sensing object properties, as well as their spatial locations).
Summary and conclusions
The research described here has addressed two combination problems underlying perception and action in space: coordinating multiple reference frames and coordinating multiple sensory signals. Our understanding of development in these domains has been improved by adoption of a model-based approach, which, for example, compares performance with the predictions for an ideal (Bayesian) decision-maker. Both systems show substantial and extended development during childhood. In the domain of reference frames, key outstanding questions include the extent to which developmental improvements in abilities to either represent or select relevant information play a crucial role, and the extent to which these can be linked to maturation of specific brain systems and/or development of broader cognitive abilities. In the domain of multiple sensory signals, key outstanding questions include factors limiting efficient combination of signals in childhood, and the extent to which these can be tied to specific elements of information processing models and/or maturation of specific neural substrates. There are important parallels between the information processing challenges for children using their familiar senses and those for adults learning to use new sensory signals. Therefore, developmental research also has an important role in guiding the search for optimal approaches to enhancing human spatial abilities using technology. | 2021-08-20T06:17:27.821Z | 2021-08-19T00:00:00.000 | {
"year": 2021,
"sha1": "7ca867842cdd0bb133d44f0ae812684d57ef42df",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10339-021-01052-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "43603f87062e9316a46023e4cdca723a6bdc4d4c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4378937 | pes2o/s2orc | v3-fos-license | ACVIM consensus update on Lyme borreliosis in dogs and cats
An update of the 2006 American College of Veterinary Internal Medicine (ACVIM) Small Animal Consensus Statement on Lyme Disease in Dogs: Diagnosis, Treatment, and Prevention was presented at the 2016 ACVIM Forum in Denver, CO, followed by panel and audience discussion and a drafted consensus statement distributed online to diplomates for comment. The updated consensus statement is presented below. The consensus statement aims to provide guidance on the diagnosis, treatment, and prevention of Lyme borreliosis in dogs and cats.
A more severe form of LB in dogs, Lyme nephritis, is less common than Lyme arthritis.
No experimental model to study its pathogenesis, treatment, and prevention in over-represented (retriever) breeds has been developed, and no validated staining techniques are available to prove that glomerular immune-complexes are Lyme-specific in kidney biopsy specimens from living dogs. Despite these limitations, strategies for empirical management of Lyme nephritis are given, as recently offered by the International Renal Interest Society (IRIS) Glomerular Disease Study Group. [2][3][4][5] The objectives of this consensus statement are to review the evidence and provide findings and recommendations that address these topics regarding Borrelia spp. infection in dogs or cats:
| Topic 1a: Borrelia spp. and associated ticks
There are at least 52 Borrelia species, 6 including 21 in the LB group (B. burgdorferi sensu lato; Bb-sl; these gram-negative spirochetes generally migrate within the host interstitially), 29 in the relapsing fever group (migrating hematogenously), and 2 undetermined members. In dogs residing in North America, LB has only been associated with B. burgdorferi sensu stricto (Bb), of which at least 30 subtypes or strains exist, based on outer surface protein C (OspC) genotyping. 7 The strains appear host-specific; different strains are more common in people as compared with dogs. 7,8 In Europe, coinfections of Bb with other Bb-sl strains (ie, B. garinii) may predispose dogs to illness. 9 Other Bb-sl species causing human LB (ie, B. mayonii 10 …
| Topic 1b: Geographic distribution and epidemiology of Bb infection
The geographical persistence and spread of Bb is related to the 2-year, 3-stage (larva, nymph, and adult) life cycle of its Ixodes spp. vector, which feeds on a variety of hosts. One blood meal occurs per stage, and uninfected tick larvae hatch to feed on Borrelia-infected reservoir hosts, principally mice, squirrels, shrews, birds (I. scapularis) and lizards (I. pacificus). 22 Within endemic geographical areas, the prevalence of B. burgdorferi in nymphal or adult ticks can reach approximately 50%. [34][35][36] Although nymphs are likely responsible for the majority of Bb transmission to humans and dogs because the small size of this stage allows them to feed on the host undetected, dogs may be less susceptible to transmission of Bb from infected nymphs versus infected adult ticks. 37,38 Borrelia infection often occurs in the warmer months as a result of the questing behavior of ticks and the recreational habits of humans (owners) and their dogs. 39 Later the same summer, nymphs molt to adults which feed on large mammals, preferentially deer, but also dogs and humans. Adult Ixodes ticks can be active in the fall, winter, and early spring when ambient air temperatures exceed 4°C (40°F). 40 Deer are important for the maintenance, amplification, and spread of the tick population because adult ticks mate on them. 22 … 42 The same may be true for dogs. Travel history of sick or seropositive dogs is an important historical question because cases in nonendemic areas may occur after travel to or importation from endemic disease areas.
The main vector for Bb-sl in Europe is I. ricinus and the distribution of LB follows its expansion. 43 The highest prevalence was found in …

Subclinical histologic evidence of mild-to-moderate synovial changes and tick bite site perivasculitis and perineuritis are consistent findings in dogs experimentally infected with Bb after tick exposure; the changes seen are milder in 18-week-old versus 6-week-old exposed puppies. 38,[48][49][50] Although neurologic signs were described in a few seropositive dogs in the past, 51 recent field studies showed no association of neurologic signs with seropositivity in dogs; thus, neuroborreliosis as seen in human and equine 52 patients is not well-documented in dogs. [53][54][55][56] Fatal myocarditis was described in Boxer pups with Bb-positive immunohistochemistry, for which no other cause was found; 57 there may be a genetic (breed) predisposition for autoimmune myocarditis triggered by a Lyme antigen which mimics cardiac myosin. 58
| Topic 2c: Considerations in cats
Cats living in Bb-endemic areas are sometimes seropositive.
| Topic 3a
In dogs, serology is the only recommended modality to evaluate for exposure to Bb (Table 2). Validated serologic tests for Bb exposure in North America include in-house and reference laboratory C6-based … months (without re-exposure), whereas OspF antibodies increase by 6-8 weeks and remain increased in untreated carriers. [83][84][85]87 The OspC antibodies probably increase again in field conditions (ie, re-exposure is a natural booster); thus, finding OspC antibodies in a nonvaccinated dog may indicate recent exposure or re-exposure, without specifying when the dog was first infected in its life. The OspA antibodies usually are a marker for vaccination, but they may develop transiently in early infection, 83,85 or possibly later during chronic infections, as seen in infected humans, because Bb displays antigenic variation and expresses its antigenic repertoire over time to avoid host immunity. [88][89][90][91] The C6 result has been shown to wane after treatment; [92][93][94] OspF antibodies also may wane. 95 Determination of quantitative titers to C6 (or potentially OspF), pre- and 3 … 94
| Topic 3b
In cats, several studies document that antibodies against Bb occur in the serum of cats that are naturally exposed, or infected with Bb after being experimentally infested with I. scapularis. 70 For ICGN with profound proteinuria, hypoalbuminemia, nephrotic syndrome, or rapidly progressive azotemia, single-drug or combination treatment consisting of rapidly acting immunosuppressive agents ( …
| Topic 4c
Because borreliosis has never been confirmed in a single cat, the optimal treatment plan is unknown. In cats with suspected anaplasmosis, clinical signs rapidly resolve after doxycycline is administered at 5 mg/kg q12h or 10 mg/kg PO q24h for 14-28 days. 77
Pros and cons of treating a nonclinical Bb-seropositive dog:

Pros:
- Treatment of possible Bb-associated periarticular inflammation
- Treatment of possible coinfections
- Possible prevention of future Lyme arthritis or Lyme nephritis

Cons:
- Treatment is not needed if periarticular inflammation is not present; older (18-week-old) infected puppies showed milder histologic changes than younger (6-week-old) infected puppies
- Treatment is not needed if coinfection is not present
- There is no ability to monitor the response to treatment if the dog is truly nonclinical; the vast majority of Bb-seropositive dogs never become ill nor proteinuric
- Unnecessary owner cost
- Overuse of antibiotics may cause microbial resistance in the environment at large
- Possible adverse effects of treatment
- Possible laxity in checking for proteinuria in carriers, even though they may not all be cleared with treatment
- Theoretically, a subclinically infected dog may be in a premunitive state that could be protective, at least for that particular strain

… reduction is considered acceptable, as compared with being an indication for continued treatment in the absence of clinical signs. The argument for treating until Quant C6 results wane by at least 50% is that the organism may never be cleared as it enters "protected" collagen tissue, and may develop into a latent cystic or L-form. Clinicians who treat believe treatment may lessen the likelihood of future development of immune-complex disease such as ICGN or the nonclinical histologic changes found in experimental dogs (eg, arthritis, perivasculitis, and perineuritis), although this has never been confirmed by a controlled study. 140,141 Combinations of products with different mechanisms also may be used. See Table 6 for a comparison of some commonly used tick control products.
In 1 study of 9 cats infested with wild-caught I. scapularis twice, 2 cats seroconverted after the first infestation, became seronegative, and then seroconverted again after the second infestation suggesting a new primary infection. 76 Thus, it appears that Bb infection does not induce preventive immunity in cats and repeated infection can occur without tick control. In another study of naturally exposed cats with and without clinical signs referable to borreliosis, whether or not the owner purchased a tick control product was recorded. 142
| Topic 7b: Bb vaccination
The efficacy of tick control products is excellent, as proven by prevention of seroconversion after tick exposure challenge. 135,137,143 However, compliance for using these products properly is an ongoing problem, and many veterinarians in Bb-endemic areas also recommend vaccination.
CONFLICT OF INTEREST DECLARATION
The
OFF-LABEL ANTIMICROBIAL DECLARATION
Authors declare no off-label use of antimicrobials.
INSTITUTIONAL ANIMAL CARE AND USE COMMITTEE (IACUC) OR OTHER APPROVAL DECLARATION
Authors declare no IACUC or other approval was needed. | 2018-04-03T01:20:56.371Z | 2018-03-22T00:00:00.000 | {
"year": 2018,
"sha1": "fb7accbf8f8b5c3ed36c6c288cd03219faf7cbf7",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1111/jvim.15085",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb7accbf8f8b5c3ed36c6c288cd03219faf7cbf7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
198489162 | pes2o/s2orc | v3-fos-license | STEM-based Science Learning Design in the 2013 Curriculum
This paper discusses the design of STEM-based science learning for junior high school under the 2013 curriculum, using the Project Based Learning (PjBL) model. STEM is a teaching and learning approach that integrates science, technology, engineering, and mathematics to build the 21st-century skills promoted in the 2013 curriculum. This paper discusses only one teaching and learning model that supports and suits the STEM approach, namely the Project Based Learning model. Learning strategies for integrating STEM education have been identified as project-based, problem-based, and inquiry-based learning. Through this learning model, students carry out a project collaboratively to yield a product.
Introduction
The curriculum plays an important role in the process of education; it shapes and influences learning activities. The National Curriculum (the 2013 Curriculum) equips Indonesian students to master 21st-century skills. Students need to be prepared with 21st-century skills such as critical thinking, creativity, problem solving, and decision making, and to work cooperatively through collaboration and communication [1]. The learning undertaken by teachers must be oriented toward 21st-century learning, which has the following characteristics: (1) the learning approach is centered on students; (2) students are taught to collaborate; (3) learning material is associated with problems faced in daily life; and (4) learning prepares students to be responsible citizens [2].
Achieving the above objectives requires an approach that can accommodate the characteristics of 21st-century learning. One approach that does, and that strengthens the implementation of the 2013 curriculum, is STEM (Science, Technology, Engineering, and Mathematics). What is STEM? STEM is an acronym for Science, Technology, Engineering, and Mathematics [3]. Science: the study of the natural world. Technology: one surprise is that the STEM definition of technology includes any product made by humans to meet a want or need (so much for all technology being digital); a chair is technology, so is a pencil; any product kids create to solve a problem can be regarded as technology. Engineering: the design process kids use to solve problems. Math: the language of numbers, shapes, and quantities that seems so irrelevant to many students [4]. By another definition, technology is any modification of the natural world made to fulfill human needs or desires, and engineering is a systematic and often iterative approach to designing objects, processes, and systems to meet human needs and wants [5].
Additionally, as people's worldviews change, STEM education is challenged to show its meaningful, practical nature. Shilling, in a report through the US Department of Education (2016) [6], argues: "The complexities of today's world require all people to be equipped with a new set of core knowledge and skills to solve difficult problems, gather and evaluate evidence, and make sense of information they receive from varied print and, increasingly, digital media. Learning and doing STEM helps develop these skills and prepares students for a workforce where success results not just from what one knows, but from what one is able to do with that knowledge." The movement and spirit of STEM education thus lead to an effort to build critical human capital competencies for a 21st-century economy [7]. Several STEM definitions have been put forward by experts: STEM is defined as combining the disciplines of science, technology, engineering, and mathematics [8]. STEM is an interdisciplinary field consisting of four disciplines, namely science, technology, engineering, and mathematics [9]. Integration of STEM in class is a type of curriculum integration [10]. One expert states that STEM is more than just a group of subject areas: it is a movement to develop deep mathematical and scientific foundations so that students can be competitive in 21st-century work [11]. But this movement goes far beyond preparing students for particular jobs. STEM develops a set of thinking, teamwork, investigative, and creative skills that students can use in all areas of their lives. STEM is not an independent class; it is a way to intentionally connect different subjects throughout the existing curriculum [11].
The STEM approach helps teachers in teaching and learning. As one expert notes, STEM education is a growing trend, and many believe it can help teachers meet this challenge [3]. STEM integration has several characteristics in science learning. There are six characteristics of a great STEM lesson: (1) STEM lessons focus on real-world issues and problems; (2) STEM lessons are guided by the engineering design process; (3) STEM lessons immerse students in hands-on inquiry and open-ended exploration; (4) STEM lessons involve students in productive teamwork; (5) STEM lessons apply the rigorous math and science content students are learning; (6) STEM lessons allow for multiple right answers and reframe failure as a necessary part of learning [4]. The STEM approach helps students and teachers solve problems in learning. Among its benefits, the STEM approach helps students become better problem solvers, innovators, inventors, independent and logical thinkers, and technologically literate [12].
Many learning models can be used to apply the STEM approach in science learning. Learning strategies for integrated STEM education have been identified and classified as project-based, problem-based, and inquiry-based learning [9]. The choice of learning model is left to the teacher, to suit the characteristics of the teaching materials.
In this paper, only one model for applying the STEM approach is discussed: PjBL (Project Based Learning). The implementation of the PjBL model is considered consistent with STEM and in line with the 2013 curriculum [10]. The PjBL model emphasizes contextual learning through complex activities, such as giving students the freedom to plan and explore learning activities, carry out collaborative projects, and ultimately produce products [11]. The PjBL model uses a project as the object of learning. A project is carried out by students independently or in groups within a certain period, produces a product, and the results are then presented. The PjBL model organizes students into collaborative investigation teams of 4-5 people.
The skills students need and develop in teams are planning, organizing, negotiating, and building consensus about the tasks undertaken by each team member. These skills are essential, as they are the foundation for the success of the project.
The Directorate of Junior High Schools has been developing STEM-based science teaching materials gradually. To date there are only two STEM learning units for junior high school science subjects, namely "classification of matter and its changes: making prototypes" and "energy and electric power: energy-saving miniature homes". Meanwhile, teachers' abilities in designing STEM-based science learning vary greatly. The authors therefore consider it necessary to design STEM-based science learning for junior high school to help teachers, and to disseminate the designs through seminars.
The rest of this paper is organized as follows: Section 2 discusses STEM-based science learning design in the 2013 curriculum. Section 3 concludes this work.
STEM-based Science Learning Design in the 2013 Curriculum
Before designing STEM-based science learning with the PjBL model, we must first equip ourselves with knowledge about STEM and PjBL. Several STEM definitions were presented in the Introduction. To reinforce them: STEM is not only a practical strengthening of education in the STEM fields separately; rather, it develops an educational approach that integrates science, technology, engineering, and mathematics by focusing the educational process on solving real problems in daily and professional life [13].
Learning design with the STEM approach begins with the preparation of a Learning Implementation Plan, consisting of basic competencies, indicators of competency achievement, learning objectives, prerequisite abilities, development of 21st-century skills, development of character education strengthening, material analysis, learning scenarios (approaches, models, methods, and descriptions of activities), learning resources, tools and materials, and assessment.
In this discussion, not all components of the Learning Implementation Plan are covered, only a few. The teaching materials for the STEM approach must, of course, be adjusted to the characteristics of STEM learning. Not all topics in the curriculum can be taught using the STEM approach; suitability depends on their scientific characteristics [1], as shown in Table 1.
Learning Objectives
This section describes the learning objectives according to the indicators formulated.
Analysis of STEM Learning Materials
This section identifies learning processes that are appropriate to the four domains of science, technology, engineering, and mathematics.
The description in the analysis section of STEM learning material, as in Table 1, covers the four domains:
a. Science
3) Investigation with a purpose: experimentation, modeling, learning from cases, managing variables, accurate observation and measuring, seeing patterns;
4) Informed decision making, reporting on and justifying conclusions;
5) Iteration toward understanding;
6) Explaining scientifically;
7) Investigation planning;
8) Analysing and interpreting data from scientific investigation using a range of tools for analysis (tabulation, graphical interpretation, visualization, and statistical analysis), locating patterns.
b. Technology
1) Identifying criteria, problem specifications;
2) "Messing about" with and understanding materials;
3) Investigation for the purpose of application: designing and running models, reading and learning from case studies;
4) Informed decision making, reporting on and justifying design decisions;
5) Iteration toward a good-enough solution;
6) Explaining failures and refining solutions;
7) Prioritizing criteria, trading them off against each other, and optimizing.
c. Engineering
1) Begins with a problem, need, or desire that leads to an engineered solution;
2) Using models and simulation to analyze existing solutions;
3) Engineering investigation to obtain data necessary for identifying criteria and constraints and to test design ideas;
4) Analyzing and interpreting data collected from tests of designs and investigations to locate optimal design solutions.
d. Mathematics
1) Mathematical and computational thinking are fundamental tools for representing variables and their relationships. These ways of thinking allow for making predictions, testing theory, and locating patterns or correlations;
2) Mathematical and computational thinking are integral to design by allowing engineers to run tests and mathematical models to assess the performance of a design solution before prototyping.
Learning Design
The Learning Design section describes in general terms the essential concepts, learning models, Scientific and Engineering Practices, and Crosscutting Concepts used in presenting a topic with the STEM approach, as presented in Table 2.
Prerequisite ability
This section explains the abilities that both the teacher and the students must have before carrying out STEM learning on the selected topics.
F. Development of 21st-Century Skills
This section explains the 21st-century skills trained in learning, namely critical thinking, creative thinking, communication, and collaboration. The description of each skill is as follows [2]: 1. Critical thinking, developed when students take part in designing, making, testing, and improving products; 2. Creative thinking, developed when students take part in designing, making, testing, and improving products; 3. Communication, developed when students take part in discussions; make, test, and improve products; and present them; 4. Collaboration, developed when students take part in designing, making, testing, and improving products.
G. Development of Character Education Strengthening
This section explains the values of Character Education Strengthening trained in learning activities, namely religiosity, nationalism, independence, integrity, and mutual cooperation. The description of each characteristic developed in learning is as follows [2]: a. Religiosity, including gratitude, tolerance, confidence, not imposing one's will, and loving and maintaining the integrity of God's creation; b. Nationalism, including obeying rules, developed when students take part in lessons; c. Independence, including hard work, creativity and innovation, discipline, not giving up easily, and lifelong learning, developed when students design, create, test, and improve products; d. Integrity, including honesty and responsibility, developed when students design, create, test, and improve products; e. Mutual cooperation, including teamwork, developed when students discuss, gather information, design, create, test, and improve products.
Learning Scenarios
Learning steps are described for each class meeting in terms of learning activities, the syntax of the STEM-PjBL learning model, activity descriptions, and the time allocation needed according to the number of meetings that have been determined. The stages of the STEM-PjBL learning process are as follows (see Table 3) [9].
Phase 1. Reflection
The purpose of this first phase is to bring students into the context of the problem and inspire them to immediately begin investigating. This phase is also intended to connect what is known and what needs to be learned.
Phase 2. Research
The second phase is a form of student research. The teacher provides science lessons, selects readings, or uses other methods to gather relevant sources of information. Most learning occurs during this stage, as students progress from an abstract to a concrete understanding of the problem. During the research phase, teachers often guide discussions to determine whether students have developed conceptual and relevant understanding based on the project.
Phase 3. Discovery
The discovery phase generally involves bridging the research and the known information in the preparation of the project, as students work independently to determine what is still unknown. Some STEM-PjBL models divide students into small groups to present possible solutions to problems, collaborate, and build cooperation among group members.
Phase 4. Application
In the application phase, the purpose is to test the product/solution in solving the problem. In some cases, students test products against the conditions set earlier, and the results obtained are used to correct the previous step.
Phase 5. Communication
The final phase of each project is presenting the product/solution by communicating with peers and the class. Presentation is an important step in the learning process to develop communication and collaboration skills as well as the ability to accept and apply constructive feedback. Assessment is often carried out based on the completion of this final phase.
Learning Resources
This section presents learning resources that are used as references in STEM learning in selected topics.
Tools and Materials
This section presents the needs of tools and materials used as references in STEM learning on selected topics.
Learning Assessment
This section identifies the techniques and forms of assessment used to see the achievement of Basic Competencies based on the GPA that has been formulated and the assessment instruments used.
Bibliography
This section presents various references used in compiling units according to the rules of writing bibliography.
Conclusion
Based on the discussion above, it is concluded that the design of STEM-based science learning in the 2013 Curriculum integrates science, technology, engineering, and mathematics; it is designed to develop the skills of critical thinking, creativity, communication, and collaboration, and to strengthen character education in the values of religiosity, nationalism, independence, integrity, and mutual cooperation. | 2019-07-26T10:25:44.861Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "04281ee0599990de053ed03c4920ca1f7d7f1a85",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1233/1/012094",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "086e0d2ab2102283997d435c463f25eff94d8aba",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |